instance_id | base_commit | created_at | environment_setup_commit | hints_text | patch | problem_statement | repo | test_patch | meta | version | install_config | requirements | environment | FAIL_TO_PASS | FAIL_TO_FAIL | PASS_TO_PASS | PASS_TO_FAIL | license_name | __index_level_0__ | num_tokens_patch | before_filepaths
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
repobee__repobee-363 | 47fe1df45f8b1805337090591fa372ae77c9fca2 | 2019-09-01 07:49:36 | f5adada7aa91cfa67735957d606b34d7d14a4118 | codecov-io: # [Codecov](https://codecov.io/gh/repobee/repobee/pull/363?src=pr&el=h1) Report
> Merging [#363](https://codecov.io/gh/repobee/repobee/pull/363?src=pr&el=desc) into [master](https://codecov.io/gh/repobee/repobee/commit/47fe1df45f8b1805337090591fa372ae77c9fca2?src=pr&el=desc) will **decrease** coverage by `0.2%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/repobee/repobee/pull/363?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #363 +/- ##
=========================================
- Coverage 87.81% 87.6% -0.21%
=========================================
Files 21 21
Lines 1608 1581 -27
Branches 292 289 -3
=========================================
- Hits 1412 1385 -27
Misses 172 172
Partials 24 24
```
| [Impacted Files](https://codecov.io/gh/repobee/repobee/pull/363?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/\_repobee/cli.py](https://codecov.io/gh/repobee/repobee/pull/363/diff?src=pr&el=tree#diff-c3JjL19yZXBvYmVlL2NsaS5weQ==) | `98.09% <100%> (-0.16%)` | :arrow_down: |
| [src/\_repobee/main.py](https://codecov.io/gh/repobee/repobee/pull/363/diff?src=pr&el=tree#diff-c3JjL19yZXBvYmVlL21haW4ucHk=) | `100% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/repobee/repobee/pull/363?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/repobee/repobee/pull/363?src=pr&el=footer). Last update [47fe1df...a392f06](https://codecov.io/gh/repobee/repobee/pull/363?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/src/_repobee/cli.py b/src/_repobee/cli.py
index fbb2143..5256e34 100644
--- a/src/_repobee/cli.py
+++ b/src/_repobee/cli.py
@@ -14,7 +14,6 @@ import pathlib
import os
import sys
import re
-from contextlib import contextmanager
from typing import List, Iterable, Optional, Tuple
import logging
@@ -283,37 +282,28 @@ def dispatch_command(
hook_results = {}
if ext_command_names and args.subparser in ext_command_names:
ext_cmd = ext_commands[ext_command_names.index(args.subparser)]
- with _sys_exit_on_expected_error():
- ext_cmd.callback(args, api)
+ ext_cmd.callback(args, api)
elif args.subparser == SETUP_PARSER:
- with _sys_exit_on_expected_error():
- command.setup_student_repos(
- args.master_repo_urls, args.students, api
- )
+ command.setup_student_repos(args.master_repo_urls, args.students, api)
elif args.subparser == UPDATE_PARSER:
- with _sys_exit_on_expected_error():
- command.update_student_repos(
- args.master_repo_urls, args.students, api, issue=args.issue
- )
+ command.update_student_repos(
+ args.master_repo_urls, args.students, api, issue=args.issue
+ )
elif args.subparser == OPEN_ISSUE_PARSER:
- with _sys_exit_on_expected_error():
- command.open_issue(
- args.issue, args.master_repo_names, args.students, api
- )
+ command.open_issue(
+ args.issue, args.master_repo_names, args.students, api
+ )
elif args.subparser == CLOSE_ISSUE_PARSER:
- with _sys_exit_on_expected_error():
- command.close_issue(
- args.title_regex, args.master_repo_names, args.students, api
- )
+ command.close_issue(
+ args.title_regex, args.master_repo_names, args.students, api
+ )
elif args.subparser == MIGRATE_PARSER:
- with _sys_exit_on_expected_error():
- command.migrate_repos(args.master_repo_urls, api)
+ command.migrate_repos(args.master_repo_urls, api)
elif args.subparser == CLONE_PARSER:
- with _sys_exit_on_expected_error():
- hook_results = command.clone_repos(
- args.master_repo_names, args.students, api
- )
- LOGGER.info(formatters.format_hook_results_output(hook_results))
+ hook_results = command.clone_repos(
+ args.master_repo_names, args.students, api
+ )
+ LOGGER.info(formatters.format_hook_results_output(hook_results))
elif args.subparser == VERIFY_PARSER:
plug.manager.hook.get_api_class().verify_settings(
args.user,
@@ -323,42 +313,35 @@ def dispatch_command(
args.master_org_name,
)
elif args.subparser == LIST_ISSUES_PARSER:
- with _sys_exit_on_expected_error():
- hook_results = command.list_issues(
- args.master_repo_names,
- args.students,
- api,
- state=args.state,
- title_regex=args.title_regex or "",
- show_body=args.show_body,
- author=args.author,
- )
+ hook_results = command.list_issues(
+ args.master_repo_names,
+ args.students,
+ api,
+ state=args.state,
+ title_regex=args.title_regex or "",
+ show_body=args.show_body,
+ author=args.author,
+ )
elif args.subparser == ASSIGN_REVIEWS_PARSER:
- with _sys_exit_on_expected_error():
- command.assign_peer_reviews(
- args.master_repo_names,
- args.students,
- args.num_reviews,
- args.issue,
- api,
- )
+ command.assign_peer_reviews(
+ args.master_repo_names,
+ args.students,
+ args.num_reviews,
+ args.issue,
+ api,
+ )
elif args.subparser == PURGE_REVIEW_TEAMS_PARSER:
- with _sys_exit_on_expected_error():
- command.purge_review_teams(
- args.master_repo_names, args.students, api
- )
+ command.purge_review_teams(args.master_repo_names, args.students, api)
elif args.subparser == SHOW_CONFIG_PARSER:
- with _sys_exit_on_expected_error():
- command.show_config()
+ command.show_config()
elif args.subparser == CHECK_REVIEW_PROGRESS_PARSER:
- with _sys_exit_on_expected_error():
- command.check_peer_review_progress(
- args.master_repo_names,
- args.students,
- args.title_regex,
- args.num_reviews,
- api,
- )
+ command.check_peer_review_progress(
+ args.master_repo_names,
+ args.students,
+ args.title_regex,
+ args.num_reviews,
+ api,
+ )
else:
raise exception.ParseError(
"Illegal value for subparser: {}. "
@@ -1006,30 +989,6 @@ def _add_traceback_arg(parser):
)
-@contextmanager
-def _sys_exit_on_expected_error():
- try:
- yield
- except exception.PushFailedError as exc:
- LOGGER.error(
- "There was an error pushing to {}. "
- "Verify that your token has adequate access.".format(exc.url)
- )
- sys.exit(1)
- except exception.CloneFailedError as exc:
- LOGGER.error(
- "There was an error cloning from {}. "
- "Does the repo really exist?".format(exc.url)
- )
- sys.exit(1)
- except exception.GitError:
- LOGGER.error("Something went wrong with git. See the logs for info.")
- sys.exit(1)
- except exception.APIError as exc:
- LOGGER.error("Exiting beacuse of {.__class__.__name__}".format(exc))
- sys.exit(1)
-
-
def _extract_groups(args: argparse.Namespace) -> List[str]:
"""Extract groups from args namespace.`
diff --git a/src/_repobee/main.py b/src/_repobee/main.py
index 6aec08f..946bb44 100644
--- a/src/_repobee/main.py
+++ b/src/_repobee/main.py
@@ -91,6 +91,7 @@ def main(sys_args: List[str]):
LOGGER.exception("Critical exception")
else:
LOGGER.error("{.__class__.__name__}: {}".format(exc, str(exc)))
+ sys.exit(1)
if __name__ == "__main__":
| Remove _sys_exit_on_expected_error context manager in cli
It just removes information, I have no idea of why I put it there in the first place. | repobee/repobee | diff --git a/tests/unit_tests/test_cli.py b/tests/unit_tests/test_cli.py
index 3f21414..8b8724d 100644
--- a/tests/unit_tests/test_cli.py
+++ b/tests/unit_tests/test_cli.py
@@ -267,20 +267,6 @@ class TestDispatchCommand:
that there are no crashes, does not validate any other behavior!"""
cli.dispatch_command(parsed_args_all_subparsers, api_instance_mock)
- def test_expected_exception_results_in_system_exit(
- self,
- parsed_args_all_subparsers,
- api_instance_mock,
- command_all_raise_mock,
- ):
- """Test that any of the expected exceptions results in SystemExit."""
- if parsed_args_all_subparsers.subparser == cli.VERIFY_PARSER:
- pytest.skip(
- "verify-settings parser is exempt from the sysexit behavior"
- )
- with pytest.raises(SystemExit):
- cli.dispatch_command(parsed_args_all_subparsers, api_instance_mock)
-
def test_setup_student_repos_called_with_correct_args(
self, command_mock, api_instance_mock
):
diff --git a/tests/unit_tests/test_main.py b/tests/unit_tests/test_main.py
index e750823..7bd11c5 100644
--- a/tests/unit_tests/test_main.py
+++ b/tests/unit_tests/test_main.py
@@ -97,23 +97,24 @@ def test_happy_path(
)
-def test_does_not_raise_on_exception_in_parsing(
+def test_exit_status_1_on_exception_in_parsing(
api_instance_mock, parse_args_mock, dispatch_command_mock, no_config_mock
):
- """should just log, but not raise."""
msg = "some nice error message"
parse_args_mock.side_effect = raise_(Exception(msg))
sys_args = ["just some made up arguments".split()]
- main.main(sys_args)
+ with pytest.raises(SystemExit) as exc_info:
+ main.main(sys_args)
+ assert exc_info.value.code == 1
parse_args_mock.assert_called_once_with(
sys_args[1:], show_all_opts=False, ext_commands=DEFAULT_EXT_COMMANDS
)
assert not dispatch_command_mock.called
-def test_does_not_raise_on_exception_in_handling_parsed_args(
+def test_exit_status_1_on_exception_in_handling_parsed_args(
api_instance_mock, parse_args_mock, dispatch_command_mock
):
"""should just log, but not raise."""
@@ -121,7 +122,13 @@ def test_does_not_raise_on_exception_in_handling_parsed_args(
dispatch_command_mock.side_effect = raise_(Exception(msg))
sys_args = ["just some made up arguments".split()]
- main.main(sys_args)
+ with pytest.raises(SystemExit) as exc_info:
+ main.main(sys_args)
+
+ assert exc_info.value.code == 1
+ parse_args_mock.assert_called_once_with(
+ sys_args[1:], show_all_opts=False, ext_commands=DEFAULT_EXT_COMMANDS
+ )
def test_plugins_args(
@@ -271,8 +278,10 @@ def test_logs_traceback_on_exception_in_dispatch_if_traceback(
sys_args = ["repobee", *CLONE_ARGS, "--traceback"]
dispatch_command_mock.side_effect = lambda *args, **kwargs: raise_()
- main.main(sys_args)
+ with pytest.raises(SystemExit) as exc_info:
+ main.main(sys_args)
+ assert exc_info.value.code == 1
assert logger_exception_mock.called
parse_args_mock.assert_called_once_with(
[*CLONE_ARGS, "--traceback"],
@@ -312,3 +321,19 @@ def test_show_all_opts_correctly_separated(
parse_preparser_options_mock.assert_called_once_with(
[cli.PRE_PARSER_SHOW_ALL_OPTS]
)
+
+
+def test_non_zero_exit_status_on_exception(
+ parse_args_mock, parse_preparser_options_mock, no_config_mock
+):
+ def raise_(*args, **kwargs):
+ raise ValueError()
+
+ parse_args_mock.side_effect = raise_
+
+ sys_args = ["repobee", *CLONE_ARGS]
+
+ with pytest.raises(SystemExit) as exc_info:
+ main.main(sys_args)
+
+ assert exc_info.value.code == 1
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 2
} | 2.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[TEST]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | appdirs==1.4.4
backports.zoneinfo==0.2.1
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
charset-normalizer==3.4.1
codecov==2.1.13
colored==1.4.4
coverage==7.2.7
cryptography==44.0.2
daiquiri==3.2.1
dateparser==1.2.0
Deprecated==1.2.18
exceptiongroup==1.2.2
humanize==4.6.0
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
maya==0.6.1
packaging==24.0
pendulum==2.1.2
pluggy==1.2.0
pycparser==2.21
PyGithub==2.3.0
PyJWT==2.8.0
PyNaCl==1.5.0
pytest==7.4.4
pytest-cov==4.1.0
pytest-mock==3.11.1
python-dateutil==2.9.0.post0
python-gitlab==1.9.0
python-json-logger==3.0.1
pytz==2025.2
pytzdata==2020.1
regex==2024.4.16
-e git+https://github.com/repobee/repobee.git@47fe1df45f8b1805337090591fa372ae77c9fca2#egg=repobee
repobee-plug==0.10.0
requests==2.31.0
six==1.17.0
snaptime==0.2.4
tomli==2.0.1
typing_extensions==4.7.1
tzlocal==5.1
urllib3==2.0.7
wrapt==1.16.0
zipp==3.15.0
| name: repobee
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- appdirs==1.4.4
- backports-zoneinfo==0.2.1
- cffi==1.15.1
- charset-normalizer==3.4.1
- codecov==2.1.13
- colored==1.4.4
- coverage==7.2.7
- cryptography==44.0.2
- daiquiri==3.2.1
- dateparser==1.2.0
- deprecated==1.2.18
- exceptiongroup==1.2.2
- humanize==4.6.0
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- maya==0.6.1
- packaging==24.0
- pendulum==2.1.2
- pluggy==1.2.0
- pycparser==2.21
- pygithub==2.3.0
- pyjwt==2.8.0
- pynacl==1.5.0
- pytest==7.4.4
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- python-dateutil==2.9.0.post0
- python-gitlab==1.9.0
- python-json-logger==3.0.1
- pytz==2025.2
- pytzdata==2020.1
- regex==2024.4.16
- repobee==2.2.0
- repobee-plug==0.10.0
- requests==2.31.0
- six==1.17.0
- snaptime==0.2.4
- tomli==2.0.1
- typing-extensions==4.7.1
- tzlocal==5.1
- urllib3==2.0.7
- wrapt==1.16.0
- zipp==3.15.0
prefix: /opt/conda/envs/repobee
| [
"tests/unit_tests/test_main.py::test_exit_status_1_on_exception_in_parsing",
"tests/unit_tests/test_main.py::test_exit_status_1_on_exception_in_handling_parsed_args",
"tests/unit_tests/test_main.py::test_logs_traceback_on_exception_in_dispatch_if_traceback",
"tests/unit_tests/test_main.py::test_non_zero_exit_status_on_exception"
] | [] | [
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_writes_hook_results_correctly[clone-_repobee.command.clone_repos]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_writes_hook_results_correctly[list-issues-_repobee.command.list_issues]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_does_not_write_hook_results_if_there_are_none",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_raises_on_invalid_subparser_value",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[setup]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[update]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[clone]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[migrate]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[open-issues]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[close-issues]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[list-issues]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[verify-settings]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[assign-reviews]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[end-reviews]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[show-config]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_no_crash_on_valid_args[check-reviews]",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_setup_student_repos_called_with_correct_args",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_update_student_repos_called_with_correct_args",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_open_issue_called_with_correct_args",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_close_issue_called_with_correct_args",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_migrate_repos_called_with_correct_args",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_clone_repos_called_with_correct_args",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_verify_settings_called_with_correct_args",
"tests/unit_tests/test_cli.py::TestDispatchCommand::test_verify_settings_called_with_master_org_name",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[setup]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[update]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[clone]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[migrate]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[open-issues]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[close-issues]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[list-issues]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[verify-settings]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[assign-reviews]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[end-reviews]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[show-config]",
"tests/unit_tests/test_cli.py::test_help_calls_add_arguments[check-reviews]",
"tests/unit_tests/test_cli.py::test_create_parser_for_docs",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_show_all_opts_true_shows_configured_args",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_show_all_opts_false_hides_configured_args",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_raises_on_invalid_org",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_raises_on_bad_credentials",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_raises_on_invalid_base_url",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_master_org_overrides_target_org_for_master_repos[setup]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_master_org_overrides_target_org_for_master_repos[update]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_master_org_name_defaults_to_org_name[setup]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_master_org_name_defaults_to_org_name[update]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_token_env_variable_picked_up[setup]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_token_env_variable_picked_up[update]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_token_cli_arg_picked_up[setup]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_token_cli_arg_picked_up[update]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_raises_on_non_tls_api_url[http://some_enterprise_host/api/v3]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_raises_on_non_tls_api_url[ftp://some_enterprise_host/api/v3]",
"tests/unit_tests/test_cli.py::TestBaseParsing::test_raises_on_non_tls_api_url[some_enterprise_host/api/v3]",
"tests/unit_tests/test_cli.py::TestExtensionCommands::test_parse_ext_command_that_does_not_require_api",
"tests/unit_tests/test_cli.py::TestExtensionCommands::test_parse_ext_command_that_requires_api",
"tests/unit_tests/test_cli.py::TestExtensionCommands::test_dispatch_ext_command_that_does_not_require_api",
"tests/unit_tests/test_cli.py::TestExtensionCommands::test_dispatch_ext_command_that_requires_api",
"tests/unit_tests/test_cli.py::TestExtensionCommands::test_parse_ext_command_that_requires_base_parsers",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_raises_if_students_file_is_not_a_file[setup|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_raises_if_students_file_is_not_a_file[update|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_raises_if_students_file_is_not_a_file[close-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_raises_if_students_file_is_not_a_file[open-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parser_listing_students[setup|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parser_listing_students[update|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parser_listing_students[close-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parser_listing_students[open-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parser_student_file[setup|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parser_student_file[update|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parser_student_file[close-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parser_student_file[open-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_student_parsers_raise_on_empty_student_file[setup|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_student_parsers_raise_on_empty_student_file[update|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_student_parsers_raise_on_empty_student_file[close-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_student_parsers_raise_on_empty_student_file[open-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parsers_raise_if_both_file_and_listing[setup|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parsers_raise_if_both_file_and_listing[update|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parsers_raise_if_both_file_and_listing[close-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_parsers_raise_if_both_file_and_listing[open-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_student_groups_parsed_correcly[setup|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_student_groups_parsed_correcly[update|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_student_groups_parsed_correcly[close-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_student_groups_parsed_correcly[open-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_raises_if_generated_team_name_too_long[setup|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_raises_if_generated_team_name_too_long[update|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_raises_if_generated_team_name_too_long[close-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestStudentParsing::test_raises_if_generated_team_name_too_long[open-issues|['--mn',",
"tests/unit_tests/test_cli.py::TestConfig::test_full_config[setup-extra_args0]",
"tests/unit_tests/test_cli.py::TestConfig::test_full_config[update-extra_args1]",
"tests/unit_tests/test_cli.py::TestConfig::test_full_config[open-issues-extra_args2]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_is_required[--bu]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_is_required[-u]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_is_required[--sf]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_is_required[-o]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_is_required[--mo]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_can_be_specified[--bu]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_can_be_specified[-u]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_can_be_specified[--sf]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_can_be_specified[-o]",
"tests/unit_tests/test_cli.py::TestConfig::test_missing_option_can_be_specified[--mo]",
"tests/unit_tests/test_cli.py::TestSetupAndUpdateParsers::test_happy_path[setup]",
"tests/unit_tests/test_cli.py::TestSetupAndUpdateParsers::test_happy_path[update]",
"tests/unit_tests/test_cli.py::TestSetupAndUpdateParsers::test_finds_local_repo[setup]",
"tests/unit_tests/test_cli.py::TestSetupAndUpdateParsers::test_finds_local_repo[update]",
"tests/unit_tests/test_cli.py::TestMigrateParser::test_happy_path",
"tests/unit_tests/test_cli.py::TestVerifyParser::test_happy_path",
"tests/unit_tests/test_cli.py::TestCloneParser::test_happy_path",
"tests/unit_tests/test_cli.py::TestCloneParser::test_no_other_parser_gets_parse_hook[setup-extra_args0]",
"tests/unit_tests/test_cli.py::TestCloneParser::test_no_other_parser_gets_parse_hook[update-extra_args1]",
"tests/unit_tests/test_cli.py::TestCloneParser::test_no_other_parser_gets_parse_hook[open-issues-extra_args2]",
"tests/unit_tests/test_cli.py::TestCloneParser::test_no_other_parser_gets_parse_hook[close-issues-extra_args3]",
"tests/unit_tests/test_cli.py::TestCloneParser::test_no_other_parser_gets_parse_hook[verify-settings-extra_args4]",
"tests/unit_tests/test_cli.py::TestCloneParser::test_no_other_parser_gets_parse_hook[migrate-extra_args5]",
"tests/unit_tests/test_cli.py::TestShowConfigParser::test_happy_path",
"tests/unit_tests/test_cli.py::TestCommandDeprecation::test_deprecated_commands_parsed_to_current_commands[assign-peer-reviews-assign-reviews-sys_args0]",
"tests/unit_tests/test_cli.py::TestCommandDeprecation::test_deprecated_commands_parsed_to_current_commands[purge-peer-review-teams-end-reviews-sys_args1]",
"tests/unit_tests/test_cli.py::TestCommandDeprecation::test_deprecated_commands_parsed_to_current_commands[check-peer-review-progress-check-reviews-sys_args2]",
"tests/unit_tests/test_main.py::test_happy_path",
"tests/unit_tests/test_main.py::test_plugins_args",
"tests/unit_tests/test_main.py::test_no_plugins_arg",
"tests/unit_tests/test_main.py::test_no_plugins_with_configured_plugins",
"tests/unit_tests/test_main.py::test_configured_plugins_are_loaded",
"tests/unit_tests/test_main.py::test_plugin_with_subparser_name",
"tests/unit_tests/test_main.py::test_plug_arg_incompatible_with_no_plugins",
"tests/unit_tests/test_main.py::test_invalid_plug_options",
"tests/unit_tests/test_main.py::test_does_not_raise_on_exception_in_dispatch",
"tests/unit_tests/test_main.py::test_show_all_opts_correctly_separated"
] | [] | MIT License | 5,310 | 1,474 | [
"src/_repobee/cli.py",
"src/_repobee/main.py"
] |
dwavesystems__dwave-cloud-client-325 | 0364541298402a481562060759d3716abd220c6a | 2019-09-02 19:58:13 | 0835a006e6a12bea18ff470d068ab0203be1d719 | diff --git a/dwave/cloud/client.py b/dwave/cloud/client.py
index ff37d7e..886eb16 100644
--- a/dwave/cloud/client.py
+++ b/dwave/cloud/client.py
@@ -1030,18 +1030,22 @@ class Client(object):
def _handle_problem_status(self, message, future):
"""Handle the results of a problem submission or results request.
- This method checks the status of the problem and puts it in the correct queue.
+ This method checks the status of the problem and puts it in the correct
+ queue.
Args:
- message (dict): Update message from the SAPI server wrt. this problem.
- future `Future`: future corresponding to the problem
+ message (dict):
+ Update message from the SAPI server wrt. this problem.
+ future (:class:`dwave.cloud.computation.Future`:
+ future corresponding to the problem
Note:
This method is always run inside of a daemon thread.
"""
try:
logger.trace("Handling response: %r", message)
- logger.debug("Handling response for %s with status %s", message.get('id'), message.get('status'))
+ logger.debug("Handling response for %s with status %s",
+ message.get('id'), message.get('status'))
# Handle errors in batch mode
if 'error_code' in message and 'error_msg' in message:
@@ -1273,6 +1277,7 @@ class Client(object):
else:
logger.trace("Skipping non-positive delay of %.2f sec", delay)
+ # execute and handle the polling request
try:
logger.trace("Executing poll API request")
@@ -1283,11 +1288,27 @@ class Client(object):
if response.status_code == 401:
raise SolverAuthenticationError()
- response.raise_for_status()
- statuses = response.json()
- for status in statuses:
- self._handle_problem_status(status, frame_futures[status['id']])
+ # assume 5xx errors are transient, and don't abort polling
+ if 500 <= response.status_code < 600:
+ logger.warning(
+ "Received an internal server error response on "
+ "problem status polling request (%s). Assuming "
+ "error is transient, and resuming polling.",
+ response.status_code)
+ # add all futures in this frame back to the polling queue
+ # XXX: logic split between `_handle_problem_status` and here
+ for future in frame_futures.values():
+ self._poll(future)
+
+ else:
+ # otherwise, fail
+ response.raise_for_status()
+
+ # or handle a successful request
+ statuses = response.json()
+ for status in statuses:
+ self._handle_problem_status(status, frame_futures[status['id']])
except BaseException as exception:
if not isinstance(exception, SolverAuthenticationError):
| Assume 5xx responses during problem polling are transient errors, don't abort polling
Although this can be incorrect, as a good first approximation we can assume server-side errors (5xx) are temporary, meaning we should not stop polling, but rather retry later.
At the same time we're assuming client-side errors (4xx) are unrecoverable by retrying. | dwavesystems/dwave-cloud-client | diff --git a/tests/test_mock_submission.py b/tests/test_mock_submission.py
index eb7ae73..1b6ecb2 100644
--- a/tests/test_mock_submission.py
+++ b/tests/test_mock_submission.py
@@ -21,11 +21,13 @@ import json
import unittest
import itertools
import threading
+import collections
from datetime import datetime, timedelta
from dateutil.tz import UTC
from dateutil.parser import parse as parse_datetime
from requests.structures import CaseInsensitiveDict
+from requests.exceptions import HTTPError
from dwave.cloud.utils import evaluate_ising, generate_random_ising_problem
from dwave.cloud.client import Client, Solver
@@ -149,17 +151,24 @@ def continue_reply(id_, solver_name, now=None, eta_min=None, eta_max=None):
return json.dumps(resp)
-def choose_reply(path, replies, date=None):
+def choose_reply(path, replies, statuses=None, date=None):
"""Choose the right response based on the path and make a mock response."""
+ if statuses is None:
+ statuses = collections.defaultdict(lambda: iter([200]))
+
if date is None:
date = datetime_in_future(0)
if path in replies:
response = mock.Mock(['json', 'raise_for_status', 'headers'])
- response.status_code = 200
+ response.status_code = next(statuses[path])
response.json.side_effect = lambda: json.loads(replies[path])
response.headers = CaseInsensitiveDict({'Date': date.isoformat()})
+ def raise_for_status():
+ if not 200 <= response.status_code < 300:
+ raise HTTPError(response.status_code)
+ response.raise_for_status = raise_for_status
return response
else:
raise NotImplementedError(path)
@@ -475,6 +484,47 @@ class MockSubmission(_QueryTest):
future = solver.sample_qubo({})
future.result()
+ # Reduce the number of poll and submission threads so that the system can be tested
+ @mock.patch.object(Client, "_POLL_THREAD_COUNT", 1)
+ @mock.patch.object(Client, "_SUBMISSION_THREAD_COUNT", 1)
+ def test_polling_recovery_after_5xx(self):
+ "Polling shouldn't be aborted on 5xx responses."
+
+ with Client('endpoint', 'token') as client:
+ client.session = mock.Mock()
+ # on submit, return status pending
+ client.session.post = lambda path, _: choose_reply(path, {
+ 'endpoint/problems/': '[%s]' % continue_reply('123', 'abc123')
+ })
+ # on first and second status poll, fail with 503 and 504
+ # on third status poll, return completed
+ statuses = iter([503, 504])
+ def continue_then_complete(path, state={'count': 0}):
+ state['count'] += 1
+ if state['count'] < 3:
+ return choose_reply(path, replies={
+ 'endpoint/problems/?id=123': '[%s]' % continue_reply('123', 'abc123'),
+ 'endpoint/problems/123/': continue_reply('123', 'abc123')
+ }, statuses={
+ 'endpoint/problems/?id=123': statuses,
+ 'endpoint/problems/123/': statuses
+ })
+ else:
+ return choose_reply(path, {
+ 'endpoint/problems/?id=123': '[%s]' % complete_no_answer_reply('123', 'abc123'),
+ 'endpoint/problems/123/': complete_reply('123', 'abc123')
+ })
+
+ client.session.get = continue_then_complete
+
+ solver = Solver(client, solver_data('abc123'))
+
+ future = solver.sample_qubo({})
+ future.result()
+
+ # after third poll, back-off interval should be 4 x initial back-off
+ self.assertEqual(future._poll_backoff, Client._POLL_BACKOFF_MIN * 2**2)
+
class DeleteEvent(Exception):
"""Throws exception when mocked client submits an HTTP DELETE request."""
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 1
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
click==8.0.4
coverage==6.2
dimod==0.10.10
-e git+https://github.com/dwavesystems/dwave-cloud-client.git@0364541298402a481562060759d3716abd220c6a#egg=dwave_cloud_client
execnet==1.9.0
homebase==1.0.1
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
mock==5.2.0
numpy==1.19.5
packaging==21.3
plucky==0.4.3
pluggy==1.0.0
py==1.11.0
pyparsing==2.4.7
PySocks==1.7.1
pytest==7.0.1
pytest-asyncio==0.16.0
pytest-cov==4.0.0
pytest-mock==3.6.1
pytest-xdist==3.0.2
python-dateutil==2.9.0.post0
requests==2.27.1
requests-mock==1.12.1
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: dwave-cloud-client
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- click==8.0.4
- coverage==6.2
- dimod==0.10.10
- execnet==1.9.0
- homebase==1.0.1
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- mock==5.2.0
- numpy==1.19.5
- packaging==21.3
- plucky==0.4.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==2.4.7
- pysocks==1.7.1
- pytest==7.0.1
- pytest-asyncio==0.16.0
- pytest-cov==4.0.0
- pytest-mock==3.6.1
- pytest-xdist==3.0.2
- python-dateutil==2.9.0.post0
- requests==2.27.1
- requests-mock==1.12.1
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/dwave-cloud-client
| [
"tests/test_mock_submission.py::MockSubmission::test_polling_recovery_after_5xx"
] | [] | [
"tests/test_mock_submission.py::MockSubmission::test_eta_min_is_ignored_on_first_poll",
"tests/test_mock_submission.py::MockSubmission::test_exponential_backoff_polling",
"tests/test_mock_submission.py::MockSubmission::test_immediate_polling_with_local_clock_unsynced",
"tests/test_mock_submission.py::MockSubmission::test_immediate_polling_without_eta_min",
"tests/test_mock_submission.py::MockSubmission::test_submit_cancel_reply",
"tests/test_mock_submission.py::MockSubmission::test_submit_continue_then_error_reply",
"tests/test_mock_submission.py::MockSubmission::test_submit_continue_then_ok_and_error_reply",
"tests/test_mock_submission.py::MockSubmission::test_submit_continue_then_ok_reply",
"tests/test_mock_submission.py::MockSubmission::test_submit_error_reply",
"tests/test_mock_submission.py::MockSubmission::test_submit_immediate_error_reply",
"tests/test_mock_submission.py::MockSubmission::test_submit_null_reply",
"tests/test_mock_submission.py::MockSubmission::test_submit_ok_reply",
"tests/test_mock_submission.py::MockCancel::test_cancel_with_id",
"tests/test_mock_submission.py::MockCancel::test_cancel_without_id"
] | [] | Apache License 2.0 | 5,316 | 677 | [
"dwave/cloud/client.py"
] |
|
encode__httpx-310 | b8c5e7a8528978e646c28a5837df5085ea3803fc | 2019-09-03 13:41:11 | a05ba2e9148c3f74c80c68a727b7e52b3d751c8c |
diff --git a/httpx/middleware.py b/httpx/middleware.py
index 4ed750e..aa994db 100644
--- a/httpx/middleware.py
+++ b/httpx/middleware.py
@@ -88,7 +88,7 @@ class RedirectMiddleware(BaseMiddleware):
) -> AsyncRequest:
method = self.redirect_method(request, response)
url = self.redirect_url(request, response)
- headers = self.redirect_headers(request, url) # TODO: merge headers?
+ headers = self.redirect_headers(request, url, method) # TODO: merge headers?
content = self.redirect_content(request, method)
cookies = Cookies(self.cookies)
cookies.update(request.cookies)
@@ -138,15 +138,24 @@ class RedirectMiddleware(BaseMiddleware):
return url
- def redirect_headers(self, request: AsyncRequest, url: URL) -> Headers:
+ def redirect_headers(self, request: AsyncRequest, url: URL, method: str) -> Headers:
"""
- Strip Authorization headers when responses are redirected away from
- the origin.
+ Return the headers that should be used for the redirect request.
"""
headers = Headers(request.headers)
+
if url.origin != request.url.origin:
+ # Strip Authorization headers when responses are redirected away from
+ # the origin.
del headers["Authorization"]
- del headers["host"]
+ del headers["Host"]
+
+ if method != request.method and method == "GET":
+ # If we've switch to a 'GET' request, then strip any headers which
+ # are only relevant to the request body.
+ del headers["Content-Length"]
+ del headers["Transfer-Encoding"]
+
return headers
def redirect_content(self, request: AsyncRequest, method: str) -> bytes:
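The two rules applied by the patched `redirect_headers` above can be exercised on their own. This is only a sketch: it uses a plain `dict` and a `same_origin` flag in place of httpx's `Headers` and `URL.origin`, so just the stripping logic mirrors the patch:

```python
def redirect_headers(headers, same_origin, old_method, new_method):
    """Apply the patch's two stripping rules to a copy of `headers`."""
    headers = dict(headers)  # work on a copy, as the middleware does
    if not same_origin:
        # Rule 1: never send credentials (or the old Host) across origins.
        headers.pop("Authorization", None)
        headers.pop("Host", None)
    if new_method != old_method and new_method == "GET":
        # Rule 2: a redirect that turns the request into a GET drops the
        # body, so the body-framing headers must go too.
        headers.pop("Content-Length", None)
        headers.pop("Transfer-Encoding", None)
    return headers
```

Under these rules a 303 that turns a POST into a GET loses its `Content-Length`, while a 308 that keeps the method keeps it, which is the end-to-end behaviour `test_body_redirect` and `test_no_body_redirect` in the test patch assert.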
| h11._util.LocalProtocolError: Too little data for declared Content-Length
client POC:
```python
import asyncio
import logging
import httpx
logging.basicConfig(level=logging.DEBUG)
async def test():
client = httpx.AsyncClient()
url = 'http://127.0.0.1:8000/debug'
resp = await client.post(url, data='debug', allow_redirects=True)
print(resp.content)
if __name__ == '__main__':
asyncio.get_event_loop().run_until_complete(test())
```
app POC:
```python
from starlette.responses import RedirectResponse
from starlette.applications import Starlette
import uvicorn
app = Starlette(debug=True)
@app.route('/debug', methods=['POST', 'GET'])
async def debug(request):
return RedirectResponse(url='https://httpbin.org/headers', status_code=302)
if __name__ == '__main__':
uvicorn.run(app)
```
log:
```
DEBUG:asyncio:Using selector: KqueueSelector
DEBUG:httpx.dispatch.connection_pool:new_connection connection=HTTPConnection(origin=Origin(scheme='http' host='127.0.0.1' port=8000))
DEBUG:httpx.dispatch.connection:start_connect host='127.0.0.1' port=8000 timeout=TimeoutConfig(timeout=5.0)
DEBUG:httpx.dispatch.connection:connected http_version='HTTP/1.1'
DEBUG:httpx.dispatch.http11:send_headers method='POST' target='/debug' headers=Headers({'host': '127.0.0.1:8000', 'user-agent': 'python-httpx/0.7.2', 'accept': '*/*', 'content-length': '5', 'accept-encoding': 'gzip, deflate', 'connection': 'keep-alive'})
DEBUG:httpx.dispatch.http11:receive_event event=NEED_DATA
DEBUG:httpx.dispatch.http11:send_data data=Data(<5 bytes>)
DEBUG:httpx.dispatch.http11:receive_event event=Response(status_code=302, headers=[(b'date', b'Tue, 03 Sep 2019 03:50:53 GMT'), (b'server', b'uvicorn'), (b'location', b'https://httpbin.org/headers'), (b'transfer-encoding', b'chunked')], http_version=b'1.1', reason=b'Found')
DEBUG:httpx.dispatch.http11:receive_event event=EndOfMessage(headers=[])
DEBUG:httpx.dispatch.http11:response_closed our_state=DONE their_state=DONE
DEBUG:httpx.dispatch.connection_pool:release_connection connection=HTTPConnection(origin=Origin(scheme='http' host='127.0.0.1' port=8000))
DEBUG:httpx.dispatch.connection_pool:new_connection connection=HTTPConnection(origin=Origin(scheme='https' host='httpbin.org' port=443))
DEBUG:httpx.dispatch.connection:start_connect host='httpbin.org' port=443 timeout=TimeoutConfig(timeout=5.0)
DEBUG:httpx.dispatch.connection:connected http_version='HTTP/1.1'
DEBUG:httpx.dispatch.http11:send_headers method='GET' target='/headers' headers=Headers({'host': 'httpbin.org', 'user-agent': 'python-httpx/0.7.2', 'accept': '*/*', 'content-length': '5', 'accept-encoding': 'gzip, deflate', 'connection': 'keep-alive'})
DEBUG:httpx.dispatch.http11:receive_event event=NEED_DATA
Traceback (most recent call last):
File "http3/httpx/dispatch/http11.py", line 53, in send
http_version, status_code, headers = await self._receive_response(timeout)
File "http3/httpx/dispatch/http11.py", line 133, in _receive_response
event = await self._receive_event(timeout)
File "http3/httpx/dispatch/http11.py", line 174, in _receive_event
self.READ_NUM_BYTES, timeout, flag=self.timeout_flag
File "http3/httpx/concurrency/asyncio.py", line 92, in read
raise ReadTimeout() from None
http3.httpx.exceptions.ReadTimeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test_proto.py", line 16, in <module>
asyncio.get_event_loop().run_until_complete(test())
File "/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "test_proto.py", line 11, in test
resp = await client.post(url, data='debug', allow_redirects=True)
File "http3/httpx/client.py", line 415, in post
trust_env=trust_env,
File "http3/httpx/client.py", line 566, in request
trust_env=trust_env,
File "http3/httpx/client.py", line 237, in send
return await get_response(request)
File "http3/httpx/middleware.py", line 72, in __call__
return await self(next_request, get_response)
File "http3/httpx/middleware.py", line 62, in __call__
response = await get_response(request)
File "http3/httpx/client.py", line 202, in get_response
request, verify=verify, cert=cert, timeout=timeout
File "http3/httpx/dispatch/connection_pool.py", line 126, in send
raise exc
File "http3/httpx/dispatch/connection_pool.py", line 121, in send
request, verify=verify, cert=cert, timeout=timeout
File "http3/httpx/dispatch/connection.py", line 65, in send
response = await self.h11_connection.send(request, timeout=timeout)
File "http3/httpx/dispatch/http11.py", line 53, in send
http_version, status_code, headers = await self._receive_response(timeout)
File "http3/httpx/concurrency/asyncio.py", line 285, in __aexit__
await self.task
File "http3/httpx/dispatch/http11.py", line 108, in _send_request_data
await self._send_event(event, timeout)
File "http3/httpx/dispatch/http11.py", line 123, in _send_event
bytes_to_send = self.h11_state.send(event)
File "site-packages/h11/_connection.py", line 469, in send
data_list = self.send_with_data_passthrough(event)
File "site-packages/h11/_connection.py", line 502, in send_with_data_passthrough
writer(event, data_list.append)
File "site-packages/h11/_writers.py", line 79, in __call__
self.send_eom(event.headers, write)
File "site-packages/h11/_writers.py", line 102, in send_eom
raise LocalProtocolError("Too little data for declared Content-Length")
h11._util.LocalProtocolError: Too little data for declared Content-Length
```
| encode/httpx | diff --git a/tests/client/test_redirects.py b/tests/client/test_redirects.py
index 7daf6c8..d732ce4 100644
--- a/tests/client/test_redirects.py
+++ b/tests/client/test_redirects.py
@@ -85,9 +85,15 @@ class MockDispatch(AsyncDispatcher):
codes.PERMANENT_REDIRECT, headers=headers, request=request
)
+ elif request.url.path == "/redirect_no_body":
+ await request.read()
+ headers = {"location": "/redirect_body_target"}
+ return AsyncResponse(codes.SEE_OTHER, headers=headers, request=request)
+
elif request.url.path == "/redirect_body_target":
content = await request.read()
- body = json.dumps({"body": content.decode()}).encode()
+ headers = dict(request.headers.items())
+ body = json.dumps({"body": content.decode(), "headers": headers}).encode()
return AsyncResponse(codes.OK, content=body, request=request)
elif request.url.path == "/cross_subdomain":
@@ -243,12 +249,29 @@ async def test_same_domain_redirect(backend):
async def test_body_redirect(backend):
+ """
+ A 308 redirect should preserve the request body.
+ """
client = AsyncClient(dispatch=MockDispatch(), backend=backend)
url = "https://example.org/redirect_body"
data = b"Example request body"
response = await client.post(url, data=data)
assert response.url == URL("https://example.org/redirect_body_target")
- assert response.json() == {"body": "Example request body"}
+ assert response.json()["body"] == "Example request body"
+ assert "content-length" in response.json()["headers"]
+
+
+async def test_no_body_redirect(backend):
+ """
+ A 303 redirect should remove the request body.
+ """
+ client = AsyncClient(dispatch=MockDispatch(), backend=backend)
+ url = "https://example.org/redirect_no_body"
+ data = b"Example request body"
+ response = await client.post(url, data=data)
+ assert response.url == URL("https://example.org/redirect_body_target")
+ assert response.json()["body"] == ""
+ assert "content-length" not in response.json()["headers"]
async def test_cannot_redirect_streaming_body(backend):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 3
},
"num_modified_files": 1
} | 0.7 | {
"env_vars": null,
"env_yml_path": [],
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [],
"python": "3.9",
"reqs_path": [
"test-requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==25.3.0
brotlipy==0.7.0
certifi==2025.1.31
cffi==1.17.1
chardet==3.0.4
click==8.1.8
coverage==7.8.0
cryptography==44.0.2
exceptiongroup==1.2.2
flake8==7.2.0
flake8-bugbear==24.12.12
flake8-comprehensions==3.16.0
h11==0.8.1
h2==3.2.0
hpack==3.0.0
hstspreload==2025.1.1
-e git+https://github.com/encode/httpx.git@b8c5e7a8528978e646c28a5837df5085ea3803fc#egg=httpx
hyperframe==5.2.0
idna==2.10
iniconfig==2.1.0
isort==6.0.1
mccabe==0.7.0
mypy==1.15.0
mypy-extensions==1.0.0
packaging==24.2
pluggy==1.5.0
pycodestyle==2.13.0
pycparser==2.22
pyflakes==3.3.2
pytest==8.3.5
pytest-asyncio==0.26.0
pytest-cov==6.0.0
rfc3986==1.5.0
tomli==2.2.1
trustme==1.2.1
typing_extensions==4.13.0
uvicorn==0.34.0
| name: httpx
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==25.3.0
- brotlipy==0.7.0
- certifi==2025.1.31
- cffi==1.17.1
- chardet==3.0.4
- click==8.1.8
- coverage==7.8.0
- cryptography==44.0.2
- exceptiongroup==1.2.2
- flake8==7.2.0
- flake8-bugbear==24.12.12
- flake8-comprehensions==3.16.0
- h11==0.8.1
- h2==3.2.0
- hpack==3.0.0
- hstspreload==2025.1.1
- hyperframe==5.2.0
- idna==2.10
- iniconfig==2.1.0
- isort==6.0.1
- mccabe==0.7.0
- mypy==1.15.0
- mypy-extensions==1.0.0
- packaging==24.2
- pluggy==1.5.0
- pycodestyle==2.13.0
- pycparser==2.22
- pyflakes==3.3.2
- pytest==8.3.5
- pytest-asyncio==0.26.0
- pytest-cov==6.0.0
- rfc3986==1.5.0
- tomli==2.2.1
- trustme==1.2.1
- typing-extensions==4.13.0
- uvicorn==0.34.0
prefix: /opt/conda/envs/httpx
| [
"tests/client/test_redirects.py::test_no_body_redirect[AsyncioBackend]"
] | [] | [
"tests/client/test_redirects.py::test_no_redirect[AsyncioBackend]",
"tests/client/test_redirects.py::test_redirect_301[AsyncioBackend]",
"tests/client/test_redirects.py::test_redirect_302[AsyncioBackend]",
"tests/client/test_redirects.py::test_redirect_303[AsyncioBackend]",
"tests/client/test_redirects.py::test_disallow_redirects[AsyncioBackend]",
"tests/client/test_redirects.py::test_relative_redirect[AsyncioBackend]",
"tests/client/test_redirects.py::test_no_scheme_redirect[AsyncioBackend]",
"tests/client/test_redirects.py::test_fragment_redirect[AsyncioBackend]",
"tests/client/test_redirects.py::test_multiple_redirects[AsyncioBackend]",
"tests/client/test_redirects.py::test_too_many_redirects[AsyncioBackend]",
"tests/client/test_redirects.py::test_too_many_redirects_calling_next[AsyncioBackend]",
"tests/client/test_redirects.py::test_redirect_loop[AsyncioBackend]",
"tests/client/test_redirects.py::test_redirect_loop_calling_next[AsyncioBackend]",
"tests/client/test_redirects.py::test_cross_domain_redirect[AsyncioBackend]",
"tests/client/test_redirects.py::test_same_domain_redirect[AsyncioBackend]",
"tests/client/test_redirects.py::test_body_redirect[AsyncioBackend]",
"tests/client/test_redirects.py::test_cannot_redirect_streaming_body[AsyncioBackend]",
"tests/client/test_redirects.py::test_cross_subdomain_redirect[AsyncioBackend]"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,323 | 403 | [
"httpx/middleware.py"
] |
|
peterbe__hashin-114 | 10a7048202c27d05befd8c2391e3701069d467ef | 2019-09-04 00:57:59 | 10a7048202c27d05befd8c2391e3701069d467ef | peterbe: Note that I edited the PR title and PR description so GitHub would know to automatically close the issue when this PR lands. |
diff --git a/hashin.py b/hashin.py
index a053ca1..515e037 100755
--- a/hashin.py
+++ b/hashin.py
@@ -387,43 +387,45 @@ def amend_requirements_content(requirements, all_new_lines):
padding = " " * 4
- def is_different_lines(package, new_lines):
- # This assumes that for sure the package is already mentioned in the old
- # requirements. Now we just need to double-check that they really are
+ def is_different_lines(old_lines, new_lines, indent):
+ # This assumes that the package is already mentioned in the old
+ # requirements. Now we just need to double-check that its lines are
# different.
# The 'new_lines` is what we might intend to replace it with.
- lines = set()
- for line in requirements.splitlines():
- if regex.search(line):
- lines.add(line.strip(" \\"))
- elif lines and line.startswith(padding):
- lines.add(line.strip(" \\"))
- elif lines:
- break
- return lines != set([x.strip(" \\") for x in new_lines.splitlines()])
-
- for package, old_name, new_lines in all_new_lines:
+ old = set([l.strip(" \\") for l in old_lines])
+ new = set([indent + x.strip(" \\") for x in new_lines])
+ return old != new
+
+ for package, old_name, new_text in all_new_lines:
regex = re.compile(
- r"^{0}(\[.*\])?==".format(re.escape(old_name)), re.IGNORECASE | re.MULTILINE
+ r"^(?P<indent>[ \t]*){0}(\[.*\])?==".format(re.escape(old_name)),
+ re.IGNORECASE | re.MULTILINE,
)
# if the package wasn't already there, add it to the bottom
- if not regex.search(requirements):
+ match = regex.search(requirements)
+ if not match:
# easy peasy
if requirements:
requirements = requirements.strip() + "\n"
- requirements += new_lines.strip() + "\n"
- elif is_different_lines(package, new_lines):
- # need to replace the existing
+ requirements += new_text.strip() + "\n"
+ else:
+ indent = match.group("indent")
lines = []
for line in requirements.splitlines():
if regex.search(line):
lines.append(line)
- elif lines and line.startswith(padding):
+ elif lines and line.startswith(indent + padding):
lines.append(line)
elif lines:
break
- combined = "\n".join(lines + [""])
- requirements = requirements.replace(combined, new_lines)
+ if is_different_lines(lines, new_text.splitlines(), indent):
+ # need to replace the existing
+ combined = "\n".join(lines + [""])
+ # indent non-empty lines
+ indented = re.sub(
+ r"^(.+)$", r"{0}\1".format(indent), new_text, flags=re.MULTILINE
+ )
+ requirements = requirements.replace(combined, indented)
return requirements
| Handle indentation
It would be nice if `hashin` could handle indented files like this one:
https://github.com/liberapay/liberapay.com/blob/master/requirements_base.txt
```
boto3==1.9.85 \
--hash=sha256:acfd27967cf1ba7f9d83ad6fc2011764541e4c295fe0d896ea7b495cc2f03336 \
--hash=sha256:96296871863e0245b04931df7dd5c583e53cadbe1d54197829b34b03b0d048a8
botocore==1.12.85 \
--hash=sha256:af727d4af0cf1ddbf84eaf1cc9d0160ff066eac7f9e6a2fe6a75ccbed4452c98 \
--hash=sha256:c381fd05b777f41a608ea0846a8d8ecc32077a83e456d05e824cce8d6b213e32
```
Right now it just inserts a new entry below:
```
$ hashin -r ../../../requirements_base.txt botocore -d
--- Old
+++ New
@@ -258,3 +258,6 @@
asn1crypto==0.24.0 \
--hash=sha256:2f1adbb7546ed199e3c90ef23ec95c5cf3585bac7d11fb7eb562a3fe89c64e87 \
--hash=sha256:9d5c20441baf0cb60a4ac34cc447c6c189024b6b4c6cd7877034f4965c464e49
+botocore==1.12.221 \
+ --hash=sha256:6d49deff062d2ae0f03fc26b56df8b1bb9e8b136657bcd8d84c986a4068fb784 \
+ --hash=sha256:bbee3fdcbe56ca53e2c32c6c12d174fa9b4ffe27b633183c29bd5aec9e200bae
```
 | peterbe/hashin |
diff --git a/tests/test_cli.py b/tests/test_cli.py
index 8c60ba0..6bf9cc6 100644
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -579,6 +579,46 @@ selenium==2.53.1 \
assert new_lines[2] in result
+def test_amend_requirements_content_indented():
+ """This test came from https://github.com/peterbe/hashin/issues/112"""
+ requirements = (
+ """
+boto3==1.9.85 \\
+ --hash=sha256:acfd27967cf1ba7f9d83ad6fc2011764541e4c295fe0d896ea7b495cc2f03336 \\
+ --hash=sha256:96296871863e0245b04931df7dd5c583e53cadbe1d54197829b34b03b0d048a8
+
+ botocore==1.12.85 \\
+ --hash=sha256:af727d4af0cf1ddbf84eaf1cc9d0160ff066eac7f9e6a2fe6a75ccbed4452c98 \\
+ --hash=sha256:c381fd05b777f41a608ea0846a8d8ecc32077a83e456d05e824cce8d6b213e32
+ """.strip()
+ + "\n"
+ )
+ expect = (
+ """
+boto3==1.9.85 \\
+ --hash=sha256:acfd27967cf1ba7f9d83ad6fc2011764541e4c295fe0d896ea7b495cc2f03336 \\
+ --hash=sha256:96296871863e0245b04931df7dd5c583e53cadbe1d54197829b34b03b0d048a8
+
+ botocore==1.12.221 \\
+ --hash=sha256:6d49deff062d2ae0f03fc26b56df8b1bb9e8b136657bcd8d84c986a4068fb784 \\
+ --hash=sha256:bbee3fdcbe56ca53e2c32c6c12d174fa9b4ffe27b633183c29bd5aec9e200bae
+ """.strip()
+ + "\n"
+ )
+ new_lines = (
+ "botocore",
+ "botocore",
+ """
+botocore==1.12.221 \\
+ --hash=sha256:6d49deff062d2ae0f03fc26b56df8b1bb9e8b136657bcd8d84c986a4068fb784 \\
+ --hash=sha256:bbee3fdcbe56ca53e2c32c6c12d174fa9b4ffe27b633183c29bd5aec9e200bae
+ """.strip()
+ + "\n",
+ )
+ result = hashin.amend_requirements_content(requirements, [new_lines])
+ assert result == expect
+
+
def test_run(murlopen, tmpfile, capsys):
def mocked_get(url, **options):
if url == "https://pypi.org/pypi/hashin/json":
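`test_amend_requirements_content_indented` above drives the fix end to end. The core mechanism from the patch, capturing the leading indent with a named group and re-indenting the replacement via `re.sub`, can be sketched in isolation; the `botocore` pins echo the issue, but the short hashes here are made up:

```python
import re

# An indented requirements block, as in the liberapay file from the issue.
REQS = (
    "  botocore==1.12.85 \\\n"
    "    --hash=sha256:aaaa\n"
)

# Capture any leading indentation together with the pinned package name.
match = re.search(
    r"^(?P<indent>[ \t]*)botocore(\[.*\])?==",
    REQS, re.IGNORECASE | re.MULTILINE,
)
indent = match.group("indent")

# Re-indent every non-empty replacement line to the captured depth --
# the same re.sub trick the patch applies before splicing it back in.
new_text = "botocore==1.12.221 \\\n  --hash=sha256:bbbb\n"
indented = re.sub(r"^(.+)$", r"{0}\1".format(indent), new_text,
                  flags=re.MULTILINE)
```

Before the patch, the anchored regex had no indent group, so an indented pin was never matched and hashin appended a duplicate entry at column zero instead of replacing it in place.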
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 1
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"mock"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | backports.tarfile==1.2.0
cachetools==5.5.2
certifi==2025.1.31
cffi==1.17.1
chardet==5.2.0
charset-normalizer==3.4.1
colorama==0.4.6
cryptography==44.0.2
distlib==0.3.9
docutils==0.21.2
exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
filelock==3.18.0
-e git+https://github.com/peterbe/hashin.git@10a7048202c27d05befd8c2391e3701069d467ef#egg=hashin
id==1.5.0
idna==3.10
importlib_metadata==8.6.1
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.1.0
jeepney==0.9.0
keyring==25.6.0
markdown-it-py==3.0.0
mdurl==0.1.2
mock==5.2.0
more-itertools==10.6.0
nh3==0.2.21
packaging @ file:///croot/packaging_1734472117206/work
pip-api==0.0.34
platformdirs==4.3.7
pluggy @ file:///croot/pluggy_1733169602837/work
pycparser==2.22
Pygments==2.19.1
pyproject-api==1.9.0
pytest @ file:///croot/pytest_1738938843180/work
readme_renderer==44.0
requests==2.32.3
requests-toolbelt==1.0.0
rfc3986==2.0.0
rich==14.0.0
SecretStorage==3.3.3
tomli==2.2.1
tox==4.25.0
twine==6.1.0
typing_extensions==4.13.0
urllib3==2.3.0
virtualenv==20.29.3
zipp==3.21.0
| name: hashin
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- backports-tarfile==1.2.0
- cachetools==5.5.2
- certifi==2025.1.31
- cffi==1.17.1
- chardet==5.2.0
- charset-normalizer==3.4.1
- colorama==0.4.6
- cryptography==44.0.2
- distlib==0.3.9
- docutils==0.21.2
- filelock==3.18.0
- id==1.5.0
- idna==3.10
- importlib-metadata==8.6.1
- jaraco-classes==3.4.0
- jaraco-context==6.0.1
- jaraco-functools==4.1.0
- jeepney==0.9.0
- keyring==25.6.0
- markdown-it-py==3.0.0
- mdurl==0.1.2
- mock==5.2.0
- more-itertools==10.6.0
- nh3==0.2.21
- pip-api==0.0.34
- platformdirs==4.3.7
- pycparser==2.22
- pygments==2.19.1
- pyproject-api==1.9.0
- readme-renderer==44.0
- requests==2.32.3
- requests-toolbelt==1.0.0
- rfc3986==2.0.0
- rich==14.0.0
- secretstorage==3.3.3
- tomli==2.2.1
- tox==4.25.0
- twine==6.1.0
- typing-extensions==4.13.0
- urllib3==2.3.0
- virtualenv==20.29.3
- zipp==3.21.0
prefix: /opt/conda/envs/hashin
| [
"tests/test_cli.py::test_amend_requirements_content_indented"
] | [] | [
"tests/test_cli.py::test_get_latest_version_simple",
"tests/test_cli.py::test_get_latest_version_only_pre_release",
"tests/test_cli.py::test_get_latest_version_non_pre_release_leading_zeros",
"tests/test_cli.py::test_get_hashes_error",
"tests/test_cli.py::test_non_200_ok_download",
"tests/test_cli.py::test_main_packageerrors_stderr",
"tests/test_cli.py::test_packages_and_update_all",
"tests/test_cli.py::test_packages_and_update_all_with_requirements_file",
"tests/test_cli.py::test_no_packages_and_not_update_all",
"tests/test_cli.py::test_interactive_not_update_all",
"tests/test_cli.py::test_main_version",
"tests/test_cli.py::test_amend_requirements_content_new",
"tests/test_cli.py::test_amend_requirements_different_old_name",
"tests/test_cli.py::test_amend_requirements_content_multiple_merge",
"tests/test_cli.py::test_amend_requirements_content_replacement",
"tests/test_cli.py::test_amend_requirements_content_actually_not_replacement",
"tests/test_cli.py::test_amend_requirements_content_replacement_addition",
"tests/test_cli.py::test_amend_requirements_content_replacement_single_to_multi",
"tests/test_cli.py::test_amend_requirements_content_replacement_2",
"tests/test_cli.py::test_amend_requirements_content_replacement_amongst_others",
"tests/test_cli.py::test_amend_requirements_content_replacement_amongst_others_2",
"tests/test_cli.py::test_amend_requirements_content_new_similar_name",
"tests/test_cli.py::test_run",
"tests/test_cli.py::test_canonical_list_of_hashes",
"tests/test_cli.py::test_run_atomic_not_write_with_error_on_last_package",
"tests/test_cli.py::test_run_interactive",
"tests/test_cli.py::test_run_interactive_quit_and_accept_all",
"tests/test_cli.py::test_run_interactive_case_redirect",
"tests/test_cli.py::test_run_without_specific_version",
"tests/test_cli.py::test_run_with_alternate_index_url",
"tests/test_cli.py::test_run_contained_names",
"tests/test_cli.py::test_run_case_insensitive",
"tests/test_cli.py::test_run_update_all",
"tests/test_cli.py::test_run_comments_with_package_spec_patterns",
"tests/test_cli.py::test_run_dry",
"tests/test_cli.py::test_run_dry_multiple_packages",
"tests/test_cli.py::test_run_pep_0496",
"tests/test_cli.py::test_filter_releases",
"tests/test_cli.py::test_release_url_metadata_python",
"tests/test_cli.py::test_expand_python_version",
"tests/test_cli.py::test_get_package_hashes",
"tests/test_cli.py::test_get_package_hashes_from_alternate_index_url",
"tests/test_cli.py::test_get_package_hashes_package_not_found",
"tests/test_cli.py::test_get_package_hashes_unknown_algorithm",
"tests/test_cli.py::test_get_package_hashes_without_version",
"tests/test_cli.py::test_with_extras_syntax",
"tests/test_cli.py::test_extras_syntax_edit",
"tests/test_cli.py::test_add_extra_extras_syntax_edit",
"tests/test_cli.py::test_change_extra_extras_syntax_edit",
"tests/test_cli.py::test_remove_extra_extras_syntax_edit",
"tests/test_cli.py::test_interactive_upgrade_request",
"tests/test_cli.py::test_interactive_upgrade_request_repeat_question",
"tests/test_cli.py::test_interactive_upgrade_request_help",
"tests/test_cli.py::test_interactive_upgrade_request_force_yes"
] | [] | MIT License | 5,325 | 719 | [
"hashin.py"
] |
podhmo__swagger-marshmallow-codegen-47 | 1b26fda7171cd965b58120dec19cce2005941334 | 2019-09-04 07:09:21 | 8ab62981cdce03a4b94fb9fbab4f2403e40758ee | diff --git a/examples/02default/definition.py b/examples/02default/definition.py
index 6b59f5c..97f70a4 100644
--- a/examples/02default/definition.py
+++ b/examples/02default/definition.py
@@ -43,12 +43,12 @@ class Length_validation(Schema):
class Maximum_validation(Schema):
- n0 = fields.Number(validate=[Range(min=None, max=100, exclusive_min=False, exclusive_max=False)])
- n1 = fields.Number(validate=[Range(min=None, max=100, exclusive_min=False, exclusive_max=True)])
- n2 = fields.Number(validate=[Range(min=None, max=100, exclusive_min=False, exclusive_max=False)])
- m0 = fields.Number(validate=[Range(min=100, max=None, exclusive_min=False, exclusive_max=False)])
- m1 = fields.Number(validate=[Range(min=100, max=None, exclusive_min=True, exclusive_max=False)])
- m2 = fields.Number(validate=[Range(min=100, max=None, exclusive_min=False, exclusive_max=False)])
+ n0 = fields.Number(validate=[Range(min=None, max=100, min_inclusive=True, max_inclusive=True)])
+ n1 = fields.Number(validate=[Range(min=None, max=100, min_inclusive=True, max_inclusive=False)])
+ n2 = fields.Number(validate=[Range(min=None, max=100, min_inclusive=True, max_inclusive=True)])
+ m0 = fields.Number(validate=[Range(min=100, max=None, min_inclusive=True, max_inclusive=True)])
+ m1 = fields.Number(validate=[Range(min=100, max=None, min_inclusive=False, max_inclusive=True)])
+ m2 = fields.Number(validate=[Range(min=100, max=None, min_inclusive=True, max_inclusive=True)])
class Regex_validation(Schema):
@@ -57,7 +57,7 @@ class Regex_validation(Schema):
class Array_validation(Schema):
- nums = fields.List(fields.Integer(), validate=[ItemsRange(min=1, max=10), Unique()])
+ nums = fields.List(fields.Integer(), validate=[ItemsRange(min=1, max=10, min_inclusive=True, max_inclusive=True), Unique()])
class Enum_validation(Schema):
diff --git a/examples/05uber/main.py b/examples/05uber/main.py
index c611131..f4ce4db 100644
--- a/examples/05uber/main.py
+++ b/examples/05uber/main.py
@@ -6,12 +6,12 @@ if __name__ == "__main__":
try:
d = {
"products": [
- dict(product_id="x1", description="first", display_name="1st", capacity="4 people"),
+ dict(product_id="x1", description="first", display_name="1st", capacity="4"),
dict(
product_id="x2",
description="second",
display_name="2nd",
- capacity="5 people",
+ capacity="5",
image="http://example.jp/img/notfound.jpg"
),
]
diff --git a/examples/05uber/uberschema.py b/examples/05uber/uberschema.py
index 124f594..05b06c9 100644
--- a/examples/05uber/uberschema.py
+++ b/examples/05uber/uberschema.py
@@ -15,7 +15,7 @@ class Product(Schema):
class ProductList(Schema):
- products = fields.List(fields.Field('Product'), description='Contains the list of products')
+ products = fields.List(fields.Nested('Product'), description='Contains the list of products')
class PriceEstimate(Schema):
@@ -44,7 +44,7 @@ class Activities(Schema):
offset = fields.Integer(description='Position in pagination.')
limit = fields.Integer(description='Number of items to retrieve (100 max).')
count = fields.Integer(description='Total number of items available.')
- history = fields.List(fields.Field('Activity'))
+ history = fields.List(fields.Nested('Activity'))
class Error(Schema):
diff --git a/examples/07custom/myschema.py b/examples/07custom/myschema.py
index 5ee30ad..76de5a9 100644
--- a/examples/07custom/myschema.py
+++ b/examples/07custom/myschema.py
@@ -24,10 +24,10 @@ class ObjectId(fields.String):
except (ValueError, AttributeError):
self.fail('invalid_object_id')
- def _deserialize(self, value, attr, data):
+ def _deserialize(self, value, attr, data, **kwargs):
return self._validated(value)
- def _serialize(self, value, attr, data):
+ def _serialize(self, value, attr, data, **kwargs):
if not value:
return value
return str(value)
diff --git a/swagger_marshmallow_codegen/resolver.py b/swagger_marshmallow_codegen/resolver.py
index 8c54a97..5606561 100644
--- a/swagger_marshmallow_codegen/resolver.py
+++ b/swagger_marshmallow_codegen/resolver.py
@@ -130,9 +130,9 @@ class Resolver:
if "minimum" in field or "maximum" in field:
range_opts = {
"min": field.get("minimum"),
- "exclusive_min": field.get("exclusiveMinimum", False),
+ "min_inclusive": not field.get("exclusiveMinimum", False),
"max": field.get("maximum"),
- "exclusive_max": field.get("exclusiveMaximum", False),
+ "max_inclusive": not field.get("exclusiveMaximum", False),
}
add(validate.Range(**range_opts))
if "minLength" in field or "maxLength" in field:
diff --git a/swagger_marshmallow_codegen/schema/extra.py b/swagger_marshmallow_codegen/schema/extra.py
index 2b91d56..88939dc 100644
--- a/swagger_marshmallow_codegen/schema/extra.py
+++ b/swagger_marshmallow_codegen/schema/extra.py
@@ -68,19 +68,20 @@ def make_additional_properties_schema_class(Schema):
OPTIONS_CLASS = AdditionalPropertiesOpts
@pre_load
- @pre_dump
- def wrap_dynamic_additionals(self, data):
- diff = set(data.keys()).difference(self.fields.keys())
+ def wrap_load_dynamic_additionals(self, data, *, many=False, partial=False):
+ diff = set(data.keys()).difference(self.load_fields.keys())
for name in diff:
f = self.opts.additional_field
- self.fields[name] = f() if callable(f) else f
+ self.load_fields[name] = f() if callable(f) else f
return data
- def dumps(self, obj, many=None, *args, **kwargs):
- return super().dumps(obj, many=many, *args, **kwargs)
-
- def dump(self, obj, many=None, *args, **kwargs):
- return super().dump(obj, many=many, *args, **kwargs)
+ @pre_dump
+ def wrap_dump_dynamic_additionals(self, data, *, many=False, partial=False):
+ diff = set(data.keys()).difference(self.dump_fields.keys())
+ for name in diff:
+ f = self.opts.additional_field
+ self.dump_fields[name] = f() if callable(f) else f
+ return data
return AdditionalPropertiesSchema
diff --git a/swagger_marshmallow_codegen/validate.py b/swagger_marshmallow_codegen/validate.py
index 533e277..82f7d6a 100644
--- a/swagger_marshmallow_codegen/validate.py
+++ b/swagger_marshmallow_codegen/validate.py
@@ -1,51 +1,19 @@
from marshmallow import validate as v
-from marshmallow.validate import ( # NOQA
- Length,
- Regexp,
- OneOf,
-)
+from marshmallow.validate import Length, Regexp, OneOf # NOQA
-class Range(v.Range):
- def __init__(self, min=None, max=None, error=None, exclusive_max=False, exclusive_min=False):
- super().__init__(min=min, max=max, error=error)
- self.exclusive_max = exclusive_max
- self.exclusive_min = exclusive_min
-
- def _repr_args(self):
- return 'min={0!r}, max={1!r}, exclusive_min={2!r}, exclusive_max={3!r}'.format(self.min, self.max, self.exclusive_min, self.exclusive_max)
-
- def __call__(self, value):
- if self.min is not None:
- if self.exclusive_min:
- over = value <= self.min
- else:
- over = value < self.min
- if over:
- message = self.message_min if self.max is None else self.message_all
- raise v.ValidationError(self._format_error(value, message))
-
- if self.max is not None:
- if self.exclusive_max:
- over = value >= self.max
- else:
- over = value > self.max
- if over:
- message = self.message_max if self.min is None else self.message_all
- raise v.ValidationError(self._format_error(value, message))
-
- return value
+Range = v.Range
class MultipleOf(v.Validator):
- message = '{input} is Not divisible by {n}'
+ message = "{input} is Not divisible by {n}"
def __init__(self, n=1, error=None):
self.n = n
self.error = error
def _repr_args(self):
- return 'n={0!r}'.format(self.n)
+ return "n={0!r}".format(self.n)
def _format_error(self, value):
return self.message.format(input=value, n=self.n)
@@ -59,32 +27,24 @@ class MultipleOf(v.Validator):
class ItemsRange(v.Range):
"""for maxItems and minItems"""
- message_min = 'Must be at least {min} items.'
- message_max = 'Must be at most {max} items.'
- message_all = 'Must be between {min} and {max} items.'
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
+ message_min = "Must be {min_op} {{min}} items."
+ message_max = "Must be {max_op} {{max}} items."
+ message_all = "Must be {min_op} {{min}} and {max_op} {{max}} items."
- def __call__(self, value):
- if self.min is not None and len(value) < self.min:
- message = self.message_min if self.max is None else self.message_all
- raise v.ValidationError(self._format_error(value, message))
-
- if self.max is not None and len(value) > self.max:
- message = self.message_max if self.min is None else self.message_all
- raise v.ValidationError(self._format_error(value, message))
- return value
+ message_gte = "greater than or equal to"
+ message_gt = "greater than"
+ message_lte = "less than or equal to"
+ message_lt = "less than"
class Unique(v.Validator):
- message = '{input} is Not unique'
+ message = "{input} is Not unique"
def __init__(self, error=None):
self.error = error
def _repr_args(self):
- return ''
+ return ""
def _format_error(self, value):
return self.message.format(input=value)
| some tests are broken | podhmo/swagger-marshmallow-codegen | diff --git a/swagger_marshmallow_codegen/tests/dst/00allOf.py b/swagger_marshmallow_codegen/tests/dst/00allOf.py
index b641802..7d0bdf8 100644
--- a/swagger_marshmallow_codegen/tests/dst/00allOf.py
+++ b/swagger_marshmallow_codegen/tests/dst/00allOf.py
@@ -18,4 +18,4 @@ class Cat(Pet):
class Dog(Pet):
"""A representation of a dog"""
- packSize = fields.Integer(required=True, description='the size of the pack the dog is from', validate=[Range(min=0, max=None, exclusive_min=False, exclusive_max=False)])
+ packSize = fields.Integer(required=True, description='the size of the pack the dog is from', validate=[Range(min=0, max=None, min_inclusive=True, max_inclusive=True)])
diff --git a/swagger_marshmallow_codegen/tests/dst/00items.py b/swagger_marshmallow_codegen/tests/dst/00items.py
index 9a8696d..3aa7e6a 100644
--- a/swagger_marshmallow_codegen/tests/dst/00items.py
+++ b/swagger_marshmallow_codegen/tests/dst/00items.py
@@ -9,4 +9,4 @@ from swagger_marshmallow_codegen.validate import (
class A(Schema):
- nums = fields.List(fields.Integer(), validate=[ItemsRange(min=1, max=10), Unique()])
+ nums = fields.List(fields.Integer(), validate=[ItemsRange(min=1, max=10, min_inclusive=True, max_inclusive=True), Unique()])
diff --git a/swagger_marshmallow_codegen/tests/dst/00maximum.py b/swagger_marshmallow_codegen/tests/dst/00maximum.py
index 08c690d..37ed6ff 100644
--- a/swagger_marshmallow_codegen/tests/dst/00maximum.py
+++ b/swagger_marshmallow_codegen/tests/dst/00maximum.py
@@ -8,9 +8,9 @@ from swagger_marshmallow_codegen.validate import (
class X(Schema):
- n0 = fields.Number(validate=[Range(min=None, max=100, exclusive_min=False, exclusive_max=False)])
- n1 = fields.Number(validate=[Range(min=None, max=100, exclusive_min=False, exclusive_max=True)])
- n2 = fields.Number(validate=[Range(min=None, max=100, exclusive_min=False, exclusive_max=False)])
- m0 = fields.Number(validate=[Range(min=100, max=None, exclusive_min=False, exclusive_max=False)])
- m1 = fields.Number(validate=[Range(min=100, max=None, exclusive_min=True, exclusive_max=False)])
- m2 = fields.Number(validate=[Range(min=100, max=None, exclusive_min=False, exclusive_max=False)])
+ n0 = fields.Number(validate=[Range(min=None, max=100, min_inclusive=True, max_inclusive=True)])
+ n1 = fields.Number(validate=[Range(min=None, max=100, min_inclusive=True, max_inclusive=False)])
+ n2 = fields.Number(validate=[Range(min=None, max=100, min_inclusive=True, max_inclusive=True)])
+ m0 = fields.Number(validate=[Range(min=100, max=None, min_inclusive=True, max_inclusive=True)])
+ m1 = fields.Number(validate=[Range(min=100, max=None, min_inclusive=False, max_inclusive=True)])
+ m2 = fields.Number(validate=[Range(min=100, max=None, min_inclusive=True, max_inclusive=True)])
diff --git a/swagger_marshmallow_codegen/tests/dst/00paths.py b/swagger_marshmallow_codegen/tests/dst/00paths.py
index 7177bac..203028d 100644
--- a/swagger_marshmallow_codegen/tests/dst/00paths.py
+++ b/swagger_marshmallow_codegen/tests/dst/00paths.py
@@ -26,7 +26,7 @@ class PetsInput:
class Query(Schema):
animal_type = fields.String(validate=[Regexp(regex=re.compile('^[a-zA-Z0-9]*$'))])
- limit = fields.Integer(missing=lambda: 100, validate=[Range(min=0, max=None, exclusive_min=False, exclusive_max=False)])
+ limit = fields.Integer(missing=lambda: 100, validate=[Range(min=0, max=None, min_inclusive=True, max_inclusive=True)])
diff --git a/swagger_marshmallow_codegen/tests/test_schema.py b/swagger_marshmallow_codegen/tests/test_schema.py
index ba9d8a5..4c47724 100644
--- a/swagger_marshmallow_codegen/tests/test_schema.py
+++ b/swagger_marshmallow_codegen/tests/test_schema.py
@@ -6,17 +6,27 @@ from marshmallow import fields, Schema
class PrimitiveValueSchemaTests(unittest.TestCase):
def _getTargetClass(self):
from swagger_marshmallow_codegen.schema import PrimitiveValueSchema
+
return PrimitiveValueSchema
- def assert_value(self, fut, c):
+ def assert_load_value(self, fut, c):
if c.ok:
actual = fut()
self.assertEqual(actual, c.expected)
else:
from marshmallow import ValidationError
+
with self.assertRaises(ValidationError):
fut()
+ def assert_dump_value(self, fut, c):
+ if c.ok:
+ actual = fut()
+ self.assertEqual(actual, c.expected)
+ else:
+ with self.assertRaises(ValueError):
+ fut()
+
def test_load_atom(self):
class S(self._getTargetClass()):
class schema_class(Schema):
@@ -29,7 +39,7 @@ class PrimitiveValueSchemaTests(unittest.TestCase):
]
for c in candidates:
with self.subTest(value=c.value, ok=c.ok):
- self.assert_value(lambda: S().load(c.value), c)
+ self.assert_load_value(lambda: S().load(c.value), c)
def test_load_list(self):
class S(self._getTargetClass()):
@@ -44,7 +54,7 @@ class PrimitiveValueSchemaTests(unittest.TestCase):
]
for c in candidates:
with self.subTest(value=c.value, ok=c.ok):
- self.assert_value(lambda: S().load(c.value), c)
+ self.assert_load_value(lambda: S().load(c.value), c)
def test_dump_atom(self):
class S(self._getTargetClass()):
@@ -59,7 +69,7 @@ class PrimitiveValueSchemaTests(unittest.TestCase):
]
for c in candidates:
with self.subTest(value=c.value, ok=c.ok):
- self.assert_value(lambda: S().dump(c.value), c)
+ self.assert_dump_value(lambda: S().dump(c.value), c)
def test_dump_list(self):
class S(self._getTargetClass()):
@@ -74,12 +84,13 @@ class PrimitiveValueSchemaTests(unittest.TestCase):
]
for c in candidates:
with self.subTest(value=c.value, ok=c.ok):
- self.assert_value(lambda: S().dump(c.value), c)
+ self.assert_dump_value(lambda: S().dump(c.value), c)
-class AddtioinalSchemaTests(unittest.TestCase):
+class AdditioinalSchemaTests(unittest.TestCase):
def _getTargetClass(self):
from swagger_marshmallow_codegen.schema import AdditionalPropertiesSchema
+
return AdditionalPropertiesSchema
def assert_value(self, fut, c):
@@ -88,6 +99,7 @@ class AddtioinalSchemaTests(unittest.TestCase):
self.assertEqual(actual, c.expected)
else:
from marshmallow import ValidationError
+
with self.assertRaises(ValidationError):
fut()
@@ -98,12 +110,10 @@ class AddtioinalSchemaTests(unittest.TestCase):
C = namedtuple("C", "value, expected, ok")
candidates = [
C(
- value={"name": "foo",
- "value": "100"},
- expected={"name": "foo",
- "value": "100"},
- ok=True
- ),
+ value={"name": "foo", "value": "100"},
+ expected={"name": "foo", "value": "100"},
+ ok=True,
+ )
]
for c in candidates:
with self.subTest(value=c.value, ok=c.ok):
@@ -119,18 +129,14 @@ class AddtioinalSchemaTests(unittest.TestCase):
C = namedtuple("C", "value, expected, ok")
candidates = [
C(
- value={"name": "foo",
- "value": "100"},
- expected={"name": "foo",
- "value": 100},
- ok=True
+ value={"name": "foo", "value": "100"},
+ expected={"name": "foo", "value": 100},
+ ok=True,
),
C(
- value={"name": "foo",
- "value": "100"},
- expected={"name": "foo",
- "value": 100},
- ok=True
+ value={"name": "foo", "value": "100"},
+ expected={"name": "foo", "value": 100},
+ ok=True,
),
]
for c in candidates:
@@ -144,12 +150,10 @@ class AddtioinalSchemaTests(unittest.TestCase):
C = namedtuple("C", "value, expected, ok")
candidates = [
C(
- value={"name": "foo",
- "value": "100"},
- expected={"name": "foo",
- "value": "100"},
- ok=True
- ),
+ value={"name": "foo", "value": "100"},
+ expected={"name": "foo", "value": "100"},
+ ok=True,
+ )
]
for c in candidates:
with self.subTest(value=c.value, ok=c.ok):
@@ -165,18 +169,14 @@ class AddtioinalSchemaTests(unittest.TestCase):
C = namedtuple("C", "value, expected, ok")
candidates = [
C(
- value={"name": "foo",
- "value": "100"},
- expected={"name": "foo",
- "value": 100},
- ok=True
+ value={"name": "foo", "value": "100"},
+ expected={"name": "foo", "value": 100},
+ ok=True,
),
C(
- value={"name": "foo",
- "value": "100"},
- expected={"name": "foo",
- "value": 100},
- ok=True
+ value={"name": "foo", "value": "100"},
+ expected={"name": "foo", "value": 100},
+ ok=True,
),
]
for c in candidates:
diff --git a/swagger_marshmallow_codegen/tests/test_validate.py b/swagger_marshmallow_codegen/tests/test_validate.py
index 4e8f0b5..e1aade5 100644
--- a/swagger_marshmallow_codegen/tests/test_validate.py
+++ b/swagger_marshmallow_codegen/tests/test_validate.py
@@ -5,6 +5,7 @@ from collections import namedtuple
class RangeTests(unittest.TestCase):
def _makeOne(self, *args, **kwargs):
from swagger_marshmallow_codegen.validate import Range
+
return Range(*args, **kwargs)
def test_maximum(self):
@@ -23,7 +24,9 @@ class RangeTests(unittest.TestCase):
with self.subTest(
maximum=c.maximum, exclusive_maximum=c.exclusive_maximum, value=c.value
):
- target = self._makeOne(max=c.maximum, exclusive_max=c.exclusive_maximum)
+ target = self._makeOne(
+ max=c.maximum, max_inclusive=not c.exclusive_maximum
+ )
try:
target(c.value)
self.assertTrue(c.ok)
@@ -46,7 +49,9 @@ class RangeTests(unittest.TestCase):
with self.subTest(
minimum=c.minimum, exclusive_minimum=c.exclusive_minimum, value=c.value
):
- target = self._makeOne(min=c.minimum, exclusive_min=c.exclusive_minimum)
+ target = self._makeOne(
+ min=c.minimum, min_inclusive=not c.exclusive_minimum
+ )
try:
target(c.value)
self.assertTrue(c.ok)
@@ -57,6 +62,7 @@ class RangeTests(unittest.TestCase):
class MultipleOfTests(unittest.TestCase):
def _makeOne(self, *args, **kwargs):
from swagger_marshmallow_codegen.validate import MultipleOf
+
return MultipleOf(*args, **kwargs)
def test_it(self):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 2
},
"num_modified_files": 7
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[testing]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
dictknife==0.14.1
exceptiongroup==1.2.2
importlib-metadata==6.7.0
iniconfig==2.0.0
magicalimport==0.9.1
marshmallow==3.19.0
packaging==24.0
pluggy==1.2.0
prestring==0.9.0
pytest==7.4.4
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.8
-e git+https://github.com/podhmo/swagger-marshmallow-codegen.git@1b26fda7171cd965b58120dec19cce2005941334#egg=swagger_marshmallow_codegen
tomli==2.0.1
tomlkit==0.12.5
typing_extensions==4.7.1
zipp==3.15.0
| name: swagger-marshmallow-codegen
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- dictknife==0.14.1
- exceptiongroup==1.2.2
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- magicalimport==0.9.1
- marshmallow==3.19.0
- packaging==24.0
- pluggy==1.2.0
- prestring==0.9.0
- pytest==7.4.4
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.8
- tomli==2.0.1
- tomlkit==0.12.5
- typing-extensions==4.7.1
- zipp==3.15.0
prefix: /opt/conda/envs/swagger-marshmallow-codegen
| [
"swagger_marshmallow_codegen/tests/test_schema.py::PrimitiveValueSchemaTests::test_dump_atom",
"swagger_marshmallow_codegen/tests/test_schema.py::PrimitiveValueSchemaTests::test_dump_list",
"swagger_marshmallow_codegen/tests/test_schema.py::PrimitiveValueSchemaTests::test_load_atom",
"swagger_marshmallow_codegen/tests/test_schema.py::PrimitiveValueSchemaTests::test_load_list",
"swagger_marshmallow_codegen/tests/test_schema.py::AdditioinalSchemaTests::test_dump_default",
"swagger_marshmallow_codegen/tests/test_schema.py::AdditioinalSchemaTests::test_dump_specific",
"swagger_marshmallow_codegen/tests/test_schema.py::AdditioinalSchemaTests::test_load_default",
"swagger_marshmallow_codegen/tests/test_schema.py::AdditioinalSchemaTests::test_load_specific",
"swagger_marshmallow_codegen/tests/test_validate.py::RangeTests::test_maximum",
"swagger_marshmallow_codegen/tests/test_validate.py::RangeTests::test_minimum",
"swagger_marshmallow_codegen/tests/test_validate.py::MultipleOfTests::test_it"
] | [] | [] | [] | MIT License | 5,327 | 2,683 | [
"examples/02default/definition.py",
"examples/05uber/main.py",
"examples/05uber/uberschema.py",
"examples/07custom/myschema.py",
"swagger_marshmallow_codegen/resolver.py",
"swagger_marshmallow_codegen/schema/extra.py",
"swagger_marshmallow_codegen/validate.py"
] |
|
pytorch__ignite-606 | 024667f9265610e718a6a974e7c4f6b532ea6fd3 | 2019-09-04 10:12:05 | 8c8c3c2e9a007673dca1db68e27f0bdff2edf643 | Bibonaut: @vfdev-5 What is the best way to create tests with optional tag but without full duplication of affected existing tests? Do you have a good idea? Otherwise I would just clone them and add optional tags.
vfdev-5: @Bibonaut Visdom logger reimplements `BaseWeightsScalarHandler`, could you please add `tag` to its `WeightsScalarHandler` and `GradsScalarHandler`.
About test factorization, maybe a solution like this could work:
```
def test_something():
def _test(tag):
        # test with/without tag
        ...
_test(tag=None)
_test(tag="some tag")
```
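A runnable toy version of that factorization, with `unittest.subTest` added so a failure reports which tag broke (`subTest` is an illustration here, not something the eventual tests use):

```python
import unittest

class TagTests(unittest.TestCase):
    def test_with_and_without_tag(self):
        def _test(tag):
            with self.subTest(tag=tag):
                # stand-in for the real handler assertions
                tag_prefix = "{}/".format(tag) if tag else ""
                self.assertTrue("{}weights/fc1".format(tag_prefix).endswith("weights/fc1"))
        _test(tag=None)
        _test(tag="some tag")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TagTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("passed:", result.wasSuccessful())
```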
Bibonaut: @vfdev-5 I've added tags to Visdom logger and added tests for Tensorboard logger and Visdom logger. Is there anything else to do?
vfdev-5: @Bibonaut I think it is good for the issue. Thanks for the PR ! | diff --git a/ignite/contrib/handlers/base_logger.py b/ignite/contrib/handlers/base_logger.py
index f965aaf9..de5c4c37 100644
--- a/ignite/contrib/handlers/base_logger.py
+++ b/ignite/contrib/handlers/base_logger.py
@@ -130,7 +130,7 @@ class BaseWeightsScalarHandler(BaseHandler):
Helper handler to log model's weights as scalars.
"""
- def __init__(self, model, reduction=torch.norm):
+ def __init__(self, model, reduction=torch.norm, tag=None):
if not isinstance(model, torch.nn.Module):
raise TypeError("Argument model should be of type torch.nn.Module, "
"but given {}".format(type(model)))
@@ -149,6 +149,7 @@ class BaseWeightsScalarHandler(BaseHandler):
self.model = model
self.reduction = reduction
+ self.tag = tag
class BaseWeightsHistHandler(BaseHandler):
@@ -156,9 +157,10 @@ class BaseWeightsHistHandler(BaseHandler):
Helper handler to log model's weights as histograms.
"""
- def __init__(self, model):
+ def __init__(self, model, tag=None):
if not isinstance(model, torch.nn.Module):
raise TypeError("Argument model should be of type torch.nn.Module, "
"but given {}".format(type(model)))
self.model = model
+ self.tag = tag
diff --git a/ignite/contrib/handlers/tensorboard_logger.py b/ignite/contrib/handlers/tensorboard_logger.py
index c5bf910f..b1cb76f0 100644
--- a/ignite/contrib/handlers/tensorboard_logger.py
+++ b/ignite/contrib/handlers/tensorboard_logger.py
@@ -168,10 +168,11 @@ class WeightsScalarHandler(BaseWeightsScalarHandler):
Args:
model (torch.nn.Module): model to log weights
reduction (callable): function to reduce parameters into scalar
+ tag (str, optional): common title for all produced plots. For example, 'generator'
"""
- def __init__(self, model, reduction=torch.norm):
- super(WeightsScalarHandler, self).__init__(model, reduction)
+ def __init__(self, model, reduction=torch.norm, tag=None):
+ super(WeightsScalarHandler, self).__init__(model, reduction, tag=tag)
def __call__(self, engine, logger, event_name):
@@ -179,12 +180,13 @@ class WeightsScalarHandler(BaseWeightsScalarHandler):
raise RuntimeError("Handler 'WeightsScalarHandler' works only with TensorboardLogger")
global_step = engine.state.get_event_attrib_value(event_name)
+ tag_prefix = "{}/".format(self.tag) if self.tag else ""
for name, p in self.model.named_parameters():
if p.grad is None:
continue
name = name.replace('.', '/')
- logger.writer.add_scalar("weights_{}/{}".format(self.reduction.__name__, name),
+ logger.writer.add_scalar("{}weights_{}/{}".format(tag_prefix, self.reduction.__name__, name),
self.reduction(p.data),
global_step)
@@ -208,23 +210,25 @@ class WeightsHistHandler(BaseWeightsHistHandler):
Args:
model (torch.nn.Module): model to log weights
+ tag (str, optional): common title for all produced plots. For example, 'generator'
"""
- def __init__(self, model):
- super(WeightsHistHandler, self).__init__(model)
+ def __init__(self, model, tag=None):
+ super(WeightsHistHandler, self).__init__(model, tag=tag)
def __call__(self, engine, logger, event_name):
if not isinstance(logger, TensorboardLogger):
raise RuntimeError("Handler 'WeightsHistHandler' works only with TensorboardLogger")
global_step = engine.state.get_event_attrib_value(event_name)
+ tag_prefix = "{}/".format(self.tag) if self.tag else ""
for name, p in self.model.named_parameters():
if p.grad is None:
continue
name = name.replace('.', '/')
- logger.writer.add_histogram(tag="weights/{}".format(name),
+ logger.writer.add_histogram(tag="{}weights/{}".format(tag_prefix, name),
values=p.data.detach().cpu().numpy(),
global_step=global_step)
@@ -251,22 +255,24 @@ class GradsScalarHandler(BaseWeightsScalarHandler):
Args:
model (torch.nn.Module): model to log weights
reduction (callable): function to reduce parameters into scalar
+ tag (str, optional): common title for all produced plots. For example, 'generator'
"""
- def __init__(self, model, reduction=torch.norm):
- super(GradsScalarHandler, self).__init__(model, reduction)
+ def __init__(self, model, reduction=torch.norm, tag=None):
+ super(GradsScalarHandler, self).__init__(model, reduction, tag=tag)
def __call__(self, engine, logger, event_name):
if not isinstance(logger, TensorboardLogger):
raise RuntimeError("Handler 'GradsScalarHandler' works only with TensorboardLogger")
global_step = engine.state.get_event_attrib_value(event_name)
+ tag_prefix = "{}/".format(self.tag) if self.tag else ""
for name, p in self.model.named_parameters():
if p.grad is None:
continue
name = name.replace('.', '/')
- logger.writer.add_scalar("grads_{}/{}".format(self.reduction.__name__, name),
+ logger.writer.add_scalar("{}grads_{}/{}".format(tag_prefix, self.reduction.__name__, name),
self.reduction(p.grad),
global_step)
@@ -290,22 +296,24 @@ class GradsHistHandler(BaseWeightsHistHandler):
Args:
model (torch.nn.Module): model to log weights
+ tag (str, optional): common title for all produced plots. For example, 'generator'
"""
- def __init__(self, model):
- super(GradsHistHandler, self).__init__(model)
+ def __init__(self, model, tag=None):
+ super(GradsHistHandler, self).__init__(model, tag=tag)
def __call__(self, engine, logger, event_name):
if not isinstance(logger, TensorboardLogger):
raise RuntimeError("Handler 'GradsHistHandler' works only with TensorboardLogger")
global_step = engine.state.get_event_attrib_value(event_name)
+ tag_prefix = "{}/".format(self.tag) if self.tag else ""
for name, p in self.model.named_parameters():
if p.grad is None:
continue
name = name.replace('.', '/')
- logger.writer.add_histogram(tag="grads/{}".format(name),
+ logger.writer.add_histogram(tag="{}grads/{}".format(tag_prefix, name),
values=p.grad.detach().cpu().numpy(),
global_step=global_step)
diff --git a/ignite/contrib/handlers/visdom_logger.py b/ignite/contrib/handlers/visdom_logger.py
index 9561432a..88a2b51e 100644
--- a/ignite/contrib/handlers/visdom_logger.py
+++ b/ignite/contrib/handlers/visdom_logger.py
@@ -234,11 +234,12 @@ class WeightsScalarHandler(BaseWeightsScalarHandler, _BaseVisDrawer):
Args:
model (torch.nn.Module): model to log weights
reduction (callable): function to reduce parameters into scalar
+ tag (str, optional): common title for all produced plots. For example, 'generator'
show_legend (bool, optional): flag to show legend in the window
"""
- def __init__(self, model, reduction=torch.norm, show_legend=False):
- super(WeightsScalarHandler, self).__init__(model, reduction)
+ def __init__(self, model, reduction=torch.norm, tag=None, show_legend=False):
+ super(WeightsScalarHandler, self).__init__(model, reduction, tag=tag)
_BaseVisDrawer.__init__(self, show_legend=show_legend)
def __call__(self, engine, logger, event_name):
@@ -247,9 +248,10 @@ class WeightsScalarHandler(BaseWeightsScalarHandler, _BaseVisDrawer):
raise RuntimeError("Handler 'WeightsScalarHandler' works only with VisdomLogger")
global_step = engine.state.get_event_attrib_value(event_name)
+ tag_prefix = "{}/".format(self.tag) if self.tag else ""
for name, p in self.model.named_parameters():
name = name.replace('.', '/')
- k = "weights_{}/{}".format(self.reduction.__name__, name)
+ k = "{}weights_{}/{}".format(tag_prefix, self.reduction.__name__, name)
v = float(self.reduction(p.data))
self.add_scalar(logger, k, v, event_name, global_step)
@@ -278,12 +280,13 @@ class GradsScalarHandler(BaseWeightsScalarHandler, _BaseVisDrawer):
Args:
model (torch.nn.Module): model to log weights
reduction (callable): function to reduce parameters into scalar
+ tag (str, optional): common title for all produced plots. For example, 'generator'
show_legend (bool, optional): flag to show legend in the window
"""
- def __init__(self, model, reduction=torch.norm, show_legend=False):
- super(GradsScalarHandler, self).__init__(model, reduction)
+ def __init__(self, model, reduction=torch.norm, tag=None, show_legend=False):
+ super(GradsScalarHandler, self).__init__(model, reduction, tag)
_BaseVisDrawer.__init__(self, show_legend=show_legend)
def __call__(self, engine, logger, event_name):
@@ -291,9 +294,10 @@ class GradsScalarHandler(BaseWeightsScalarHandler, _BaseVisDrawer):
raise RuntimeError("Handler 'GradsScalarHandler' works only with VisdomLogger")
global_step = engine.state.get_event_attrib_value(event_name)
+ tag_prefix = "{}/".format(self.tag) if self.tag else ""
for name, p in self.model.named_parameters():
name = name.replace('.', '/')
- k = "grads_{}/{}".format(self.reduction.__name__, name)
+ k = "{}grads_{}/{}".format(tag_prefix, self.reduction.__name__, name)
v = float(self.reduction(p.grad))
self.add_scalar(logger, k, v, event_name, global_step)
| TensorboardLogger - tag/prefix for weights, grads and optimizer logging
### Problem
I have to log multiple models/optimizers with TensorboardLogger and I want them to be displayed in tensorboard in separate *subpanels* with distinct names/tags.
At the moment, it is not possible to provide a custom tag or prefix for some handlers that log model parameters, because the tag is deduced from `model.named_parameters()`.
As a consequence, parameters of multiple models are plotted in one graph.
### Solution
The affected handlers should accept a custom prefix/tag that is prepended to the deduced tag.
E.g. for `GradsHistHandler` (snippet):
```python
logger.writer.add_histogram(tag="grads/{}".format(name),
values=p.grad.detach().cpu().numpy(),
global_step=global_step)
```
It can be changed to
```python
logger.writer.add_histogram(tag=self.prefix + "grads/{}".format(name),
values=p.grad.detach().cpu().numpy(),
global_step=global_step)
```
Affected handlers in `ignite.contrib.handlers.tensorboard_logger`:
- ~~`OptimizerParamsHandler`~~
- `WeightsScalarHandler`
- `WeightsHistHandler`
- `GradsScalarHandler`
- `GradsHistHandler`
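To make the proposal concrete, here is a minimal, self-contained sketch of how an optional tag could fold into the emitted key (the helper name `scalar_key` is illustrative only; the real handlers would inline this logic):

```python
def scalar_key(name, reduction_name="norm", tag=None):
    # An optional tag becomes a single "tag/" prefix on the deduced key.
    tag_prefix = "{}/".format(tag) if tag else ""
    return "{}weights_{}/{}".format(tag_prefix, reduction_name, name)

print(scalar_key("fc1/weight"))                   # weights_norm/fc1/weight
print(scalar_key("fc1/weight", tag="generator"))  # generator/weights_norm/fc1/weight
```

With distinct tags per model, each model's parameters then land under their own group in tensorboard instead of being merged into one graph.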
What do you think?
If wanted, I can work on this.
| pytorch/ignite | diff --git a/tests/ignite/contrib/handlers/test_tensorboard_logger.py b/tests/ignite/contrib/handlers/test_tensorboard_logger.py
index e38e8089..70e2ba77 100644
--- a/tests/ignite/contrib/handlers/test_tensorboard_logger.py
+++ b/tests/ignite/contrib/handlers/test_tensorboard_logger.py
@@ -230,23 +230,30 @@ def test_weights_scalar_handler(dummy_model_factory):
model = dummy_model_factory(with_grads=True, with_frozen_layer=False)
- wrapper = WeightsScalarHandler(model)
- mock_logger = MagicMock(spec=TensorboardLogger)
- mock_logger.writer = MagicMock()
+ # define test wrapper to test with and without optional tag
+ def _test(tag=None):
+ wrapper = WeightsScalarHandler(model, tag=tag)
+ mock_logger = MagicMock(spec=TensorboardLogger)
+ mock_logger.writer = MagicMock()
- mock_engine = MagicMock()
- mock_engine.state = State()
- mock_engine.state.epoch = 5
+ mock_engine = MagicMock()
+ mock_engine.state = State()
+ mock_engine.state.epoch = 5
- wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
+ wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
- assert mock_logger.writer.add_scalar.call_count == 4
- mock_logger.writer.add_scalar.assert_has_calls([
- call("weights_norm/fc1/weight", 0.0, 5),
- call("weights_norm/fc1/bias", 0.0, 5),
- call("weights_norm/fc2/weight", 12.0, 5),
- call("weights_norm/fc2/bias", math.sqrt(12.0), 5),
- ], any_order=True)
+ tag_prefix = "{}/".format(tag) if tag else ""
+
+ assert mock_logger.writer.add_scalar.call_count == 4
+ mock_logger.writer.add_scalar.assert_has_calls([
+ call(tag_prefix + "weights_norm/fc1/weight", 0.0, 5),
+ call(tag_prefix + "weights_norm/fc1/bias", 0.0, 5),
+ call(tag_prefix + "weights_norm/fc2/weight", 12.0, 5),
+ call(tag_prefix + "weights_norm/fc2/bias", math.sqrt(12.0), 5),
+ ], any_order=True)
+
+ _test()
+ _test(tag="tag")
def test_weights_scalar_handler_frozen_layers(dummy_model_factory):
@@ -294,23 +301,30 @@ def test_weights_hist_handler(dummy_model_factory):
model = dummy_model_factory(with_grads=True, with_frozen_layer=False)
- wrapper = WeightsHistHandler(model)
- mock_logger = MagicMock(spec=TensorboardLogger)
- mock_logger.writer = MagicMock()
+ # define test wrapper to test with and without optional tag
+ def _test(tag=None):
+ wrapper = WeightsHistHandler(model, tag=tag)
+ mock_logger = MagicMock(spec=TensorboardLogger)
+ mock_logger.writer = MagicMock()
- mock_engine = MagicMock()
- mock_engine.state = State()
- mock_engine.state.epoch = 5
+ mock_engine = MagicMock()
+ mock_engine.state = State()
+ mock_engine.state.epoch = 5
- wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
+ wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
- assert mock_logger.writer.add_histogram.call_count == 4
- mock_logger.writer.add_histogram.assert_has_calls([
- call(tag="weights/fc1/weight", values=ANY, global_step=5),
- call(tag="weights/fc1/bias", values=ANY, global_step=5),
- call(tag="weights/fc2/weight", values=ANY, global_step=5),
- call(tag="weights/fc2/bias", values=ANY, global_step=5),
- ], any_order=True)
+ tag_prefix = "{}/".format(tag) if tag else ""
+
+ assert mock_logger.writer.add_histogram.call_count == 4
+ mock_logger.writer.add_histogram.assert_has_calls([
+ call(tag=tag_prefix + "weights/fc1/weight", values=ANY, global_step=5),
+ call(tag=tag_prefix + "weights/fc1/bias", values=ANY, global_step=5),
+ call(tag=tag_prefix + "weights/fc2/weight", values=ANY, global_step=5),
+ call(tag=tag_prefix + "weights/fc2/bias", values=ANY, global_step=5),
+ ], any_order=True)
+
+ _test()
+ _test(tag="tag")
def test_weights_hist_handler_frozen_layers(dummy_model_factory):
@@ -359,25 +373,32 @@ def test_grads_scalar_handler_wrong_setup():
def test_grads_scalar_handler(dummy_model_factory, norm_mock):
model = dummy_model_factory(with_grads=True, with_frozen_layer=False)
- wrapper = GradsScalarHandler(model, reduction=norm_mock)
- mock_logger = MagicMock(spec=TensorboardLogger)
- mock_logger.writer = MagicMock()
+ # define test wrapper to test with and without optional tag
+ def _test(tag=None):
+ wrapper = GradsScalarHandler(model, reduction=norm_mock, tag=tag)
+ mock_logger = MagicMock(spec=TensorboardLogger)
+ mock_logger.writer = MagicMock()
- mock_engine = MagicMock()
- mock_engine.state = State()
- mock_engine.state.epoch = 5
- norm_mock.reset_mock()
+ mock_engine = MagicMock()
+ mock_engine.state = State()
+ mock_engine.state.epoch = 5
+ norm_mock.reset_mock()
- wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
+ wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
- mock_logger.writer.add_scalar.assert_has_calls([
- call("grads_norm/fc1/weight", ANY, 5),
- call("grads_norm/fc1/bias", ANY, 5),
- call("grads_norm/fc2/weight", ANY, 5),
- call("grads_norm/fc2/bias", ANY, 5),
- ], any_order=True)
- assert mock_logger.writer.add_scalar.call_count == 4
- assert norm_mock.call_count == 4
+ tag_prefix = "{}/".format(tag) if tag else ""
+
+ mock_logger.writer.add_scalar.assert_has_calls([
+ call(tag_prefix + "grads_norm/fc1/weight", ANY, 5),
+ call(tag_prefix + "grads_norm/fc1/bias", ANY, 5),
+ call(tag_prefix + "grads_norm/fc2/weight", ANY, 5),
+ call(tag_prefix + "grads_norm/fc2/bias", ANY, 5),
+ ], any_order=True)
+ assert mock_logger.writer.add_scalar.call_count == 4
+ assert norm_mock.call_count == 4
+
+ _test()
+ _test(tag="tag")
def test_grads_scalar_handler_frozen_layers(dummy_model_factory, norm_mock):
@@ -424,23 +445,30 @@ def test_grads_hist_handler_wrong_setup():
def test_grads_hist_handler(dummy_model_factory):
model = dummy_model_factory(with_grads=True, with_frozen_layer=False)
- wrapper = GradsHistHandler(model)
- mock_logger = MagicMock(spec=TensorboardLogger)
- mock_logger.writer = MagicMock()
+ # define test wrapper to test with and without optional tag
+ def _test(tag=None):
+ wrapper = GradsHistHandler(model, tag=tag)
+ mock_logger = MagicMock(spec=TensorboardLogger)
+ mock_logger.writer = MagicMock()
- mock_engine = MagicMock()
- mock_engine.state = State()
- mock_engine.state.epoch = 5
+ mock_engine = MagicMock()
+ mock_engine.state = State()
+ mock_engine.state.epoch = 5
- wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
+ wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
- assert mock_logger.writer.add_histogram.call_count == 4
- mock_logger.writer.add_histogram.assert_has_calls([
- call(tag="grads/fc1/weight", values=ANY, global_step=5),
- call(tag="grads/fc1/bias", values=ANY, global_step=5),
- call(tag="grads/fc2/weight", values=ANY, global_step=5),
- call(tag="grads/fc2/bias", values=ANY, global_step=5),
- ], any_order=True)
+ tag_prefix = "{}/".format(tag) if tag else ""
+
+ assert mock_logger.writer.add_histogram.call_count == 4
+ mock_logger.writer.add_histogram.assert_has_calls([
+ call(tag=tag_prefix + "grads/fc1/weight", values=ANY, global_step=5),
+ call(tag=tag_prefix + "grads/fc1/bias", values=ANY, global_step=5),
+ call(tag=tag_prefix + "grads/fc2/weight", values=ANY, global_step=5),
+ call(tag=tag_prefix + "grads/fc2/bias", values=ANY, global_step=5),
+ ], any_order=True)
+
+ _test()
+ _test(tag="tag")
def test_grads_hist_frozen_layers(dummy_model_factory):
diff --git a/tests/ignite/contrib/handlers/test_visdom_logger.py b/tests/ignite/contrib/handlers/test_visdom_logger.py
index b26b6a3a..afa7f54b 100644
--- a/tests/ignite/contrib/handlers/test_visdom_logger.py
+++ b/tests/ignite/contrib/handlers/test_visdom_logger.py
@@ -390,34 +390,44 @@ def test_weights_scalar_handler():
model = DummyModel()
- wrapper = WeightsScalarHandler(model)
- mock_logger = MagicMock(spec=VisdomLogger)
- mock_logger.vis = MagicMock()
- mock_logger.executor = _DummyExecutor()
+ # define test wrapper to test with and without optional tag
+ def _test(tag=None):
+ wrapper = WeightsScalarHandler(model, tag=tag)
+ mock_logger = MagicMock(spec=VisdomLogger)
+ mock_logger.vis = MagicMock()
+ mock_logger.executor = _DummyExecutor()
- mock_engine = MagicMock()
- mock_engine.state = State()
- mock_engine.state.epoch = 5
-
- wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
+ mock_engine = MagicMock()
+ mock_engine.state = State()
+ mock_engine.state.epoch = 5
- assert mock_logger.vis.line.call_count == 4
- mock_logger.vis.line.assert_has_calls([
- call(X=[5, ], Y=[0.0, ], env=mock_logger.vis.env,
- win=None, update=None,
- opts=wrapper.windows["weights_norm/fc1/weight"]['opts'], name="weights_norm/fc1/weight"),
- call(X=[5, ], Y=[0.0, ], env=mock_logger.vis.env,
- win=None, update=None,
- opts=wrapper.windows["weights_norm/fc1/bias"]['opts'], name="weights_norm/fc1/bias"),
-
- call(X=[5, ], Y=[12.0, ], env=mock_logger.vis.env,
- win=None, update=None,
- opts=wrapper.windows["weights_norm/fc2/weight"]['opts'], name="weights_norm/fc2/weight"),
- call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
- win=None, update=None,
- opts=wrapper.windows["weights_norm/fc2/bias"]['opts'], name="weights_norm/fc2/bias"),
+ wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
- ], any_order=True)
+ tag_prefix = "{}/".format(tag) if tag else ""
+
+ assert mock_logger.vis.line.call_count == 4
+ mock_logger.vis.line.assert_has_calls([
+ call(X=[5, ], Y=[0.0, ], env=mock_logger.vis.env,
+ win=None, update=None,
+ opts=wrapper.windows[tag_prefix + "weights_norm/fc1/weight"]['opts'],
+ name=tag_prefix + "weights_norm/fc1/weight"),
+ call(X=[5, ], Y=[0.0, ], env=mock_logger.vis.env,
+ win=None, update=None,
+ opts=wrapper.windows[tag_prefix + "weights_norm/fc1/bias"]['opts'],
+ name=tag_prefix + "weights_norm/fc1/bias"),
+
+ call(X=[5, ], Y=[12.0, ], env=mock_logger.vis.env,
+ win=None, update=None,
+ opts=wrapper.windows[tag_prefix + "weights_norm/fc2/weight"]['opts'],
+ name=tag_prefix + "weights_norm/fc2/weight"),
+ call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
+ win=None, update=None,
+ opts=wrapper.windows[tag_prefix + "weights_norm/fc2/bias"]['opts'],
+ name=tag_prefix + "weights_norm/fc2/bias"),
+ ], any_order=True)
+
+ _test()
+ _test(tag="tag")
def test_weights_scalar_handler_custom_reduction():
@@ -502,35 +512,44 @@ def test_grads_scalar_handler():
def norm(x):
return 0.0
- wrapper = GradsScalarHandler(model, reduction=norm)
- mock_logger = MagicMock(spec=VisdomLogger)
- mock_logger.vis = MagicMock()
- mock_logger.executor = _DummyExecutor()
+ # define test wrapper to test with and without optional tag
+ def _test(tag=None):
+ wrapper = GradsScalarHandler(model, reduction=norm, tag=tag)
+ mock_logger = MagicMock(spec=VisdomLogger)
+ mock_logger.vis = MagicMock()
+ mock_logger.executor = _DummyExecutor()
- mock_engine = MagicMock()
- mock_engine.state = State()
- mock_engine.state.epoch = 5
-
- wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
+ mock_engine = MagicMock()
+ mock_engine.state = State()
+ mock_engine.state.epoch = 5
- assert mock_logger.vis.line.call_count == 4
-
- mock_logger.vis.line.assert_has_calls([
- call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
- win=None, update=None,
- opts=wrapper.windows["grads_norm/fc1/weight"]['opts'], name="grads_norm/fc1/weight"),
- call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
- win=None, update=None,
- opts=wrapper.windows["grads_norm/fc1/bias"]['opts'], name="grads_norm/fc1/bias"),
-
- call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
- win=None, update=None,
- opts=wrapper.windows["grads_norm/fc2/weight"]['opts'], name="grads_norm/fc2/weight"),
- call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
- win=None, update=None,
- opts=wrapper.windows["grads_norm/fc2/bias"]['opts'], name="grads_norm/fc2/bias"),
+ wrapper(mock_engine, mock_logger, Events.EPOCH_STARTED)
- ], any_order=True)
+ tag_prefix = "{}/".format(tag) if tag else ""
+
+ assert mock_logger.vis.line.call_count == 4
+ mock_logger.vis.line.assert_has_calls([
+ call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
+ win=None, update=None,
+ opts=wrapper.windows[tag_prefix + "grads_norm/fc1/weight"]['opts'],
+ name=tag_prefix + "grads_norm/fc1/weight"),
+ call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
+ win=None, update=None,
+ opts=wrapper.windows[tag_prefix + "grads_norm/fc1/bias"]['opts'],
+ name=tag_prefix + "grads_norm/fc1/bias"),
+
+ call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
+ win=None, update=None,
+ opts=wrapper.windows[tag_prefix + "grads_norm/fc2/weight"]['opts'],
+ name=tag_prefix + "grads_norm/fc2/weight"),
+ call(X=[5, ], Y=ANY, env=mock_logger.vis.env,
+ win=None, update=None,
+ opts=wrapper.windows[tag_prefix + "grads_norm/fc2/bias"]['opts'],
+ name=tag_prefix + "grads_norm/fc2/bias"),
+ ], any_order=True)
+
+ _test()
+ _test(tag="tag")
def test_integration_no_server():
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 3
} | 0.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"numpy",
"mock",
"pytest",
"codecov",
"pytest-cov",
"matplotlib",
"pandas",
"gym",
"tqdm",
"scikit-learn",
"tensorboardX",
"visdom",
"polyaxon-client",
"mlflow"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alembic==1.7.7
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
charset-normalizer==2.0.12
click==7.1.2
cloudpickle==2.2.1
codecov==2.1.13
coverage==6.2
cycler==0.11.0
databricks-cli==0.17.8
dataclasses==0.8
decorator==4.4.2
docker==5.0.3
entrypoints==0.4
Flask==1.1.4
gitdb==4.0.9
GitPython==3.1.18
greenlet==2.0.2
gunicorn==21.2.0
gym==0.26.2
gym-notices==0.0.8
hestia==0.6.0
idna==3.10
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
importlib-resources==5.4.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
itsdangerous==1.1.0
Jinja2==2.10.3
joblib==1.1.1
jsonpatch==1.32
jsonpointer==2.3
kiwisolver==1.3.1
Mako==1.1.6
MarkupSafe==2.0.1
marshmallow==3.0.0rc5
matplotlib==3.3.4
mlflow==1.23.1
mock==5.2.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
networkx==2.5.1
numpy==1.19.5
oauthlib==3.2.2
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pandas==1.1.5
Pillow==8.4.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
polyaxon-client==0.6.1
polyaxon-schemas==0.6.1
polystores==0.2.5
prometheus-client==0.17.1
prometheus_flask_exporter==0.23.2
protobuf==4.21.0
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
PyJWT==2.4.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-cov==4.0.0
python-dateutil==2.9.0.post0
-e git+https://github.com/pytorch/ignite.git@024667f9265610e718a6a974e7c4f6b532ea6fd3#egg=pytorch_ignite
pytz==2025.2
PyYAML==6.0.1
querystring-parser==1.2.4
requests==2.27.1
requests-toolbelt==1.0.0
rhea==0.5.5
scikit-learn==0.24.2
scipy==1.5.4
six==1.17.0
smmap==5.0.0
SQLAlchemy==1.4.54
sqlparse==0.4.4
tabulate==0.8.10
tensorboardX==2.6.2.2
threadpoolctl==3.1.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
torch==1.10.2
tornado==6.1
tqdm==4.64.1
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
visdom==0.2.4
websocket-client==1.3.1
Werkzeug==1.0.1
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: ignite
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- alembic==1.7.7
- charset-normalizer==2.0.12
- click==7.1.2
- cloudpickle==2.2.1
- codecov==2.1.13
- coverage==6.2
- cycler==0.11.0
- databricks-cli==0.17.8
- dataclasses==0.8
- decorator==4.4.2
- docker==5.0.3
- entrypoints==0.4
- flask==1.1.4
- gitdb==4.0.9
- gitpython==3.1.18
- greenlet==2.0.2
- gunicorn==21.2.0
- gym==0.26.2
- gym-notices==0.0.8
- hestia==0.6.0
- idna==3.10
- importlib-resources==5.4.0
- itsdangerous==1.1.0
- jinja2==2.10.3
- joblib==1.1.1
- jsonpatch==1.32
- jsonpointer==2.3
- kiwisolver==1.3.1
- mako==1.1.6
- markupsafe==2.0.1
- marshmallow==3.0.0rc5
- matplotlib==3.3.4
- mlflow==1.23.1
- mock==5.2.0
- networkx==2.5.1
- numpy==1.19.5
- oauthlib==3.2.2
- pandas==1.1.5
- pillow==8.4.0
- polyaxon-client==0.6.1
- polyaxon-schemas==0.6.1
- polystores==0.2.5
- prometheus-client==0.17.1
- prometheus-flask-exporter==0.23.2
- protobuf==4.21.0
- psutil==7.0.0
- pyjwt==2.4.0
- pytest-cov==4.0.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyyaml==6.0.1
- querystring-parser==1.2.4
- requests==2.27.1
- requests-toolbelt==1.0.0
- rhea==0.5.5
- scikit-learn==0.24.2
- scipy==1.5.4
- six==1.17.0
- smmap==5.0.0
- sqlalchemy==1.4.54
- sqlparse==0.4.4
- tabulate==0.8.10
- tensorboardx==2.6.2.2
- threadpoolctl==3.1.0
- tomli==1.2.3
- torch==1.10.2
- tornado==6.1
- tqdm==4.64.1
- urllib3==1.26.20
- visdom==0.2.4
- websocket-client==1.3.1
- werkzeug==1.0.1
prefix: /opt/conda/envs/ignite
| [
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_weights_scalar_handler",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_weights_hist_handler",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_grads_scalar_handler",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_grads_hist_handler",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_weights_scalar_handler",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_grads_scalar_handler"
] | [] | [
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_optimizer_params_handler_wrong_setup",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_optimizer_params",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_output_handler_with_wrong_logger_type",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_output_handler_output_transform",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_output_handler_metric_names",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_output_handler_both",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_output_handler_with_wrong_global_step_transform_output",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_output_handler_with_global_step_transform",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_weights_scalar_handler_wrong_setup",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_weights_scalar_handler_frozen_layers",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_weights_hist_handler_wrong_setup",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_weights_hist_handler_frozen_layers",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_grads_scalar_handler_wrong_setup",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_grads_scalar_handler_frozen_layers",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_grads_hist_handler_wrong_setup",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_grads_hist_frozen_layers",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_integration",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_integration_as_context_manager",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_no_tensorboardX",
"tests/ignite/contrib/handlers/test_tensorboard_logger.py::test_init_typeerror_exception",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_optimizer_params_handler_wrong_setup",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_optimizer_params",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_output_handler_with_wrong_logger_type",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_output_handler_output_transform",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_output_handler_metric_names",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_output_handler_both",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_output_handler_with_wrong_global_step_transform_output",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_output_handler_with_global_step_transform",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_weights_scalar_handler_wrong_setup",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_weights_scalar_handler_custom_reduction",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_grads_scalar_handler_wrong_setup",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_integration_no_server",
"tests/ignite/contrib/handlers/test_visdom_logger.py::test_no_visdom"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,328 | 2,416 | [
"ignite/contrib/handlers/base_logger.py",
"ignite/contrib/handlers/tensorboard_logger.py",
"ignite/contrib/handlers/visdom_logger.py"
] |
bst-mug__acres-109 | 1e9170794b5263d9add5d0ab6891754b5dac1bca | 2019-09-05 11:28:03 | 1e9170794b5263d9add5d0ab6891754b5dac1bca | diff --git a/acres/rater/full.py b/acres/rater/full.py
index 0d03352..f9421dd 100644
--- a/acres/rater/full.py
+++ b/acres/rater/full.py
@@ -1,6 +1,7 @@
"""
Rating submodule for full form checks.
"""
+from acres.util import acronym as acro_util
from acres.util import functions
@@ -55,6 +56,19 @@ def _has_capitals(full: str) -> bool:
return True
+def _contain_acronym(full: str) -> bool:
+ """
+    Checks whether the full form contains a token that is itself an acronym.
+    :param full: full form candidate to check.
+    :return: True if any whitespace-separated token is an acronym.
+ """
+ words = full.split()
+ for word in words:
+ if acro_util.is_acronym(word):
+ return True
+ return False
+
+
def _compute_full_valid(full: str) -> int:
"""
[For internal use only] Compute all checks on full forms.
@@ -77,6 +91,9 @@ def _compute_full_valid(full: str) -> int:
if not _has_capitals(full):
ret += 8
+ if _contain_acronym(full):
+ ret += 16
+
return ret
| [Filtering] Expansion should not contain another acronym
E.g. ACC => "ACI und ACE" has been provided by the n-gram model, but not filtered out. | bst-mug/acres | diff --git a/tests/rater/test_full.py b/tests/rater/test_full.py
index 55a18aa..eeb713a 100644
--- a/tests/rater/test_full.py
+++ b/tests/rater/test_full.py
@@ -1,6 +1,16 @@
from acres.rater import full
+def test__contain_acronym():
+ # Baseline
+ assert not full._contain_acronym("Elektrokardiogramm")
+ assert full._contain_acronym("VSM Bypass")
+ assert full._contain_acronym("Gamma GT")
+
+ # Only acronym
+ assert full._contain_acronym("EKG")
+
+
def test__compute_full_valid():
# Full form has parenthesis
assert 1 == full._compute_full_valid("Abcde(fghi")
@@ -12,4 +22,7 @@ def test__compute_full_valid():
assert 4 == full._compute_full_valid("Auf das")
# Full form has no capitals
- assert 8 == full._compute_full_valid("ambulanz")
\ No newline at end of file
+ assert 8 == full._compute_full_valid("ambulanz")
+
+    # Full form contains an acronym
+ assert 16 == full._compute_full_valid("VSM Bypass")
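The bit flags exercised above (1 = parenthesis, 2 = comma, 4 = leading stopword, 8 = no capitals, 16 = contains acronym) compose as a simple bitmask; a toy sketch, where `is_acronym` is a stand-in heuristic and not acres' actual implementation:

```python
def is_acronym(word, max_length=7):
    # Stand-in heuristic: a short token with at least two capital letters.
    return len(word) <= max_length and sum(c.isupper() for c in word) >= 2

def contains_acronym(full):
    return any(is_acronym(w) for w in full.split())

def rate_full_form(full):
    score = 0
    if "(" in full or ")" in full:
        score += 1
    if "," in full:
        score += 2
    if full.lower().startswith(("auf ", "der ", "die ", "das ")):
        score += 4   # leading stopword (toy list)
    if full == full.lower():
        score += 8   # no capitals at all
    if contains_acronym(full):
        score += 16
    return score     # 0 means the full form passed every check

print(rate_full_form("VSM Bypass"))  # 16
print(rate_full_form("ambulanz"))    # 8
```

Because each check owns one bit, a caller can test individual failures with `score & flag` instead of comparing against a single aggregate value.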
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 1
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": null,
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/bst-mug/acres.git@1e9170794b5263d9add5d0ab6891754b5dac1bca#egg=acres
alabaster==0.7.13
astroid==2.15.8
Babel==2.14.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
click==8.1.8
coverage==7.2.7
cycler==0.11.0
dill==0.3.7
docutils==0.19
exceptiongroup==1.2.2
fonttools==4.38.0
gensim==4.2.0
html2text==2020.1.16
idna==3.10
imagesize==1.4.1
importlib-metadata==6.7.0
iniconfig==2.0.0
isort==5.11.5
Jinja2==3.1.6
joblib==1.3.2
kiwisolver==1.4.5
lazy-object-proxy==1.9.0
Levenshtein==0.23.0
MarkupSafe==2.1.5
matplotlib==3.5.3
mccabe==0.7.0
mypy==1.4.1
mypy-extensions==1.0.0
nltk==3.8.1
numpy==1.21.6
packaging==24.0
Pillow==9.5.0
platformdirs==4.0.0
pluggy==1.2.0
Pygments==2.17.2
pylint==2.17.7
pyparsing==3.1.4
pytest==7.4.4
pytest-cov==4.1.0
python-coveralls==2.9.3
python-dateutil==2.9.0.post0
python-Levenshtein==0.23.0
pytz==2025.2
PyYAML==6.0.1
rapidfuzz==3.4.0
regex==2024.4.16
requests==2.31.0
scikit-learn==1.0.2
scipy==1.7.3
six==1.17.0
smart-open==7.1.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinx-autodoc-typehints==1.23.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
threadpoolctl==3.1.0
tomli==2.0.1
tomlkit==0.12.5
tqdm==4.67.1
typed-ast==1.5.5
typing_extensions==4.7.1
urllib3==2.0.7
wrapt==1.16.0
zipp==3.15.0
| name: acres
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- astroid==2.15.8
- babel==2.14.0
- charset-normalizer==3.4.1
- click==8.1.8
- coverage==7.2.7
- cycler==0.11.0
- dill==0.3.7
- docutils==0.19
- exceptiongroup==1.2.2
- fonttools==4.38.0
- gensim==4.2.0
- html2text==2020.1.16
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- isort==5.11.5
- jinja2==3.1.6
- joblib==1.3.2
- kiwisolver==1.4.5
- lazy-object-proxy==1.9.0
- levenshtein==0.23.0
- markupsafe==2.1.5
- matplotlib==3.5.3
- mccabe==0.7.0
- mypy==1.4.1
- mypy-extensions==1.0.0
- nltk==3.8.1
- numpy==1.21.6
- packaging==24.0
- pillow==9.5.0
- platformdirs==4.0.0
- pluggy==1.2.0
- pygments==2.17.2
- pylint==2.17.7
- pyparsing==3.1.4
- pytest==7.4.4
- pytest-cov==4.1.0
- python-coveralls==2.9.3
- python-dateutil==2.9.0.post0
- python-levenshtein==0.23.0
- pytz==2025.2
- pyyaml==6.0.1
- rapidfuzz==3.4.0
- regex==2024.4.16
- requests==2.31.0
- scikit-learn==1.0.2
- scipy==1.7.3
- six==1.17.0
- smart-open==7.1.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinx-autodoc-typehints==1.23.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- threadpoolctl==3.1.0
- tomli==2.0.1
- tomlkit==0.12.5
- tqdm==4.67.1
- typed-ast==1.5.5
- typing-extensions==4.7.1
- urllib3==2.0.7
- wrapt==1.16.0
- zipp==3.15.0
prefix: /opt/conda/envs/acres
| [
"tests/rater/test_full.py::test__contain_acronym",
"tests/rater/test_full.py::test__compute_full_valid"
] | [] | [] | [] | Apache License 2.0 | 5,334 | 285 | [
"acres/rater/full.py"
] |
|
google__mobly-626 | 4541f3a65fe6631fe3e9621b620472ce0ce66602 | 2019-09-07 02:33:51 | 2587dce6ac5c762e02503115fc979d0607acb9eb | diff --git a/mobly/controllers/android_device.py b/mobly/controllers/android_device.py
index cb9ab52..c3197ca 100644
--- a/mobly/controllers/android_device.py
+++ b/mobly/controllers/android_device.py
@@ -18,6 +18,7 @@ from past.builtins import basestring
import contextlib
import logging
import os
+import re
import shutil
import time
@@ -52,8 +53,10 @@ CACHED_SYSTEM_PROPS = [
'ro.build.version.codename',
'ro.build.version.sdk',
'ro.build.product',
+ 'ro.build.characteristics',
'ro.debuggable',
'ro.product.name',
+ 'ro.hardware',
]
# Keys for attributes in configs that alternate the controller module behavior.
@@ -78,6 +81,9 @@ Error = errors.Error
DeviceError = errors.DeviceError
SnippetError = snippet_management_service.Error
+# Regex to heuristically determine if the device is an emulator.
+EMULATOR_SERIAL_REGEX = re.compile(r'emulator-\d+')
+
def create(configs):
"""Creates AndroidDevice controller objects.
@@ -338,7 +344,6 @@ def get_devices(ads, **kwargs):
Raises:
Error: No devices are matched.
"""
-
def _get_device_filter(ad):
for k, v in kwargs.items():
if not hasattr(ad, k):
@@ -443,7 +448,6 @@ class AndroidDevice(object):
services: ServiceManager, the manager of long-running services on the
device.
"""
-
def __init__(self, serial=''):
self._serial = str(serial)
# logging.log_path only exists when this is used in an Mobly test run.
@@ -763,8 +767,11 @@ class AndroidDevice(object):
info['build_version_sdk'] = build_info.get('ro.build.version.sdk',
'')
info['build_product'] = build_info.get('ro.build.product', '')
+ info['build_characteristics'] = build_info.get(
+ 'ro.build.characteristics', '')
info['debuggable'] = build_info.get('ro.debuggable', '')
info['product_name'] = build_info.get('ro.product.name', '')
+ info['hardware'] = build_info.get('ro.hardware', '')
self._build_info = info
return info
return self._build_info
@@ -810,6 +817,31 @@ class AndroidDevice(object):
return model
return self.build_info['product_name'].lower()
+ @property
+ def is_emulator(self):
+ """Whether this device is probably an emulator.
+
+ Returns:
+ True if this is probably an emulator.
+ """
+ if EMULATOR_SERIAL_REGEX.match(self.serial):
+ # If the device's serial follows 'emulator-dddd', then it's almost
+ # certainly an emulator.
+ return True
+ elif self.build_info['build_characteristics'] == 'emulator':
+ # If the device says that it's an emulator, then it's probably an
+ # emulator although some real devices apparently report themselves
+ # as emulators in addition to other things, so only return True on
+ # an exact match.
+ return True
+ elif self.build_info['hardware'] in ['ranchu', 'goldfish']:
+ # Ranchu and Goldfish are the hardware properties that the AOSP
+ # emulators report, so if the device says it's an AOSP emulator, it
+ # probably is one.
+ return True
+ else:
+ return False
+
def load_config(self, config):
"""Add attributes to the AndroidDevice object based on config.
@@ -1049,7 +1081,6 @@ class AndroidDeviceLoggerAdapter(logging.LoggerAdapter):
Then each log line added by my_log will have a prefix
'[AndroidDevice|<tag>]'
"""
-
def process(self, msg, kwargs):
msg = _DEBUG_PREFIX_TEMPLATE % (self.extra['tag'], msg)
return (msg, kwargs)
| Add is_emulator property to AndroidDevice | google/mobly | diff --git a/tests/lib/mock_android_device.py b/tests/lib/mock_android_device.py
index 959623a..28c90e6 100755
--- a/tests/lib/mock_android_device.py
+++ b/tests/lib/mock_android_device.py
@@ -30,6 +30,8 @@ DEFAULT_MOCK_PROPERTIES = {
'ro.product.name': 'FakeModel',
'ro.debuggable': '1',
'sys.boot_completed': "1",
+ 'ro.build.characteristics': 'emulator,phone',
+ 'ro.hardware': 'marlin',
}
@@ -76,7 +78,6 @@ def list_adb_devices():
class MockAdbProxy(object):
"""Mock class that swaps out calls to adb with mock calls."""
-
def __init__(self,
serial='',
fail_br=False,
@@ -115,13 +116,16 @@ class MockAdbProxy(object):
packages = self.installed_packages + [
package for package, _, _ in self.instrumented_packages
]
- return bytes('\n'.join(
- ['package:%s' % package for package in packages]), 'utf-8')
+ return bytes(
+ '\n'.join(['package:%s' % package for package in packages]),
+ 'utf-8')
elif 'pm list instrumentation' in params:
- return bytes('\n'.join([
- 'instrumentation:%s/%s (target=%s)' % (package, runner, target)
- for package, runner, target in self.instrumented_packages
- ]), 'utf-8')
+ return bytes(
+ '\n'.join([
+ 'instrumentation:%s/%s (target=%s)' %
+ (package, runner, target)
+ for package, runner, target in self.instrumented_packages
+ ]), 'utf-8')
elif 'which' in params:
return b''
@@ -144,7 +148,6 @@ class MockAdbProxy(object):
"""All calls to the none-existent functions in adb proxy would
simply return the adb command string.
"""
-
def adb_call(*args, **kwargs):
arg_str = ' '.join(str(elem) for elem in args)
return arg_str
@@ -154,7 +157,6 @@ class MockAdbProxy(object):
class MockFastbootProxy(object):
"""Mock class that swaps out calls to adb with mock calls."""
-
def __init__(self, serial):
self.serial = serial
diff --git a/tests/mobly/controllers/android_device_test.py b/tests/mobly/controllers/android_device_test.py
index 4438490..f3f8e0c 100755
--- a/tests/mobly/controllers/android_device_test.py
+++ b/tests/mobly/controllers/android_device_test.py
@@ -44,7 +44,6 @@ class AndroidDeviceTest(unittest.TestCase):
"""This test class has unit tests for the implementation of everything
under mobly.controllers.android_device.
"""
-
def setUp(self):
# Set log_path to logging since mobly logger setup is not called.
if not hasattr(logging, 'log_path'):
@@ -280,8 +279,10 @@ class AndroidDeviceTest(unittest.TestCase):
self.assertEqual(build_info['build_version_codename'], 'Z')
self.assertEqual(build_info['build_version_sdk'], '28')
self.assertEqual(build_info['build_product'], 'FakeModel')
+ self.assertEqual(build_info['build_characteristics'], 'emulator,phone')
self.assertEqual(build_info['product_name'], 'FakeModel')
self.assertEqual(build_info['debuggable'], '1')
+ self.assertEqual(build_info['hardware'], 'marlin')
self.assertEqual(len(build_info),
len(android_device.CACHED_SYSTEM_PROPS))
@@ -303,8 +304,10 @@ class AndroidDeviceTest(unittest.TestCase):
self.assertEqual(build_info['build_version_codename'], '')
self.assertEqual(build_info['build_version_sdk'], '')
self.assertEqual(build_info['build_product'], '')
+ self.assertEqual(build_info['build_characteristics'], '')
self.assertEqual(build_info['product_name'], '')
self.assertEqual(build_info['debuggable'], '')
+ self.assertEqual(build_info['hardware'], '')
@mock.patch('mobly.controllers.android_device_lib.adb.AdbProxy',
return_value=mock_android_device.MockAdbProxy('1'))
@@ -384,6 +387,95 @@ class AndroidDeviceTest(unittest.TestCase):
self.assertTrue(isinstance(ad.serial, str))
yaml.safe_dump(ad.serial)
+ @mock.patch('mobly.controllers.android_device_lib.adb.AdbProxy',
+ return_value=mock_android_device.MockAdbProxy('1'))
+ @mock.patch('mobly.controllers.android_device_lib.fastboot.FastbootProxy',
+ return_value=mock_android_device.MockFastbootProxy('1'))
+ def test_AndroidDevice_is_emulator_when_realish_device(
+ self, MockFastboot, MockAdbProxy):
+ ad = android_device.AndroidDevice(serial='1')
+ self.assertFalse(ad.is_emulator)
+
+ @mock.patch('mobly.controllers.android_device_lib.adb.AdbProxy',
+ return_value=mock_android_device.MockAdbProxy('localhost:123'))
+ @mock.patch(
+ 'mobly.controllers.android_device_lib.fastboot.FastbootProxy',
+ return_value=mock_android_device.MockFastbootProxy('localhost:123'))
+ def test_AndroidDevice_is_emulator_when_local_networked_device(
+ self, MockFastboot, MockAdbProxy):
+ # Although these devices are usually emulators, there might be a reason
+ # to do this with a real device.
+ ad = android_device.AndroidDevice(serial='localhost:123')
+ self.assertFalse(ad.is_emulator)
+
+ @mock.patch(
+ 'mobly.controllers.android_device_lib.adb.AdbProxy',
+ return_value=mock_android_device.MockAdbProxy('example.com:123'))
+ @mock.patch(
+ 'mobly.controllers.android_device_lib.fastboot.FastbootProxy',
+ return_value=mock_android_device.MockFastbootProxy('example:123'))
+ def test_AndroidDevice_is_emulator_when_remote_networked_device(
+ self, MockFastboot, MockAdbProxy):
+ ad = android_device.AndroidDevice(serial='example.com:123')
+ self.assertFalse(ad.is_emulator)
+
+ @mock.patch('mobly.controllers.android_device_lib.adb.AdbProxy',
+ return_value=mock_android_device.MockAdbProxy(
+ 'localhost:5554',
+ mock_properties={
+ 'ro.hardware': 'ranchu',
+ 'ro.build.id': 'AB42',
+ 'ro.build.type': 'userdebug',
+ }))
+ @mock.patch(
+ 'mobly.controllers.android_device_lib.fastboot.FastbootProxy',
+ return_value=mock_android_device.MockFastbootProxy('localhost:5554'))
+ def test_AndroidDevice_is_emulator_when_ranchu_device(
+ self, MockFastboot, MockAdbProxy):
+ ad = android_device.AndroidDevice(serial='localhost:5554')
+ self.assertTrue(ad.is_emulator)
+
+ @mock.patch('mobly.controllers.android_device_lib.adb.AdbProxy',
+ return_value=mock_android_device.MockAdbProxy(
+ '1',
+ mock_properties={
+ 'ro.build.id': 'AB42',
+ 'ro.build.type': 'userdebug',
+ 'ro.hardware': 'goldfish',
+ }))
+ @mock.patch('mobly.controllers.android_device_lib.fastboot.FastbootProxy',
+ return_value=mock_android_device.MockFastbootProxy('1'))
+ def test_AndroidDevice_is_emulator_when_goldfish_device(
+ self, MockFastboot, MockAdbProxy):
+ ad = android_device.AndroidDevice(serial='1')
+ self.assertTrue(ad.is_emulator)
+
+ @mock.patch('mobly.controllers.android_device_lib.adb.AdbProxy',
+ return_value=mock_android_device.MockAdbProxy(
+ 'example.com:123',
+ mock_properties={
+ 'ro.build.id': 'AB42',
+ 'ro.build.type': 'userdebug',
+ 'ro.build.characteristics': 'emulator',
+ }))
+ @mock.patch(
+ 'mobly.controllers.android_device_lib.fastboot.FastbootProxy',
+ return_value=mock_android_device.MockFastbootProxy('example.com:123'))
+ def test_AndroidDevice_is_emulator_when_emulator_characteristic(
+ self, MockFastboot, MockAdbProxy):
+ ad = android_device.AndroidDevice(serial='example.com:123')
+ self.assertTrue(ad.is_emulator)
+
+ @mock.patch('mobly.controllers.android_device_lib.adb.AdbProxy',
+ return_value=mock_android_device.MockAdbProxy('emulator-5554'))
+ @mock.patch(
+ 'mobly.controllers.android_device_lib.fastboot.FastbootProxy',
+ return_value=mock_android_device.MockFastbootProxy('emulator-5554'))
+ def test_AndroidDevice_is_emulator_when_emulator_serial(
+ self, MockFastboot, MockAdbProxy):
+ ad = android_device.AndroidDevice(serial='emulator-5554')
+ self.assertTrue(ad.is_emulator)
+
@mock.patch('mobly.controllers.android_device_lib.adb.AdbProxy',
return_value=mock_android_device.MockAdbProxy(1))
@mock.patch('mobly.controllers.android_device_lib.fastboot.FastbootProxy',
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 1
} | 1.9 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"mock",
"pytz"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.8",
"reqs_path": [
"docs/requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
babel==2.17.0
certifi==2025.1.31
charset-normalizer==3.4.1
commonmark==0.9.1
docutils==0.20.1
exceptiongroup==1.2.2
future==1.0.0
idna==3.10
imagesize==1.4.1
importlib_metadata==8.5.0
iniconfig==2.1.0
Jinja2==3.1.6
MarkupSafe==2.1.5
-e git+https://github.com/google/mobly.git@4541f3a65fe6631fe3e9621b620472ce0ce66602#egg=mobly
mock==5.2.0
packaging==24.2
pluggy==1.5.0
portpicker==1.6.0
psutil==7.0.0
Pygments==2.19.1
pyserial==3.5
pytest==8.3.5
pytz==2025.2
PyYAML==6.0.2
recommonmark==0.7.1
requests==2.32.3
snowballstemmer==2.2.0
Sphinx==7.1.2
sphinxcontrib-applehelp==1.0.4
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
timeout-decorator==0.5.0
tomli==2.2.1
urllib3==2.2.3
zipp==3.20.2
| name: mobly
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=24.2=py38h06a4308_0
- python=3.8.20=he870216_0
- readline=8.2=h5eee18b_0
- setuptools=75.1.0=py38h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.44.0=py38h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- babel==2.17.0
- certifi==2025.1.31
- charset-normalizer==3.4.1
- commonmark==0.9.1
- docutils==0.20.1
- exceptiongroup==1.2.2
- future==1.0.0
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==8.5.0
- iniconfig==2.1.0
- jinja2==3.1.6
- markupsafe==2.1.5
- mock==5.2.0
- packaging==24.2
- pluggy==1.5.0
- portpicker==1.6.0
- psutil==7.0.0
- pygments==2.19.1
- pyserial==3.5
- pytest==8.3.5
- pytz==2025.2
- pyyaml==6.0.2
- recommonmark==0.7.1
- requests==2.32.3
- snowballstemmer==2.2.0
- sphinx==7.1.2
- sphinxcontrib-applehelp==1.0.4
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- timeout-decorator==0.5.0
- tomli==2.2.1
- urllib3==2.2.3
- zipp==3.20.2
prefix: /opt/conda/envs/mobly
| [
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_build_info",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_build_info_with_minimal_properties",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_emulator_when_emulator_characteristic",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_emulator_when_emulator_serial",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_emulator_when_goldfish_device",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_emulator_when_local_networked_device",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_emulator_when_ranchu_device",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_emulator_when_realish_device",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_emulator_when_remote_networked_device"
] | [] | [
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_build_info_cached",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_change_log_path",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_change_log_path_no_log_exists",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_change_log_path_with_existing_file",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_change_log_path_with_service",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_debug_tag",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_device_info",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_getattr",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_handle_reboot",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_handle_reboot_changes_build_info",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_handle_reboot_changes_build_info_with_caching",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_handle_usb_disconnect",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_instantiation",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_rootable_when_user_device",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_is_rootable_when_userdebug_device",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_load_snippet",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_load_snippet_dup_attribute_name",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_load_snippet_dup_package",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_load_snippet_dup_snippet_name",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_load_snippet_start_app_fails",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_serial_is_valid",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_snippet_cleanup",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_take_bug_report",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_take_bug_report_fail",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_take_bug_report_fallback",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_take_bug_report_with_destination",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_take_bug_report_with_only_begin_time",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_take_bug_report_with_only_test_name",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_take_bug_report_with_positional_args",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_take_bug_report_without_args",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_unload_snippet",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_update_serial",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_update_serial_with_service_running",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_wait_for_completion_completed",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_wait_for_completion_never_boot",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_AndroidDevice_with_reserved_character_in_serial_log_path",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_create_with_dict_list",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_create_with_empty_config",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_create_with_no_valid_config",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_create_with_not_list_config",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_create_with_pickup_all",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_create_with_string_list",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_create_with_usb_id",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_get_device_no_match",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_get_device_success_with_serial",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_get_device_success_with_serial_and_extra_field",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_get_device_too_many_matches",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_get_devices_no_match",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_get_devices_success_with_extra_field",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_start_services_on_ads",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_start_services_on_ads_skip_logcat",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_take_bug_reports",
"tests/mobly/controllers/android_device_test.py::AndroidDeviceTest::test_take_bug_reports_with_none_values"
] | [] | Apache License 2.0 | 5,346 | 923 | [
"mobly/controllers/android_device.py"
] |
|
m0nhawk__grafana_api-43 | b5f1266273fb836580b03224456843b043089814 | 2019-09-09 02:16:17 | badb7c06944ce3f708050a0a14dea69f105d3a87 | diff --git a/grafana_api/grafana_api.py b/grafana_api/grafana_api.py
index e965768..bb7cc5e 100644
--- a/grafana_api/grafana_api.py
+++ b/grafana_api/grafana_api.py
@@ -56,9 +56,11 @@ class GrafanaAPI:
url_path_prefix="",
protocol="http",
verify=True,
+ timeout=5.0,
):
self.auth = auth
self.verify = verify
+ self.timeout = timeout
self.url_host = host
self.url_port = port
self.url_path_prefix = url_path_prefix
@@ -92,7 +94,7 @@ class GrafanaAPI:
__url = "%s%s" % (self.url, url)
runner = getattr(self.s, item.lower())
r = runner(
- __url, json=json, headers=headers, auth=self.auth, verify=self.verify
+ __url, json=json, headers=headers, auth=self.auth, verify=self.verify, timeout=self.timeout
)
if 500 <= r.status_code < 600:
raise GrafanaServerError(
diff --git a/grafana_api/grafana_face.py b/grafana_api/grafana_face.py
index dcc8667..f9fe53b 100644
--- a/grafana_api/grafana_face.py
+++ b/grafana_api/grafana_face.py
@@ -24,6 +24,7 @@ class GrafanaFace:
url_path_prefix="",
protocol="http",
verify=True,
+ timeout=5.0,
):
self.api = GrafanaAPI(
auth,
@@ -32,6 +33,7 @@ class GrafanaFace:
url_path_prefix=url_path_prefix,
protocol=protocol,
verify=verify,
+ timeout=timeout,
)
self.admin = Admin(self.api)
self.dashboard = Dashboard(self.api)
| Missing timeouts
**Describe the bug**
The requests never time out, which is not a good idea in general
**Expected behavior**
The user should be able to set one and there should be a default (maybe 10s) | m0nhawk/grafana_api | diff --git a/test/test_grafana.py b/test/test_grafana.py
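The requested behavior amounts to storing a default timeout and forwarding it on every outgoing call; a minimal sketch of the idea (the class name and helper below are hypothetical, not the library's actual API):

```python
class TimeoutClient:
    """Hypothetical sketch: keep a default timeout and apply it per call."""

    def __init__(self, timeout=5.0):
        # A finite default so a hung server cannot block the caller forever.
        self.timeout = timeout

    def request_kwargs(self, **overrides):
        # Every outgoing request gets the configured timeout unless the
        # caller overrides it explicitly.
        kwargs = {'timeout': self.timeout}
        kwargs.update(overrides)
        return kwargs
```

In the patch above the value is stored on `GrafanaAPI` and passed straight to `requests`, which raises `requests.exceptions.Timeout` when it expires — the case the new test provokes with a tiny `timeout=0.0001`.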
index d4affad..1c8ede3 100644
--- a/test/test_grafana.py
+++ b/test/test_grafana.py
@@ -67,8 +67,22 @@ class TestGrafanaAPI(unittest.TestCase):
headers=None,
json=None,
verify=False,
+ timeout=5.0,
)
+ def test_grafana_api_timeout(self):
+ cli = GrafanaFace(
+ ("admin", "admin"),
+ host="play.grafana.org",
+ url_path_prefix="",
+ protocol="https",
+ verify=False,
+ timeout=0.0001
+ )
+
+ with self.assertRaises(requests.exceptions.Timeout):
+ cli.folder.get_all_folders()
+
def test_grafana_api_basic_auth(self):
cli = GrafanaFace(
("admin", "admin"), host="localhost", url_path_prefix="", protocol="https",port="3000"
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 2
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"unittest-xml-reporting",
"pytest"
],
"pre_install": null,
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///croot/attrs_1668696182826/work
certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
-e git+https://github.com/m0nhawk/grafana_api.git@b5f1266273fb836580b03224456843b043089814#egg=grafana_api
idna==3.10
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1648562407465/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
lxml==5.3.1
packaging @ file:///croot/packaging_1671697413597/work
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pytest==7.1.2
PyYAML==6.0.1
requests==2.31.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
unittest-xml-reporting==3.2.0
urllib3==2.0.7
zipp @ file:///croot/zipp_1672387121353/work
| name: grafana_api
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib-metadata=4.11.3=py37h06a4308_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- charset-normalizer==3.4.1
- idna==3.10
- lxml==5.3.1
- pyyaml==6.0.1
- requests==2.31.0
- unittest-xml-reporting==3.2.0
- urllib3==2.0.7
prefix: /opt/conda/envs/grafana_api
| [
"test/test_grafana.py::TestGrafanaAPI::test_grafana_api_no_verify"
] | [
"test/test_grafana.py::TestGrafanaAPI::test_grafana_api_timeout"
] | [
"test/test_grafana.py::TestGrafanaAPI::test_grafana_api",
"test/test_grafana.py::TestGrafanaAPI::test_grafana_api_basic_auth",
"test/test_grafana.py::TestGrafanaAPI::test_grafana_api_token_auth"
] | [] | MIT License | 5,353 | 450 | [
"grafana_api/grafana_api.py",
"grafana_api/grafana_face.py"
] |
|
iterative__dvc-2481 | 9638b454d793ba3078843bef7f610c067bf6b5dd | 2019-09-09 16:44:58 | 70efa6c52acc087750ab80c2e3ff0e06a290b766 | diff --git a/dvc/scm/base.py b/dvc/scm/base.py
index a1b2bd4fa..d7ad9d2a5 100644
--- a/dvc/scm/base.py
+++ b/dvc/scm/base.py
@@ -23,11 +23,6 @@ class FileNotInCommitError(SCMError):
"""
-class FileNotInTargetSubdirError(SCMError):
- """Thrown when trying to place .gitignore for a file that not in
- the file subdirectory."""
-
-
class CloneError(SCMError):
def __init__(self, url, path, cause):
super(CloneError, self).__init__(
diff --git a/dvc/scm/git/__init__.py b/dvc/scm/git/__init__.py
index 6afe2973b..2b9a67938 100644
--- a/dvc/scm/git/__init__.py
+++ b/dvc/scm/git/__init__.py
@@ -11,7 +11,6 @@ from dvc.scm.base import (
Base,
SCMError,
FileNotInRepoError,
- FileNotInTargetSubdirError,
CloneError,
RevError,
)
@@ -104,22 +103,12 @@ class Git(Base):
def ignore_file(self):
return self.GITIGNORE
- def _get_gitignore(self, path, ignore_file_dir=None):
- if not ignore_file_dir:
- ignore_file_dir = os.path.dirname(os.path.realpath(path))
+ def _get_gitignore(self, path):
+ ignore_file_dir = os.path.dirname(path)
assert os.path.isabs(path)
assert os.path.isabs(ignore_file_dir)
- if not path.startswith(ignore_file_dir):
- msg = (
- "{} file has to be located in one of '{}' subdirectories"
- ", not outside '{}'"
- )
- raise FileNotInTargetSubdirError(
- msg.format(self.GITIGNORE, path, ignore_file_dir)
- )
-
entry = relpath(path, ignore_file_dir).replace(os.sep, "/")
# NOTE: using '/' prefix to make path unambiguous
if len(entry) > 0 and entry[0] != "/":
@@ -143,8 +132,7 @@ class Git(Base):
return False
def ignore(self, path):
- base_dir = os.path.dirname(path)
- entry, gitignore = self._get_gitignore(path, base_dir)
+ entry, gitignore = self._get_gitignore(path)
if self._ignored(entry, gitignore):
return
| move: does not work with symlinks
**Please provide information about your setup**
DVC version (i.e. `dvc --version`), platform and method of installation (pip, homebrew, pkg Mac, exe (Windows), DEB (Linux), RPM (Linux))
dvc version 0.57.0
Ubuntu 16.04 pip.
@Ruslan on Discord said that this link should be enough:
https://discordapp.com/channels/485586884165107732/485596304961962003/620590523144470557
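The patch above drops the `os.path.realpath` call so a symlink keeps its in-repo path when the `.gitignore` entry is computed; resolving the link first could move the path outside the directory that should hold the ignore file. A hedged sketch of the resulting logic:

```python
import os

def gitignore_entry(path):
    # Use dirname on the raw path instead of realpath: resolving a symlink
    # could relocate the path outside the directory owning the .gitignore.
    ignore_file_dir = os.path.dirname(path)
    entry = os.path.relpath(path, ignore_file_dir).replace(os.sep, '/')
    # A leading '/' anchors the entry to this directory in gitignore syntax.
    if entry and not entry.startswith('/'):
        entry = '/' + entry
    return entry, os.path.join(ignore_file_dir, '.gitignore')
```

This matches the new `test_get_gitignore_symlink` test: a link at `/repo/link` yields the entry `/link` in `/repo/.gitignore` rather than an entry for the resolved target.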
| iterative/dvc | diff --git a/tests/func/test_scm.py b/tests/func/test_scm.py
index ada5b4759..6c8d60276 100644
--- a/tests/func/test_scm.py
+++ b/tests/func/test_scm.py
@@ -2,12 +2,11 @@ from __future__ import unicode_literals
import os
-import pytest
from git import Repo
+from dvc.system import System
from dvc.utils.compat import str # noqa: F401
from dvc.scm import SCM, NoSCM, Git
-from dvc.scm.base import FileNotInTargetSubdirError
from tests.basic_env import TestDir, TestGit, TestGitSubmodule
from tests.utils import get_gitignore_content
@@ -97,6 +96,14 @@ class TestIgnore(object):
assert entry == "/dir"
assert gitignore == os.path.join(repo_dir._root_dir, Git.GITIGNORE)
+ def test_get_gitignore_symlink(self, git, repo_dir):
+ link = os.path.join(repo_dir.root_dir, "link")
+ target = os.path.join(repo_dir.root_dir, repo_dir.DATA_SUB)
+ System.symlink(target, link)
+ entry, gitignore = Git(repo_dir._root_dir)._get_gitignore(link)
+ assert entry == "/link"
+ assert gitignore == os.path.join(repo_dir.root_dir, Git.GITIGNORE)
+
def test_get_gitignore_subdir(self, git, repo_dir):
data_dir = os.path.join(
repo_dir._root_dir, os.path.join("dir1", "file1")
@@ -116,35 +123,6 @@ class TestIgnore(object):
repo_dir._root_dir, "dir1", Git.GITIGNORE
)
- def test_get_gitignore_ignorefile_dir(self, git, repo_dir):
- git = Git(repo_dir._root_dir)
-
- file_double_dir = os.path.join("dir1", "dir2", "file1")
- data_dir1 = os.path.join(repo_dir._root_dir, file_double_dir)
- dir1_real1 = os.path.realpath("dir1")
- entry, gitignore = git._get_gitignore(data_dir1, dir1_real1)
- assert entry == "/dir2/file1"
- gitignore1 = os.path.join(repo_dir._root_dir, "dir1", Git.GITIGNORE)
- assert gitignore == gitignore1
-
- triple_dir = os.path.join("dir1", "dir2", "dir3")
- data_dir2 = os.path.join(repo_dir._root_dir, triple_dir)
- dir1_real2 = os.path.realpath("dir1")
- entry, gitignore = git._get_gitignore(data_dir2, dir1_real2)
- assert entry == "/dir2/dir3"
- gitignore2 = os.path.join(repo_dir._root_dir, "dir1", Git.GITIGNORE)
- assert gitignore == gitignore2
-
- def test_get_gitignore_ignorefile_dir_upper_level(self, git, repo_dir):
- git = Git(repo_dir._root_dir)
-
- file_double_dir = os.path.join("dir1", "dir2", "file1")
- data_dir1 = os.path.join(repo_dir._root_dir, file_double_dir)
- ignore_file_dir = os.path.realpath(os.path.join("aa", "bb"))
-
- with pytest.raises(FileNotInTargetSubdirError):
- git._get_gitignore(data_dir1, ignore_file_dir)
-
def test_gitignore_should_end_with_newline(self, git, repo_dir):
git = Git(repo_dir._root_dir)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 1
},
"num_modified_files": 2
} | 0.59 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-timeout",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"flaky",
"mock",
"xmltodict",
"awscli",
"google-compute-engine",
"Pygments",
"collective.checkdocs",
"flake8",
"psutil",
"flake8-docstrings",
"pydocstyle",
"jaraco.windows",
"mock-ssh-server",
"moto",
"rangehttpserver"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aliyun-python-sdk-core==2.16.0
aliyun-python-sdk-core-v3==2.13.33
aliyun-python-sdk-kms==2.16.5
appdirs==1.4.4
asciimatics==1.14.0
attrs @ file:///croot/attrs_1668696182826/work
autocommand==2.2.2
awscli==1.31.13
azure-common==1.1.28
azure-storage-blob==2.1.0
azure-storage-common==2.1.0
bcrypt==4.2.1
boto==2.49.0
boto3==1.9.115
botocore==1.12.253
cachetools==5.5.2
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
charset-normalizer==3.4.1
collective.checkdocs==0.2
colorama==0.4.4
configobj==5.0.9
configparser==5.3.0
coverage==7.2.7
crcmod==1.7
cryptography==44.0.2
distro==1.9.0
docutils==0.15.2
-e git+https://github.com/iterative/dvc.git@9638b454d793ba3078843bef7f610c067bf6b5dd#egg=dvc
execnet==2.0.2
flake8==5.0.4
flake8-docstrings==1.7.0
flaky==3.8.1
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
funcy==2.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-api-core==1.34.1
google-auth==2.38.0
google-cloud-core==0.28.1
google-cloud-storage==1.13.0
google-compute-engine==2.8.13
google-resumable-media==0.3.2
googleapis-common-protos==1.69.2
grandalf==0.6
humanize==4.6.0
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.12.0
inflect==6.0.5
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
jaraco.classes==3.2.3
jaraco.collections==4.2.0
jaraco.context==4.3.0
jaraco.functools==3.7.0
jaraco.structures==2.1.0
jaraco.text==3.11.1
jaraco.ui==2.3.0
jaraco.windows==5.7.0
Jinja2==3.1.6
jmespath==0.10.0
jsonpath-ng==1.7.0
MarkupSafe==2.1.5
mccabe==0.7.0
mock==5.2.0
mock-ssh-server==0.9.1
more-itertools==9.1.0
moto==4.2.14
nanotime==0.5.2
networkx==2.6.3
numpy==1.21.6
oss2==2.6.1
packaging @ file:///croot/packaging_1671697413597/work
paramiko==3.5.1
path==16.6.0
pathspec==0.11.2
Pillow==9.5.0
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
ply==3.11
protobuf==3.20.3
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyarrow==0.14.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycodestyle==2.9.1
pycparser==2.21
pycryptodome==3.22.0
pydantic==1.10.21
pydocstyle==6.3.0
pyfiglet==0.8.post1
pyflakes==2.5.0
Pygments==2.17.2
PyNaCl==1.5.0
pyparsing==3.1.4
pytest==7.1.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-timeout==2.3.1
pytest-xdist==3.5.0
python-dateutil==2.9.0.post0
PyYAML==6.0.1
rangehttpserver==1.4.0
requests==2.31.0
responses==0.23.3
rsa==4.7.2
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.8
s3transfer==0.2.1
schema==0.7.7
shortuuid==1.0.13
six==1.17.0
smmap==5.0.2
snowballstemmer==2.2.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
tqdm==4.67.1
treelib==1.7.1
types-PyYAML==6.0.12.12
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
urllib3==1.25.11
wcwidth==0.2.13
Werkzeug==2.2.3
xmltodict==0.14.2
zc.lockfile==3.0.post1
zipp @ file:///croot/zipp_1672387121353/work
| name: dvc
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- aliyun-python-sdk-core==2.16.0
- aliyun-python-sdk-core-v3==2.13.33
- aliyun-python-sdk-kms==2.16.5
- appdirs==1.4.4
- asciimatics==1.14.0
- autocommand==2.2.2
- awscli==1.31.13
- azure-common==1.1.28
- azure-storage-blob==2.1.0
- azure-storage-common==2.1.0
- bcrypt==4.2.1
- boto==2.49.0
- boto3==1.9.115
- botocore==1.12.253
- cachetools==5.5.2
- cffi==1.15.1
- charset-normalizer==3.4.1
- collective-checkdocs==0.2
- colorama==0.4.4
- configobj==5.0.9
- configparser==5.3.0
- coverage==7.2.7
- crcmod==1.7
- cryptography==44.0.2
- distro==1.9.0
- docutils==0.15.2
- dvc==0.59.1
- execnet==2.0.2
- flake8==5.0.4
- flake8-docstrings==1.7.0
- flaky==3.8.1
- funcy==2.0
- future==1.0.0
- gitdb==4.0.12
- gitpython==3.1.44
- google-api-core==1.34.1
- google-auth==2.38.0
- google-cloud-core==0.28.1
- google-cloud-storage==1.13.0
- google-compute-engine==2.8.13
- google-resumable-media==0.3.2
- googleapis-common-protos==1.69.2
- grandalf==0.6
- humanize==4.6.0
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.12.0
- inflect==6.0.5
- jaraco-classes==3.2.3
- jaraco-collections==4.2.0
- jaraco-context==4.3.0
- jaraco-functools==3.7.0
- jaraco-structures==2.1.0
- jaraco-text==3.11.1
- jaraco-ui==2.3.0
- jaraco-windows==5.7.0
- jinja2==3.1.6
- jmespath==0.10.0
- jsonpath-ng==1.7.0
- markupsafe==2.1.5
- mccabe==0.7.0
- mock==5.2.0
- mock-ssh-server==0.9.1
- more-itertools==9.1.0
- moto==4.2.14
- nanotime==0.5.2
- networkx==2.6.3
- numpy==1.21.6
- oss2==2.6.1
- paramiko==3.5.1
- path==16.6.0
- pathspec==0.11.2
- pillow==9.5.0
- ply==3.11
- protobuf==3.20.3
- psutil==7.0.0
- pyarrow==0.14.0
- pyasn1==0.5.1
- pyasn1-modules==0.3.0
- pycodestyle==2.9.1
- pycparser==2.21
- pycryptodome==3.22.0
- pydantic==1.10.21
- pydocstyle==6.3.0
- pyfiglet==0.8.post1
- pyflakes==2.5.0
- pygments==2.17.2
- pynacl==1.5.0
- pyparsing==3.1.4
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-timeout==2.3.1
- pytest-xdist==3.5.0
- python-dateutil==2.9.0.post0
- pyyaml==6.0.1
- rangehttpserver==1.4.0
- requests==2.31.0
- responses==0.23.3
- rsa==4.7.2
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.8
- s3transfer==0.2.1
- schema==0.7.7
- shortuuid==1.0.13
- six==1.17.0
- smmap==5.0.2
- snowballstemmer==2.2.0
- tqdm==4.67.1
- treelib==1.7.1
- types-pyyaml==6.0.12.12
- urllib3==1.25.11
- wcwidth==0.2.13
- werkzeug==2.2.3
- xmltodict==0.14.2
- zc-lockfile==3.0.post1
prefix: /opt/conda/envs/dvc
| [
"tests/func/test_scm.py::TestIgnore::test_get_gitignore_symlink"
] | [] | [
"tests/func/test_scm.py::TestSCM::test_git",
"tests/func/test_scm.py::TestSCM::test_none",
"tests/func/test_scm.py::TestSCMGit::test_commit",
"tests/func/test_scm.py::TestSCMGit::test_is_repo",
"tests/func/test_scm.py::TestSCMGit::test_is_tracked",
"tests/func/test_scm.py::TestSCMGitSubmodule::test_commit_in_submodule",
"tests/func/test_scm.py::TestSCMGitSubmodule::test_git_submodule",
"tests/func/test_scm.py::TestSCMGitSubmodule::test_is_submodule",
"tests/func/test_scm.py::TestIgnore::test_ignore",
"tests/func/test_scm.py::TestIgnore::test_get_gitignore",
"tests/func/test_scm.py::TestIgnore::test_get_gitignore_subdir",
"tests/func/test_scm.py::TestIgnore::test_gitignore_should_end_with_newline",
"tests/func/test_scm.py::TestIgnore::test_gitignore_should_append_newline_to_gitignore"
] | [] | Apache License 2.0 | 5,361 | 596 | [
"dvc/scm/base.py",
"dvc/scm/git/__init__.py"
] |
|
pddg__uroboros-31 | 888c00e6f082510eb80de1ae4708cc2dd3d023a0 | 2019-09-10 04:49:13 | 888c00e6f082510eb80de1ae4708cc2dd3d023a0 | diff --git a/uroboros/command.py b/uroboros/command.py
index 963cba2..1f15f8b 100644
--- a/uroboros/command.py
+++ b/uroboros/command.py
@@ -221,6 +221,13 @@ class Command(metaclass=abc.ABCMeta):
self: commands_dict,
}
+ def get_options(self) -> 'List[Option]':
+ """
+ Get all `Option` instance of this `Command`.
+ :return: List of Option instance
+ """
+ return self.options
+
def print_help(self):
"""
Helper method for print the help message of this command.
@@ -233,10 +240,25 @@ class Command(metaclass=abc.ABCMeta):
sub_commands: 'List[Command]') -> 'argparse.Namespace':
return utils.call_one_by_one(
[self] + sub_commands,
- "before_validate",
- args
+ "_hook",
+ args,
+ hook_name="before_validate"
)
+ def _hook(self,
+ args: 'argparse.Namespace',
+ hook_name: str) -> 'argparse.Namespace':
+ for opt in self.get_options():
+ assert hasattr(opt, hook_name), \
+ "{} does not have '{}' method".format(
+ opt.__class__.__name__, hook_name)
+ args = getattr(opt, hook_name)(args)
+ assert hasattr(self, hook_name), \
+ "{} does not have '{}' method".format(
+ self.__class__.__name__, hook_name)
+ args = getattr(self, hook_name)(args)
+ return args
+
def before_validate(self,
unsafe_args: 'argparse.Namespace'
) -> 'argparse.Namespace':
@@ -276,8 +298,9 @@ class Command(metaclass=abc.ABCMeta):
) -> 'argparse.Namespace':
return utils.call_one_by_one(
[self] + sub_commands,
- "after_validate",
- args
+ "_hook",
+ args,
+ hook_name='after_validate'
)
def after_validate(self,
diff --git a/uroboros/option.py b/uroboros/option.py
index 1707b2a..bd30d45 100644
--- a/uroboros/option.py
+++ b/uroboros/option.py
@@ -19,6 +19,15 @@ class Option(metaclass=abc.ABCMeta):
-> 'argparse.ArgumentParser':
raise NotImplementedError
- @abc.abstractmethod
+ def before_validate(self,
+ unsafe_args: 'argparse.Namespace'
+ ) -> 'argparse.Namespace':
+ return unsafe_args
+
def validate(self, args: 'argparse.Namespace') -> 'List[Exception]':
- raise NotImplementedError
+ raise []
+
+ def after_validate(self,
+ safe_args: 'argparse.Namespace'
+ ) -> 'argparse.Namespace':
+ return safe_args
diff --git a/uroboros/utils.py b/uroboros/utils.py
index 04b4c6e..6a90229 100644
--- a/uroboros/utils.py
+++ b/uroboros/utils.py
@@ -10,12 +10,12 @@ def get_args_section_name(layer: int):
return "__layer{layer}_parser".format(layer=layer)
-def call_one_by_one(objs, method_name: str, args):
+def call_one_by_one(objs, method_name: str, args, **kwargs):
for obj in objs:
assert hasattr(obj, method_name), \
"'{cmd}' has no method '{method}".format(
cmd=obj.__name__,
method=method_name
)
- args = getattr(obj, method_name)(args)
+ args = getattr(obj, method_name)(args, **kwargs)
return args
| Implement hook function to `uroboros.Option`
I forgot to implement it... | pddg/uroboros | diff --git a/tests/test_option.py b/tests/test_option.py
new file mode 100644
index 0000000..8292b58
--- /dev/null
+++ b/tests/test_option.py
@@ -0,0 +1,90 @@
+import argparse
+
+import pytest
+
+import uroboros
+
+
+class SampleOption(uroboros.Option):
+
+ name = 'option'
+ value = 'option'
+
+ def build_option(self,
+ parser: 'argparse.ArgumentParser'
+ ) -> 'argparse.ArgumentParser':
+ parser.add_argument('--{}'.format(self.name),
+ default=self.value,
+ type=str)
+ return parser
+
+ def validate(self,
+ args: 'argparse.Namespace'
+ ) -> 'List[Exception]':
+ if getattr(args, self.name) != self.value:
+ return [Exception("{} is expected".format(self.value))]
+ return []
+
+ def before_validate(self,
+ unsafe_args: 'argparse.Namespace'
+ ) -> 'argparse.Namespace':
+ setattr(
+ unsafe_args,
+ 'before_validate_{}'.format(self.name),
+ self.value
+ )
+ return unsafe_args
+
+ def after_validate(self, safe_args):
+ setattr(
+ safe_args,
+ 'after_validate_{}'.format(self.name),
+ self.value
+ )
+ return safe_args
+
+
+class NoHookOption(uroboros.Option):
+ name = 'nohook'
+ value = 'nohook'
+
+ def build_option(self,
+ parser: 'argparse.ArgumentParser'
+ ) -> 'argparse.ArgumentParser':
+ parser.add_argument("--{}".format(self.name), default=self.value)
+ return parser
+
+
+class TestOption(object):
+
+ def test_no_before_validate(self):
+ args = argparse.Namespace()
+ nohook = NoHookOption()
+ assert nohook.before_validate(args) == args
+
+ def test_before_hook(self):
+ args = argparse.Namespace()
+ opt = SampleOption()
+ hooked_args = opt.after_validate(args)
+ actual = getattr(
+ hooked_args, "after_validate_{}".format(opt.name))
+ assert actual == opt.value
+
+ def test_no_after_validate(self):
+ args = argparse.Namespace()
+ nohook = NoHookOption()
+ assert nohook.before_validate(args) == args
+
+ def test_after_hook(self):
+ args = argparse.Namespace()
+ opt = SampleOption()
+ hooked_args = opt.after_validate(args)
+ actual = getattr(
+ hooked_args, "after_validate_{}".format(opt.name))
+ assert actual == opt.value
+
+ def test_cannot_instantiate(self):
+ class Opt(uroboros.Option):
+ pass
+ with pytest.raises(TypeError):
+ Opt()
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 3
},
"num_modified_files": 3
} | 0.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"flake8"
],
"pre_install": null,
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///croot/attrs_1668696182826/work
certifi @ file:///croot/certifi_1671487769961/work/certifi
flake8==5.0.4
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
mccabe==0.7.0
packaging @ file:///croot/packaging_1671697413597/work
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pyflakes==2.5.0
pytest==7.1.2
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
-e git+https://github.com/pddg/uroboros.git@888c00e6f082510eb80de1ae4708cc2dd3d023a0#egg=uroboros
zipp @ file:///croot/zipp_1672387121353/work
| name: uroboros
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- flake8==5.0.4
- importlib-metadata==4.2.0
- mccabe==0.7.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
prefix: /opt/conda/envs/uroboros
| [
"tests/test_option.py::TestOption::test_no_before_validate",
"tests/test_option.py::TestOption::test_no_after_validate"
] | [] | [
"tests/test_option.py::TestOption::test_before_hook",
"tests/test_option.py::TestOption::test_after_hook",
"tests/test_option.py::TestOption::test_cannot_instantiate"
] | [] | Apache License 2.0 | 5,365 | 899 | [
"uroboros/command.py",
"uroboros/option.py",
"uroboros/utils.py"
] |
|
pytorch__ignite-621 | 10f18a224925caba95dbc119b38cb252a5a3481d | 2019-09-10 10:06:43 | 8c8c3c2e9a007673dca1db68e27f0bdff2edf643 | vfdev-5: @miguelvr thanks for the review ! | diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py
index b2d41e6b..9e18f420 100644
--- a/ignite/contrib/handlers/tqdm_logger.py
+++ b/ignite/contrib/handlers/tqdm_logger.py
@@ -1,5 +1,4 @@
# -*- coding: utf-8 -*-
-import numbers
import warnings
import torch
@@ -53,7 +52,7 @@ class ProgressBar(BaseLogger):
pbar.attach(trainer, ['loss'])
# Progress bar will looks like
- # Epoch [2/50]: [64/128] 50%|█████ , loss=12.34e-02 [06:17<12:34]
+ # Epoch [2/50]: [64/128] 50%|█████ , loss=0.123 [06:17<12:34]
Directly attach the engine's output
@@ -65,7 +64,7 @@ class ProgressBar(BaseLogger):
pbar.attach(trainer, output_transform=lambda x: {'loss': x})
# Progress bar will looks like
- # Epoch [2/50]: [64/128] 50%|█████ , loss=12.34e-02 [06:17<12:34]
+ # Epoch [2/50]: [64/128] 50%|█████ , loss=0.123 [06:17<12:34]
Note:
When adding attaching the progress bar to an engine, it is recommend that you replace
@@ -91,6 +90,7 @@ class ProgressBar(BaseLogger):
def __init__(self, persist=False,
bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]',
+
**tqdm_kwargs):
try:
@@ -226,18 +226,18 @@ class _OutputHandler(BaseOutputHandler):
rendered_metrics = {}
for key, value in metrics.items():
- if isinstance(value, numbers.Number) or \
- isinstance(value, torch.Tensor) and value.ndimension() == 0:
- rendered_metrics[key] = "{:.2e}".format(value)
- elif isinstance(value, torch.Tensor) and value.ndimension() == 1:
- for i, v in enumerate(value):
- k = "{}_{}".format(key, i)
- rendered_metrics[k] = "{:.2e}".format(v)
- elif isinstance(value, str):
- rendered_metrics[key] = value
+ if isinstance(value, torch.Tensor):
+ if value.ndimension() == 0:
+ rendered_metrics[key] = value.item()
+ elif value.ndimension() == 1:
+ for i, v in enumerate(value):
+ k = "{}_{}".format(key, i)
+ rendered_metrics[k] = v.item()
+ else:
+ warnings.warn("ProgressBar can not log "
+ "tensor with {} dimensions".format(value.ndimension()))
else:
- warnings.warn("ProgressBar can not log "
- "metrics value type {}".format(type(value)))
+ rendered_metrics[key] = value
if rendered_metrics:
logger.pbar.set_postfix(**rendered_metrics)
| Improve TQDM, progress bar value formatting
Idea is to use tqdm's value renderer instead of using current implementation. Newer version will looks like:
```python
import tqdm
with tqdm.tqdm() as pbar:
pbar.set_postfix({'a': 123, 'b': 12.3456788, 'd': 0.12345, 'c': torch.tensor(123).item(), 'e': 'text', 'f': lambda x: x})
```
out:
```
0it [00:00, ?it/s, a=123, b=12.3, d=0.123, c=123, e=text, f=<function <lambda> at 0x1234>]
```
This will help to better display integer values as `123` instead of `1.23e+02`.
cc @miguelvr | pytorch/ignite | diff --git a/tests/ignite/contrib/handlers/test_tqdm_logger.py b/tests/ignite/contrib/handlers/test_tqdm_logger.py
index f5d8b0fa..b6fd4f1d 100644
--- a/tests/ignite/contrib/handlers/test_tqdm_logger.py
+++ b/tests/ignite/contrib/handlers/test_tqdm_logger.py
@@ -31,7 +31,7 @@ def test_pbar(capsys):
err = captured.err.split('\r')
err = list(map(lambda x: x.strip(), err))
err = list(filter(None, err))
- expected = u'Epoch [2/2]: [1/2] 50%|█████ , a=1.00e+00 [00:00<00:00]'
+ expected = u'Epoch [2/2]: [1/2] 50%|█████ , a=1 [00:00<00:00]'
assert err[-1] == expected
@@ -79,7 +79,7 @@ def test_pbar_with_metric(capsys):
err = list(map(lambda x: x.strip(), err))
err = list(filter(None, err))
actual = err[-1]
- expected = u'Epoch: [1/2] 50%|█████ , batchloss=5.00e-01 [00:00<00:00]'
+ expected = u'Epoch: [1/2] 50%|█████ , batchloss=0.5 [00:00<00:00]'
assert actual == expected
@@ -110,7 +110,7 @@ def test_pbar_with_all_metric(capsys):
err = list(map(lambda x: x.strip(), err))
err = list(filter(None, err))
actual = err[-1]
- expected = u'Epoch: [1/2] 50%|█████ , another batchloss=1.50e+00, batchloss=5.00e-01 [00:00<00:00]'
+ expected = u'Epoch: [1/2] 50%|█████ , another batchloss=1.5, batchloss=0.5 [00:00<00:00]'
assert actual == expected
@@ -148,7 +148,7 @@ def test_pbar_with_output(capsys):
err = captured.err.split('\r')
err = list(map(lambda x: x.strip(), err))
err = list(filter(None, err))
- expected = u'Epoch [2/2]: [1/2] 50%|█████ , a=1.00e+00 [00:00<00:00]'
+ expected = u'Epoch [2/2]: [1/2] 50%|█████ , a=1 [00:00<00:00]'
assert err[-1] == expected
@@ -174,7 +174,7 @@ def test_pbar_with_scalar_output(capsys):
err = captured.err.split('\r')
err = list(map(lambda x: x.strip(), err))
err = list(filter(None, err))
- expected = u'Epoch [2/2]: [1/2] 50%|█████ , output=1.00e+00 [00:00<00:00]'
+ expected = u'Epoch [2/2]: [1/2] 50%|█████ , output=1 [00:00<00:00]'
assert err[-1] == expected
@@ -209,7 +209,7 @@ def test_pbar_with_tqdm_kwargs(capsys):
err = captured.err.split('\r')
err = list(map(lambda x: x.strip(), err))
err = list(filter(None, err))
- expected = u'My description: [10/10]: [4/5] 80%|████████ , output=1.00e+00 [00:00<00:00]'.format()
+ expected = u'My description: [10/10]: [4/5] 80%|████████ , output=1 [00:00<00:00]'.format()
assert err[-1] == expected
@@ -230,30 +230,36 @@ def test_pbar_for_validation(capsys):
def test_pbar_output_tensor(capsys):
- loader = [1, 2, 3, 4, 5]
- def update_fn(engine, batch):
- return torch.Tensor([batch, 0])
+ def _test(out_tensor, out_msg):
+ loader = [1, 2, 3, 4, 5]
- engine = Engine(update_fn)
+ def update_fn(engine, batch):
+ return out_tensor
- pbar = ProgressBar(desc="Output tensor")
- pbar.attach(engine, output_transform=lambda x: x)
- engine.run(loader, max_epochs=1)
+ engine = Engine(update_fn)
- captured = capsys.readouterr()
- err = captured.err.split('\r')
- err = list(map(lambda x: x.strip(), err))
- err = list(filter(None, err))
- expected = u'Output tensor: [4/5] 80%|████████ , output_0=5.00e+00, output_1=0.00e+00 [00:00<00:00]'
- assert err[-1] == expected
+ pbar = ProgressBar(desc="Output tensor")
+ pbar.attach(engine, output_transform=lambda x: x)
+ engine.run(loader, max_epochs=1)
+
+ captured = capsys.readouterr()
+ err = captured.err.split('\r')
+ err = list(map(lambda x: x.strip(), err))
+ err = list(filter(None, err))
+ expected = u'Output tensor: [4/5] 80%|████████ , {} [00:00<00:00]'.format(out_msg)
+ assert err[-1] == expected
+
+ _test(out_tensor=torch.tensor([5, 0]), out_msg="output_0=5, output_1=0")
+ _test(out_tensor=torch.tensor(123), out_msg="output=123")
+ _test(out_tensor=torch.tensor(1.234), out_msg="output=1.23")
def test_pbar_output_warning(capsys):
loader = [1, 2, 3, 4, 5]
def update_fn(engine, batch):
- return batch, "abc"
+ return torch.zeros(1, 2, 3, 4)
engine = Engine(update_fn)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 1
} | 0.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"numpy",
"mock",
"pytest",
"codecov",
"pytest-cov",
"matplotlib",
"pandas",
"gym",
"tqdm",
"scikit-learn",
"tensorboardX",
"visdom",
"polyaxon-client",
"mlflow"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alembic==1.7.7
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
charset-normalizer==2.0.12
click==7.1.2
cloudpickle==2.2.1
codecov==2.1.13
coverage==6.2
cycler==0.11.0
databricks-cli==0.17.8
dataclasses==0.8
decorator==4.4.2
docker==5.0.3
entrypoints==0.4
Flask==1.1.4
gitdb==4.0.9
GitPython==3.1.18
greenlet==2.0.2
gunicorn==21.2.0
gym==0.26.2
gym-notices==0.0.8
hestia==0.6.0
idna==3.10
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
importlib-resources==5.4.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
itsdangerous==1.1.0
Jinja2==2.10.3
joblib==1.1.1
jsonpatch==1.32
jsonpointer==2.3
kiwisolver==1.3.1
Mako==1.1.6
MarkupSafe==2.0.1
marshmallow==3.0.0rc5
matplotlib==3.3.4
mlflow==1.23.1
mock==5.2.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
networkx==2.5.1
numpy==1.19.5
oauthlib==3.2.2
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pandas==1.1.5
Pillow==8.4.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
polyaxon-client==0.6.1
polyaxon-schemas==0.6.1
polystores==0.2.5
prometheus-client==0.17.1
prometheus_flask_exporter==0.23.2
protobuf==4.21.0
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
PyJWT==2.4.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-cov==4.0.0
python-dateutil==2.9.0.post0
-e git+https://github.com/pytorch/ignite.git@10f18a224925caba95dbc119b38cb252a5a3481d#egg=pytorch_ignite
pytz==2025.2
PyYAML==6.0.1
querystring-parser==1.2.4
requests==2.27.1
requests-toolbelt==1.0.0
rhea==0.5.5
scikit-learn==0.24.2
scipy==1.5.4
six==1.17.0
smmap==5.0.0
SQLAlchemy==1.4.54
sqlparse==0.4.4
tabulate==0.8.10
tensorboardX==2.6.2.2
threadpoolctl==3.1.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==1.2.3
torch==1.10.2
tornado==6.1
tqdm==4.64.1
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
visdom==0.2.4
websocket-client==1.3.1
Werkzeug==1.0.1
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: ignite
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- alembic==1.7.7
- charset-normalizer==2.0.12
- click==7.1.2
- cloudpickle==2.2.1
- codecov==2.1.13
- coverage==6.2
- cycler==0.11.0
- databricks-cli==0.17.8
- dataclasses==0.8
- decorator==4.4.2
- docker==5.0.3
- entrypoints==0.4
- flask==1.1.4
- gitdb==4.0.9
- gitpython==3.1.18
- greenlet==2.0.2
- gunicorn==21.2.0
- gym==0.26.2
- gym-notices==0.0.8
- hestia==0.6.0
- idna==3.10
- importlib-resources==5.4.0
- itsdangerous==1.1.0
- jinja2==2.10.3
- joblib==1.1.1
- jsonpatch==1.32
- jsonpointer==2.3
- kiwisolver==1.3.1
- mako==1.1.6
- markupsafe==2.0.1
- marshmallow==3.0.0rc5
- matplotlib==3.3.4
- mlflow==1.23.1
- mock==5.2.0
- networkx==2.5.1
- numpy==1.19.5
- oauthlib==3.2.2
- pandas==1.1.5
- pillow==8.4.0
- polyaxon-client==0.6.1
- polyaxon-schemas==0.6.1
- polystores==0.2.5
- prometheus-client==0.17.1
- prometheus-flask-exporter==0.23.2
- protobuf==4.21.0
- psutil==7.0.0
- pyjwt==2.4.0
- pytest-cov==4.0.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyyaml==6.0.1
- querystring-parser==1.2.4
- requests==2.27.1
- requests-toolbelt==1.0.0
- rhea==0.5.5
- scikit-learn==0.24.2
- scipy==1.5.4
- six==1.17.0
- smmap==5.0.0
- sqlalchemy==1.4.54
- sqlparse==0.4.4
- tabulate==0.8.10
- tensorboardx==2.6.2.2
- threadpoolctl==3.1.0
- tomli==1.2.3
- torch==1.10.2
- tornado==6.1
- tqdm==4.64.1
- urllib3==1.26.20
- visdom==0.2.4
- websocket-client==1.3.1
- werkzeug==1.0.1
prefix: /opt/conda/envs/ignite
| [
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_with_metric",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_with_all_metric",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_with_output",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_with_scalar_output",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_with_tqdm_kwargs",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_output_tensor"
] | [] | [
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_log_message",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_attach_fail_with_string",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_no_metric_names",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_fail_with_non_callable_transform",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_with_str_output",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_for_validation",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_output_warning",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_on_epochs",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_wrong_events_order",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_on_custom_events",
"tests/ignite/contrib/handlers/test_tqdm_logger.py::test_pbar_with_nan_input"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,366 | 803 | [
"ignite/contrib/handlers/tqdm_logger.py"
] |
geopandas__geopandas-1124 | b71ab0bf3a0a88151a827189dba5c97284460b5d | 2019-09-10 16:06:45 | 6be674d841f246471248c12ee29cb72f9504e3db | ian-r-rose: Hrm, looks like the VSI check logic is not stable in all of the versions of fiona that are tested here. Any suggestions on how to proceed?
jorisvandenbossche: > Hrm, looks like the VSI check logic is not stable in all of the versions of fiona that are tested here. Any suggestions on how to proceed?
If it doesn't break other functionality (so it the older version of fiona only prevents this issue being fixed), it is fine to skip the tests for older versions of fiona. Then then new functionality is only available with a recent fiona.
codecov[bot]: # [Codecov](https://codecov.io/gh/geopandas/geopandas/pull/1124?src=pr&el=h1) Report
> Merging [#1124](https://codecov.io/gh/geopandas/geopandas/pull/1124?src=pr&el=desc) into [master](https://codecov.io/gh/geopandas/geopandas/commit/29add0a735b00dc20c79e0fccc8e6a775c4997b0?src=pr&el=desc) will **decrease** coverage by `0.79%`.
> The diff coverage is `71.42%`.
[](https://codecov.io/gh/geopandas/geopandas/pull/1124?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1124 +/- ##
=========================================
- Coverage 89.72% 88.92% -0.8%
=========================================
Files 20 20
Lines 1936 1942 +6
=========================================
- Hits 1737 1727 -10
- Misses 199 215 +16
```
| [Impacted Files](https://codecov.io/gh/geopandas/geopandas/pull/1124?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [geopandas/io/file.py](https://codecov.io/gh/geopandas/geopandas/pull/1124/diff?src=pr&el=tree#diff-Z2VvcGFuZGFzL2lvL2ZpbGUucHk=) | `83.16% <71.42%> (-11.57%)` | :arrow_down: |
| [geopandas/tools/\_show\_versions.py](https://codecov.io/gh/geopandas/geopandas/pull/1124/diff?src=pr&el=tree#diff-Z2VvcGFuZGFzL3Rvb2xzL19zaG93X3ZlcnNpb25zLnB5) | `85.71% <0%> (-5.2%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/geopandas/geopandas/pull/1124?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/geopandas/geopandas/pull/1124?src=pr&el=footer). Last update [29add0a...f84adc2](https://codecov.io/gh/geopandas/geopandas/pull/1124?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/geopandas/io/file.py b/geopandas/io/file.py
index 3c95102e..50f7f50c 100644
--- a/geopandas/io/file.py
+++ b/geopandas/io/file.py
@@ -1,5 +1,4 @@
from distutils.version import LooseVersion
-import os
import numpy as np
import six
@@ -120,10 +119,10 @@ def to_file(df, filename, driver="ESRI Shapefile", schema=None, **kwargs):
The *kwargs* are passed to fiona.open and can be used to write
to multi-layer data, store data within archives (zip files), etc.
+ The path may specify a fiona VSI scheme.
"""
if schema is None:
schema = infer_schema(df)
- filename = os.path.abspath(os.path.expanduser(filename))
with fiona_env():
with fiona.open(
filename, "w", driver=driver, crs=df.crs, schema=schema, **kwargs
diff --git a/geopandas/plotting.py b/geopandas/plotting.py
index 1d7b194f..476085cf 100644
--- a/geopandas/plotting.py
+++ b/geopandas/plotting.py
@@ -115,7 +115,8 @@ def plot_polygon_collection(
if values is not None:
collection.set_array(np.asarray(values))
collection.set_cmap(cmap)
- collection.set_clim(vmin, vmax)
+ if "norm" not in kwargs:
+ collection.set_clim(vmin, vmax)
ax.add_collection(collection, autolim=True)
ax.autoscale_view()
@@ -173,7 +174,8 @@ def plot_linestring_collection(
if values is not None:
collection.set_array(np.asarray(values))
collection.set_cmap(cmap)
- collection.set_clim(vmin, vmax)
+ if "norm" not in kwargs:
+ collection.set_clim(vmin, vmax)
ax.add_collection(collection, autolim=True)
ax.autoscale_view()
@@ -233,9 +235,12 @@ def plot_point_collection(
if pd.api.types.is_list_like(color):
color = np.take(color, multiindex)
- collection = ax.scatter(
- x, y, color=color, vmin=vmin, vmax=vmax, cmap=cmap, marker=marker, **kwargs
- )
+ if "norm" not in kwargs:
+ collection = ax.scatter(
+ x, y, color=color, vmin=vmin, vmax=vmax, cmap=cmap, marker=marker, **kwargs
+ )
+ else:
+ collection = ax.scatter(x, y, color=color, cmap=cmap, marker=marker, **kwargs)
return collection
| Writing to file doesn't respect fiona vsi schemes
Fiona supports writing to different OGR virtual filesystem schemes via the file path, e.g., `zip://`, `s3://`, etc. However, these schemes get overwritten in `gpd.to_file()`, which attempts to take the path and construct an absolute path:
https://github.com/geopandas/geopandas/blob/29add0a735b00dc20c79e0fccc8e6a775c4997b0/geopandas/io/file.py#L127
so you get bad paths like `/home/username/s3:/bucket/object`.
Is this absolute file path important? Is there a reason to not allow VSI-based paths in `gpd.to_file()`? Can we check for a url-like scheme before trying to abs-path it? If so, I'd be happy to submit a PR.
(N.b., the corresponding `gpd.read_file()` handles these URLs correctly). | geopandas/geopandas | diff --git a/geopandas/tests/test_plotting.py b/geopandas/tests/test_plotting.py
index 223a486b..ecb2d810 100644
--- a/geopandas/tests/test_plotting.py
+++ b/geopandas/tests/test_plotting.py
@@ -176,6 +176,19 @@ class TestPointPlotting:
# last point == top of colorbar
np.testing.assert_array_equal(point_colors[-1], cbar_colors[-1])
+ def test_subplots_norm(self):
+ # colors of subplots are the same as for plot (norm is applied)
+ cmap = matplotlib.cm.viridis_r
+ norm = matplotlib.colors.Normalize(vmin=0, vmax=20)
+ ax = self.df.plot(column="values", cmap=cmap, norm=norm)
+ actual_colors_orig = ax.collections[0].get_facecolors()
+ exp_colors = cmap(np.arange(10) / (20))
+ np.testing.assert_array_equal(exp_colors, actual_colors_orig)
+ fig, ax = plt.subplots()
+ self.df[1:].plot(column="values", ax=ax, norm=norm, cmap=cmap)
+ actual_colors_sub = ax.collections[0].get_facecolors()
+ np.testing.assert_array_equal(actual_colors_orig[1], actual_colors_sub[0])
+
def test_empty_plot(self):
s = GeoSeries([])
with pytest.warns(UserWarning):
@@ -264,6 +277,19 @@ class TestLineStringPlotting:
assert ls[0] == exp_ls[0]
assert ls[1] == exp_ls[1]
+ def test_subplots_norm(self):
+ # colors of subplots are the same as for plot (norm is applied)
+ cmap = matplotlib.cm.viridis_r
+ norm = matplotlib.colors.Normalize(vmin=0, vmax=20)
+ ax = self.df.plot(column="values", cmap=cmap, norm=norm)
+ actual_colors_orig = ax.collections[0].get_edgecolors()
+ exp_colors = cmap(np.arange(10) / (20))
+ np.testing.assert_array_equal(exp_colors, actual_colors_orig)
+ fig, ax = plt.subplots()
+ self.df[1:].plot(column="values", ax=ax, norm=norm, cmap=cmap)
+ actual_colors_sub = ax.collections[0].get_edgecolors()
+ np.testing.assert_array_equal(actual_colors_orig[1], actual_colors_sub[0])
+
def test_multilinestrings(self):
# MultiLineStrings
@@ -411,6 +437,19 @@ class TestPolygonPlotting:
# colors are repeated for all components within a MultiPolygon
_check_colors(4, ax.collections[0].get_facecolors(), ["r", "r", "b", "b"])
+ def test_subplots_norm(self):
+ # colors of subplots are the same as for plot (norm is applied)
+ cmap = matplotlib.cm.viridis_r
+ norm = matplotlib.colors.Normalize(vmin=0, vmax=10)
+ ax = self.df.plot(column="values", cmap=cmap, norm=norm)
+ actual_colors_orig = ax.collections[0].get_facecolors()
+ exp_colors = cmap(np.arange(2) / (10))
+ np.testing.assert_array_equal(exp_colors, actual_colors_orig)
+ fig, ax = plt.subplots()
+ self.df[1:].plot(column="values", ax=ax, norm=norm, cmap=cmap)
+ actual_colors_sub = ax.collections[0].get_facecolors()
+ np.testing.assert_array_equal(actual_colors_orig[1], actual_colors_sub[0])
+
class TestPolygonZPlotting:
def setup_method(self):
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 2
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"matplotlib",
"descartes",
"mapclassify",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==25.3.0
certifi==2025.1.31
click==8.1.8
click-plugins==1.1.1
cligj==0.7.2
contourpy==1.3.0
cycler==0.12.1
Cython==3.0.12
descartes==1.1.0
exceptiongroup==1.2.2
fiona==1.10.1
fonttools==4.56.0
-e git+https://github.com/geopandas/geopandas.git@b71ab0bf3a0a88151a827189dba5c97284460b5d#egg=geopandas
importlib_metadata==8.6.1
importlib_resources==6.5.2
iniconfig==2.1.0
joblib==1.4.2
kiwisolver==1.4.7
mapclassify==2.8.1
matplotlib==3.9.4
networkx==3.2.1
numpy==2.0.2
packaging==24.2
pandas==2.2.3
pillow==11.1.0
pluggy==1.5.0
pyparsing==3.2.3
pyproj==3.6.1
pytest==8.3.5
python-dateutil==2.9.0.post0
pytz==2025.2
scikit-learn==1.6.1
scipy==1.13.1
shapely==2.0.7
six==1.17.0
threadpoolctl==3.6.0
tomli==2.2.1
tzdata==2025.2
zipp==3.21.0
| name: geopandas
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==25.3.0
- certifi==2025.1.31
- click==8.1.8
- click-plugins==1.1.1
- cligj==0.7.2
- contourpy==1.3.0
- cycler==0.12.1
- cython==3.0.12
- descartes==1.1.0
- exceptiongroup==1.2.2
- fiona==1.10.1
- fonttools==4.56.0
- importlib-metadata==8.6.1
- importlib-resources==6.5.2
- iniconfig==2.1.0
- joblib==1.4.2
- kiwisolver==1.4.7
- mapclassify==2.8.1
- matplotlib==3.9.4
- networkx==3.2.1
- numpy==2.0.2
- packaging==24.2
- pandas==2.2.3
- pillow==11.1.0
- pluggy==1.5.0
- pyparsing==3.2.3
- pyproj==3.6.1
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- pytz==2025.2
- scikit-learn==1.6.1
- scipy==1.13.1
- shapely==2.0.7
- six==1.17.0
- threadpoolctl==3.6.0
- tomli==2.2.1
- tzdata==2025.2
- zipp==3.21.0
prefix: /opt/conda/envs/geopandas
| [
"geopandas/tests/test_plotting.py::TestPointPlotting::test_subplots_norm"
] | [
"geopandas/tests/test_plotting.py::TestPointPlotting::test_legend",
"geopandas/tests/test_plotting.py::TestPointPlotting::test_multipoints",
"geopandas/tests/test_plotting.py::TestLineStringPlotting::test_single_color",
"geopandas/tests/test_plotting.py::TestLineStringPlotting::test_style_kwargs",
"geopandas/tests/test_plotting.py::TestLineStringPlotting::test_subplots_norm",
"geopandas/tests/test_plotting.py::TestLineStringPlotting::test_multilinestrings",
"geopandas/tests/test_plotting.py::TestPolygonZPlotting::test_plot",
"geopandas/tests/test_plotting.py::TestNonuniformGeometryPlotting::test_colors",
"geopandas/tests/test_plotting.py::TestNonuniformGeometryPlotting::test_style_kwargs",
"geopandas/tests/test_plotting.py::TestMapclassifyPlotting::test_legend",
"geopandas/tests/test_plotting.py::TestMapclassifyPlotting::test_negative_legend",
"geopandas/tests/test_plotting.py::TestMapclassifyPlotting::test_scheme_name_compat[FISHER_JENKS]",
"geopandas/tests/test_plotting.py::TestMapclassifyPlotting::test_scheme_name_compat[FISHERJENKS]",
"geopandas/tests/test_plotting.py::TestMapclassifyPlotting::test_classification_kwds",
"geopandas/tests/test_plotting.py::TestMapclassifyPlotting::test_cax_legend_height",
"geopandas/tests/test_plotting.py::TestPlotCollections::test_linestrings",
"geopandas/tests/test_plotting.py::TestPlotCollections::test_linestrings_values",
"geopandas/tests/test_plotting.py::TestPlotCollections::test_polygons",
"geopandas/tests/test_plotting.py::TestPlotCollections::test_polygons_values",
"geopandas/tests/test_plotting.py::test_column_values"
] | [
"geopandas/tests/test_plotting.py::TestPointPlotting::test_figsize",
"geopandas/tests/test_plotting.py::TestPointPlotting::test_default_colors",
"geopandas/tests/test_plotting.py::TestPointPlotting::test_colormap",
"geopandas/tests/test_plotting.py::TestPointPlotting::test_single_color",
"geopandas/tests/test_plotting.py::TestPointPlotting::test_markersize",
"geopandas/tests/test_plotting.py::TestPointPlotting::test_style_kwargs",
"geopandas/tests/test_plotting.py::TestPointPlotting::test_empty_plot",
"geopandas/tests/test_plotting.py::TestPointZPlotting::test_plot",
"geopandas/tests/test_plotting.py::TestMapclassifyPlotting::test_invalid_scheme",
"geopandas/tests/test_plotting.py::TestMapclassifyPlotting::test_cax_legend_passing",
"geopandas/tests/test_plotting.py::TestPlotCollections::test_points",
"geopandas/tests/test_plotting.py::TestPlotCollections::test_points_values"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,370 | 652 | [
"geopandas/io/file.py",
"geopandas/plotting.py"
] |
XKNX__xknx-234 | 4b71c28db4cde694695da3dfad0072dd5383717f | 2019-09-10 16:14:51 | 58aca2024624d92115594ea1751c93cf30c0296b | coveralls:
[](https://coveralls.io/builds/25633672)
Coverage decreased (-0.02%) to 91.82% when pulling **46db4d6f0f437ccf24cc15778b91e6c634d6f180 on elupus:fix_exceptions** into **4b71c28db4cde694695da3dfad0072dd5383717f on XKNX:master**.
| diff --git a/examples/example_light_rgbw.py b/examples/example_light_rgbw.py
new file mode 100644
index 00000000..dd060c98
--- /dev/null
+++ b/examples/example_light_rgbw.py
@@ -0,0 +1,49 @@
+"""Example for setting different colors on a RGBW light."""
+import asyncio
+
+from xknx import XKNX
+from xknx.devices import RemoteValueColorRGBW
+
+
+async def main():
+ """Connect to KNX/IP bus and set different colors."""
+ xknx = XKNX()
+ await xknx.start()
+
+ rgbw = RemoteValueColorRGBW(xknx,
+ group_address='1/1/40',
+ group_address_state='1/1/41',
+ device_name="RGBWLight")
+
+ await rgbw.set([255, 255, 255, 0, 15]) # cold-white
+ await asyncio.sleep(1)
+ await rgbw.set([0, 0, 0, 255, 15]) # warm-white
+ await asyncio.sleep(1)
+ await rgbw.set([0, 0, 0, 0, 15]) # off
+ await asyncio.sleep(1)
+
+ await rgbw.set([255, 0, 0, 0]) # red
+ await asyncio.sleep(1)
+ await rgbw.set([0, 255, 0, 0]) # green
+ await asyncio.sleep(1)
+ await rgbw.set([0, 0, 255, 0]) # blue
+ await asyncio.sleep(1)
+ await rgbw.set([0, 0, 0, 0, 15]) # off
+ await asyncio.sleep(1)
+
+ await rgbw.set([255, 255, 0, 0, 15])
+ await asyncio.sleep(1)
+ await rgbw.set([0, 255, 255, 0, 15])
+ await asyncio.sleep(1)
+ await rgbw.set([255, 0, 255, 0, 15])
+ await asyncio.sleep(1)
+ await rgbw.set([0, 0, 0, 0, 15]) # off
+ await asyncio.sleep(1)
+
+ await xknx.stop()
+
+
+# pylint: disable=invalid-name
+loop = asyncio.get_event_loop()
+loop.run_until_complete(main())
+loop.close()
diff --git a/xknx/devices/remote_value_color_rgbw.py b/xknx/devices/remote_value_color_rgbw.py
index 6c2e436e..beea0ca9 100644
--- a/xknx/devices/remote_value_color_rgbw.py
+++ b/xknx/devices/remote_value_color_rgbw.py
@@ -35,19 +35,23 @@ class RemoteValueColorRGBW(RemoteValue):
Convert value (4-6 bytes) to payload (6 bytes).
* Structure of DPT 251.600
- ** Bytes 0, 1:
- *** Bit 0-11: 0
- *** Bit 12,13,14,15: R,G,B,W value valid?
- ** Byte 2: R value
- ** Byte 3: G value
- ** Byte 4: B value
- ** Byte 5: W value
+ ** Byte 0: R value
+ ** Byte 1: G value
+ ** Byte 2: B value
+ ** Byte 3: W value
+ ** Byte 4: 0x00 (reserved)
+ ** Byte 5:
+ *** Bit 0: W value valid?
+ *** Bit 1: B value valid?
+ *** Bit 2: G value valid?
+ *** Bit 3: R value valid?
+ *** Bit 4-7: 0
In case we receive
* > 6 bytes: error
* 6 bytes: all bytes are passed through
- * 5 bytes: 0x00 left padding
- * 4 bytes: 0x000f left padding
+ * 5 bytes: 0x00?? fill up to 6 bytes
+ * 4 bytes: 0x000f right padding to 6 bytes
* < 4 bytes: error
"""
if not isinstance(value, (list, tuple)):
@@ -56,12 +60,16 @@ class RemoteValueColorRGBW(RemoteValue):
if len(value) < 4 or len(value) > 6:
raise ConversionError("Cannot serialize value to DPT 251.600 (wrong length, expecting list of 4-6 bytes)",
value=value, type=type(value))
- rgbw = value[len(value)-4:]
+ rgbw = value[:4]
if any(not isinstance(color, int) for color in rgbw) \
or any(color < 0 for color in rgbw) \
or any(color > 255 for color in rgbw):
raise ConversionError("Cannot serialize DPT 251.600 (wrong RGBW values)", value=value)
- return DPTArray([0x00, 0x0f][:6-len(value)] + list(value))
+ if len(value) < 5:
+ return DPTArray(list(rgbw) + [0x00, 0x0f])
+ if len(value) < 6:
+ return DPTArray(list(rgbw) + [0x00] + list(value[4:]))
+ return DPTArray(value)
def from_knx(self, payload):
"""
@@ -72,7 +80,7 @@ class RemoteValueColorRGBW(RemoteValue):
"""
result = []
for i in range(0, len(payload.value) - 2):
- valid = payload.value[1] & (0x08 >> i) != 0
- result.append(payload.value[2 + i] if valid else self.previous_value[i])
+ valid = (payload.value[5] & (0x08 >> i)) != 0 # R,G,B,W value valid?
+ result.append(payload.value[i] if valid else self.previous_value[i])
self.previous_value = result
return result
diff --git a/xknx/exceptions/exception.py b/xknx/exceptions/exception.py
index 879e14b2..626299bd 100644
--- a/xknx/exceptions/exception.py
+++ b/xknx/exceptions/exception.py
@@ -6,7 +6,15 @@ class XKNXException(Exception):
def __eq__(self, other):
"""Equal operator."""
- return self.__dict__ == other.__dict__
+ return repr(self) == repr(other)
+
+ def __hash__(self):
+ """Hash function."""
+ return hash(str(self))
+
+ def __repr__(self):
+ """Representation of object."""
+ return str(self)
class CouldNotParseTelegram(XKNXException):
@@ -14,7 +22,7 @@ class CouldNotParseTelegram(XKNXException):
def __init__(self, description, **kwargs):
"""Initialize CouldNotParseTelegram class."""
- super().__init__("Could not parse Telegram")
+ super().__init__()
self.description = description
self.parameter = kwargs
@@ -32,7 +40,7 @@ class CouldNotParseKNXIP(XKNXException):
def __init__(self, description=""):
"""Initialize CouldNotParseTelegram class."""
- super().__init__("Could not parse KNXIP")
+ super().__init__()
self.description = description
def __str__(self):
@@ -46,7 +54,7 @@ class ConversionError(XKNXException):
def __init__(self, description, **kwargs):
"""Initialize ConversionError class."""
- super().__init__("Conversion Error")
+ super().__init__()
self.description = description
self.parameter = kwargs
@@ -63,7 +71,7 @@ class CouldNotParseAddress(XKNXException):
def __init__(self, address=None):
"""Initialize CouldNotParseAddress class."""
- super().__init__("Could not parse address")
+ super().__init__()
self.address = address
def __str__(self):
@@ -76,7 +84,7 @@ class DeviceIllegalValue(XKNXException):
def __init__(self, value, description):
"""Initialize DeviceIllegalValue class."""
- super().__init__("Illegal value for device")
+ super().__init__()
self.value = value
self.description = description
| XKNXException is an unhashable exception
**Description of problem:**
It seems the XKNXException exception is not hashable, causing problems with the Python traceback module: https://bugs.python.org/issue28603. Oddly, that ought to have been solved already.
- [ ] using xknx standalone
- [X] using Home-Assistant knx plugin
**Version information:**
- xknx / Home-Assistant release with the issue:
- last working xknx / Home-Assistant release (if known): 0.98
**KNX installation:**
<!--
Please provide details about your installation.
- Manufacturer and model of relevant actors, sensors or interfaces.
- if you have access to ETS:
- provide relevant group address parameters (DPT, Flags)
- if applicable: excerpt of bus monitor output
-->
**Problem-relevant `xknx.yaml` or `configuration.yaml` entries (fill out even if it seems unimportant):**
**Traceback (if applicable):**
Aug 22 08:18:24 hass hass[35975]: TypeError: unhashable type: 'XKNXException'
Aug 22 08:18:24 hass hass[35975]: _seen.add(exc_value)
Aug 22 08:18:24 hass hass[35975]: File "/usr/lib/python3.6/traceback.py", line 462, in __init__
Aug 22 08:18:24 hass hass[35975]: type(value), value, tb, limit=limit).format(chain=chain):
Aug 22 08:18:24 hass hass[35975]: File "/usr/lib/python3.6/traceback.py", line 100, in print_exception
Aug 22 08:18:24 hass hass[35975]: traceback.print_exception(ei[0], ei[1], tb, None, sio)
Aug 22 08:18:24 hass hass[35975]: File "/usr/lib/python3.6/logging/__init__.py", line 533, in formatException
Aug 22 08:18:24 hass hass[35975]: record.exc_text = self.formatException(record.exc_info)
Aug 22 08:18:24 hass hass[35975]: File "/usr/lib/python3.6/logging/__init__.py", line 583, in format
Aug 22 08:18:24 hass hass[35975]: return fmt.format(record)
Aug 22 08:18:24 hass hass[35975]: File "/usr/lib/python3.6/logging/__init__.py", line 838, in format
Aug 22 08:18:24 hass hass[35975]: msg = self.format(record)
Aug 22 08:18:24 hass hass[35975]: File "/usr/lib/python3.6/logging/__init__.py", line 992, in emit | XKNX/xknx | diff --git a/test/core_tests/exceptions_test.py b/test/core_tests/exceptions_test.py
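The failure above is standard Python 3 behavior: a class that defines `__eq__` without also defining `__hash__` gets `__hash__ = None`, so `traceback.TracebackException` blows up when it tries to add the exception to its `_seen` set. A minimal sketch of the problem and the repr/str-based fix applied in the patch (class names here are illustrative):

```python
class BrokenError(Exception):
    def __eq__(self, other):
        # Defining __eq__ alone implicitly sets __hash__ = None in Python 3.
        return self.__dict__ == other.__dict__

class FixedError(Exception):
    def __eq__(self, other):
        return repr(self) == repr(other)

    def __hash__(self):
        # Explicit __hash__ restores hashability, consistent with __eq__.
        return hash(str(self))

try:
    hash(BrokenError("boom"))
except TypeError as err:
    print(err)  # unhashable type: 'BrokenError'

print(hash(FixedError("boom")) == hash(FixedError("boom")))  # True
```

Note that `__hash__` must agree with `__eq__` (equal objects hash equal), which hashing `str(self)` while comparing `repr(self)` satisfies for these simple exception classes.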
new file mode 100644
index 00000000..41fc1b74
--- /dev/null
+++ b/test/core_tests/exceptions_test.py
@@ -0,0 +1,51 @@
+"""Unit tests for exceptions"""
+import pytest
+
+from xknx.exceptions import (
+ ConversionError, CouldNotParseAddress, CouldNotParseKNXIP,
+ CouldNotParseTelegram, DeviceIllegalValue, XKNXException)
+
+
+@pytest.mark.parametrize(
+ "base,equal,diff",
+ [
+ (
+ ConversionError("desc1"),
+ ConversionError("desc1"),
+ ConversionError("desc2"),
+ ),
+ (
+ CouldNotParseAddress(123),
+ CouldNotParseAddress(123),
+ CouldNotParseAddress(321),
+ ),
+ (
+ CouldNotParseKNXIP("desc1"),
+ CouldNotParseKNXIP("desc1"),
+ CouldNotParseKNXIP("desc2"),
+ ),
+ (
+ CouldNotParseTelegram("desc", arg1=1, arg2=2),
+ CouldNotParseTelegram("desc", arg1=1, arg2=2),
+ CouldNotParseTelegram("desc", arg1=2, arg2=1),
+ ),
+ (
+ DeviceIllegalValue("value1", "desc"),
+ DeviceIllegalValue("value1", "desc"),
+ DeviceIllegalValue("value1", "desc2"),
+ ),
+ (
+ XKNXException("desc1"),
+ XKNXException("desc1"),
+ XKNXException("desc2"),
+ ),
+ ],
+)
+def test_exceptions(base, equal, diff):
+ """Test hashability and repr of exceptions."""
+ assert hash(base) == hash(equal)
+ assert hash(base) != hash(diff)
+ assert base == equal
+ assert base != diff
+ assert repr(base) == repr(equal)
+ assert repr(base) != repr(diff)
diff --git a/test/devices_tests/light_test.py b/test/devices_tests/light_test.py
index b6283672..5de6eddf 100644
--- a/test/devices_tests/light_test.py
+++ b/test/devices_tests/light_test.py
@@ -318,7 +318,7 @@ class TestLight(unittest.TestCase):
self.assertEqual(xknx.telegrams.qsize(), 1)
telegram = xknx.telegrams.get_nowait()
self.assertEqual(telegram,
- Telegram(GroupAddress('1/2/5'), payload=DPTArray((0, 15, 23, 24, 25, 26))))
+ Telegram(GroupAddress('1/2/5'), payload=DPTArray((23, 24, 25, 26, 0, 15))))
self.assertEqual(light.current_color, ([23, 24, 25], 26))
def test_set_color_rgbw_not_possible(self):
@@ -489,7 +489,7 @@ class TestLight(unittest.TestCase):
group_address_color='1/2/4',
group_address_rgbw='1/2/5')
self.assertEqual(light.current_color, (None, None))
- telegram = Telegram(GroupAddress('1/2/5'), payload=DPTArray((0, 15, 23, 24, 25, 26)))
+ telegram = Telegram(GroupAddress('1/2/5'), payload=DPTArray((23, 24, 25, 26, 0, 15)))
self.loop.run_until_complete(asyncio.Task(light.process(telegram)))
self.assertEqual(light.current_color, ([23, 24, 25], 26))
diff --git a/test/devices_tests/remote_value_color_rgbw_test.py b/test/devices_tests/remote_value_color_rgbw_test.py
index 2a56d3b0..d352c299 100644
--- a/test/devices_tests/remote_value_color_rgbw_test.py
+++ b/test/devices_tests/remote_value_color_rgbw_test.py
@@ -26,32 +26,32 @@ class TestRemoteValueColorRGBW(unittest.TestCase):
remote_value = RemoteValueColorRGBW(xknx)
input_list = [100, 101, 102, 127]
input_tuple = (100, 101, 102, 127)
- expected = DPTArray((0x00, 0x0f, 0x64, 0x65, 0x66, 0x7f))
+ expected = DPTArray((0x64, 0x65, 0x66, 0x7f, 0x00, 0x0f))
self.assertEqual(remote_value.to_knx(input_tuple), expected)
self.assertEqual(remote_value.to_knx(input_list), expected)
- self.assertEqual(remote_value.to_knx((15,) + input_tuple), expected)
- self.assertEqual(remote_value.to_knx([15] + input_list), expected)
- self.assertEqual(remote_value.to_knx((0, 15) + input_tuple), expected)
- self.assertEqual(remote_value.to_knx([0, 15] + input_list), expected)
+ self.assertEqual(remote_value.to_knx(input_tuple + (15,)), expected)
+ self.assertEqual(remote_value.to_knx(input_list + [15]), expected)
+ self.assertEqual(remote_value.to_knx(input_tuple + (0, 15)), expected)
+ self.assertEqual(remote_value.to_knx(input_list + [0, 15]), expected)
def test_from_knx(self):
"""Test from_knx function with normal operation."""
xknx = XKNX(loop=self.loop)
remote_value = RemoteValueColorRGBW(xknx)
self.assertEqual(
- remote_value.from_knx(DPTArray((0x00, 0x00, 0x64, 0x65, 0x66, 0x7f))),
+ remote_value.from_knx(DPTArray((0x64, 0x65, 0x66, 0x7f, 0x00, 0x00))),
[0, 0, 0, 0])
self.assertEqual(
- remote_value.from_knx(DPTArray((0x00, 0x0f, 0x64, 0x65, 0x66, 0x7f))),
+ remote_value.from_knx(DPTArray((0x64, 0x65, 0x66, 0x7f, 0x00, 0x0f))),
[100, 101, 102, 127])
self.assertEqual(
- remote_value.from_knx(DPTArray((0x00, 0x00, 0x64, 0x65, 0x66, 0x7f))),
+ remote_value.from_knx(DPTArray((0x64, 0x65, 0x66, 0x7f, 0x00, 0x00))),
[100, 101, 102, 127])
self.assertEqual(
- remote_value.from_knx(DPTArray((0x00, 0x09, 0xff, 0x65, 0x66, 0xff))),
+ remote_value.from_knx(DPTArray((0xff, 0x65, 0x66, 0xff, 0x00, 0x09))),
[255, 101, 102, 255])
self.assertEqual(
- remote_value.from_knx(DPTArray((0x00, 0x01, 0x64, 0x65, 0x66, 0x7f))),
+ remote_value.from_knx(DPTArray((0x64, 0x65, 0x66, 0x7f, 0x00, 0x01))),
[255, 101, 102, 127])
def test_to_knx_error(self):
@@ -90,7 +90,7 @@ class TestRemoteValueColorRGBW(unittest.TestCase):
telegram,
Telegram(
GroupAddress('1/2/3'),
- payload=DPTArray((0x00, 0x0f, 0x64, 0x65, 0x66, 0x67))))
+ payload=DPTArray((0x64, 0x65, 0x66, 0x67, 0x00, 0x0f))))
self.loop.run_until_complete(asyncio.Task(remote_value.set((100, 101, 104, 105))))
self.assertEqual(xknx.telegrams.qsize(), 1)
telegram = xknx.telegrams.get_nowait()
@@ -98,7 +98,7 @@ class TestRemoteValueColorRGBW(unittest.TestCase):
telegram,
Telegram(
GroupAddress('1/2/3'),
- payload=DPTArray((0x00, 0x0f, 0x64, 0x65, 0x68, 0x69))))
+ payload=DPTArray((0x64, 0x65, 0x68, 0x69, 0x00, 0x0f))))
def test_process(self):
"""Test process telegram."""
@@ -108,7 +108,7 @@ class TestRemoteValueColorRGBW(unittest.TestCase):
group_address=GroupAddress("1/2/3"))
telegram = Telegram(
group_address=GroupAddress("1/2/3"),
- payload=DPTArray((0x00, 0x0f, 0x64, 0x65, 0x66, 0x67)))
+ payload=DPTArray((0x64, 0x65, 0x66, 0x67, 0x00, 0x0f)))
self.loop.run_until_complete(asyncio.Task(remote_value.process(telegram)))
self.assertEqual(remote_value.value, [100, 101, 102, 103])
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 3
},
"num_modified_files": 2
} | 0.11 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-timeout",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements/production.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
coverage==7.2.7
exceptiongroup==1.2.2
execnet==2.0.2
importlib-metadata==6.7.0
iniconfig==2.0.0
netifaces==0.10.9
packaging==24.0
pluggy==1.2.0
pytest==7.4.4
pytest-asyncio==0.21.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-timeout==2.3.1
pytest-xdist==3.5.0
PyYAML==6.0.1
tomli==2.0.1
typing_extensions==4.7.1
-e git+https://github.com/XKNX/xknx.git@4b71c28db4cde694695da3dfad0072dd5383717f#egg=xknx
zipp==3.15.0
| name: xknx
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.2.7
- exceptiongroup==1.2.2
- execnet==2.0.2
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- netifaces==0.10.9
- packaging==24.0
- pluggy==1.2.0
- pytest==7.4.4
- pytest-asyncio==0.21.2
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-timeout==2.3.1
- pytest-xdist==3.5.0
- pyyaml==6.0.1
- tomli==2.0.1
- typing-extensions==4.7.1
- zipp==3.15.0
prefix: /opt/conda/envs/xknx
| [
"test/core_tests/exceptions_test.py::test_exceptions[base0-equal0-diff0]",
"test/core_tests/exceptions_test.py::test_exceptions[base1-equal1-diff1]",
"test/core_tests/exceptions_test.py::test_exceptions[base2-equal2-diff2]",
"test/core_tests/exceptions_test.py::test_exceptions[base3-equal3-diff3]",
"test/core_tests/exceptions_test.py::test_exceptions[base4-equal4-diff4]",
"test/core_tests/exceptions_test.py::test_exceptions[base5-equal5-diff5]",
"test/devices_tests/light_test.py::TestLight::test_process_color_rgbw",
"test/devices_tests/light_test.py::TestLight::test_set_color_rgbw",
"test/devices_tests/remote_value_color_rgbw_test.py::TestRemoteValueColorRGBW::test_from_knx",
"test/devices_tests/remote_value_color_rgbw_test.py::TestRemoteValueColorRGBW::test_process",
"test/devices_tests/remote_value_color_rgbw_test.py::TestRemoteValueColorRGBW::test_set",
"test/devices_tests/remote_value_color_rgbw_test.py::TestRemoteValueColorRGBW::test_to_knx"
] | [] | [
"test/devices_tests/light_test.py::TestLight::test_do",
"test/devices_tests/light_test.py::TestLight::test_has_group_address",
"test/devices_tests/light_test.py::TestLight::test_process_color",
"test/devices_tests/light_test.py::TestLight::test_process_color_temperature",
"test/devices_tests/light_test.py::TestLight::test_process_color_temperature_payload_invalid_length",
"test/devices_tests/light_test.py::TestLight::test_process_color_temperature_wrong_payload",
"test/devices_tests/light_test.py::TestLight::test_process_dimm",
"test/devices_tests/light_test.py::TestLight::test_process_dimm_payload_invalid_length",
"test/devices_tests/light_test.py::TestLight::test_process_dimm_wrong_payload",
"test/devices_tests/light_test.py::TestLight::test_process_switch",
"test/devices_tests/light_test.py::TestLight::test_process_switch_callback",
"test/devices_tests/light_test.py::TestLight::test_process_tunable_white",
"test/devices_tests/light_test.py::TestLight::test_process_tunable_white_payload_invalid_length",
"test/devices_tests/light_test.py::TestLight::test_process_tunable_white_wrong_payload",
"test/devices_tests/light_test.py::TestLight::test_set_brightness",
"test/devices_tests/light_test.py::TestLight::test_set_brightness_not_dimmable",
"test/devices_tests/light_test.py::TestLight::test_set_color",
"test/devices_tests/light_test.py::TestLight::test_set_color_not_possible",
"test/devices_tests/light_test.py::TestLight::test_set_color_rgbw_not_possible",
"test/devices_tests/light_test.py::TestLight::test_set_color_temp",
"test/devices_tests/light_test.py::TestLight::test_set_color_temp_unsupported",
"test/devices_tests/light_test.py::TestLight::test_set_off",
"test/devices_tests/light_test.py::TestLight::test_set_on",
"test/devices_tests/light_test.py::TestLight::test_set_tw",
"test/devices_tests/light_test.py::TestLight::test_set_tw_unsupported",
"test/devices_tests/light_test.py::TestLight::test_supports_color_false",
"test/devices_tests/light_test.py::TestLight::test_supports_color_temp_false",
"test/devices_tests/light_test.py::TestLight::test_supports_color_temp_true",
"test/devices_tests/light_test.py::TestLight::test_supports_color_true",
"test/devices_tests/light_test.py::TestLight::test_supports_dimm_no",
"test/devices_tests/light_test.py::TestLight::test_supports_dimm_yes",
"test/devices_tests/light_test.py::TestLight::test_supports_rgbw_false",
"test/devices_tests/light_test.py::TestLight::test_supports_rgbw_true",
"test/devices_tests/light_test.py::TestLight::test_supports_tw_no",
"test/devices_tests/light_test.py::TestLight::test_supports_tw_yes",
"test/devices_tests/light_test.py::TestLight::test_sync",
"test/devices_tests/light_test.py::TestLight::test_sync_state_address",
"test/devices_tests/light_test.py::TestLight::test_wrong_do",
"test/devices_tests/remote_value_color_rgbw_test.py::TestRemoteValueColorRGBW::test_to_knx_error",
"test/devices_tests/remote_value_color_rgbw_test.py::TestRemoteValueColorRGBW::test_to_process_error"
] | [] | MIT License | 5,371 | 2,052 | [
"xknx/devices/remote_value_color_rgbw.py",
"xknx/exceptions/exception.py"
] |
hgrecco__pint-877 | d655f3251c259eea52beb74cbb16e694eda197ed | 2019-09-11 09:57:06 | 0d09f54a08a3bbd536184137cbb01da5949f5dba | diff --git a/pint/util.py b/pint/util.py
index 587517a..d89b957 100644
--- a/pint/util.py
+++ b/pint/util.py
@@ -327,9 +327,16 @@ class UnitsContainer(Mapping):
def __eq__(self, other):
if isinstance(other, UnitsContainer):
- # Not the same as hash(self); see ParserHelper.__hash__ and __eq__
- return UnitsContainer.__hash__(self) == UnitsContainer.__hash__(other)
- if isinstance(other, string_types):
+ # UnitsContainer.__hash__(self) is not the same as hash(self); see
+ # ParserHelper.__hash__ and __eq__.
+ # Different hashes guarantee that the actual contents are different, but
+ # identical hashes give no guarantee of equality.
+ # e.g. in CPython, hash(-1) == hash(-2)
+ if UnitsContainer.__hash__(self) != UnitsContainer.__hash__(other):
+ return False
+ other = other._d
+
+ elif isinstance(other, string_types):
other = ParserHelper.from_string(other)
other = other._d
@@ -504,9 +511,11 @@ class ParserHelper(UnitsContainer):
self._d, self._hash, self.scale = state
def __eq__(self, other):
- if isinstance(other, self.__class__):
- return self.scale == other.scale and\
+ if isinstance(other, ParserHelper):
+ return (
+ self.scale == other.scale and
super(ParserHelper, self).__eq__(other)
+ )
elif isinstance(other, string_types):
return self == ParserHelper.from_string(other)
elif isinstance(other, Number):
| Bug in compound unit dimensionality/unit reduction after caching?
I came across a bizarre bug that popped up on the master branch recently:
```python
import pint
ureg = pint.UnitRegistry()
print(ureg('joule').to_base_units())
print(ureg('joule * second ** 2 / kilogram / meter').to_base_units())
```
gives the following traceback
```
DimensionalityError Traceback (most recent call last)
<ipython-input-18-62589ad7e050> in <module>
----> 1 print((1 * ureg.joule * ureg.second ** 2 / ureg.kilogram / ureg.meter).to_base_units())
~/dev/pint/pint/quantity.py in to_base_units(self)
478 _, other = self._REGISTRY._get_base_units(self._units)
479
--> 480 magnitude = self._convert_magnitude_not_inplace(other)
481
482 return self.__class__(magnitude, other)
~/dev/pint/pint/quantity.py in _convert_magnitude_not_inplace(self, other, *contexts, **ctx_kwargs)
406 return self._REGISTRY.convert(self._magnitude, self._units, other)
407
--> 408 return self._REGISTRY.convert(self._magnitude, self._units, other)
409
410 def _convert_magnitude(self, other, *contexts, **ctx_kwargs):
~/dev/pint/pint/registry.py in convert(self, value, src, dst, inplace)
707 return value
708
--> 709 return self._convert(value, src, dst, inplace)
710
711 def _convert(self, value, src, dst, inplace=False, check_dimensionality=True):
~/dev/pint/pint/registry.py in _convert(self, value, src, dst, inplace)
1237 value, src = src._magnitude, src._units
1238
-> 1239 return super(ContextRegistry, self)._convert(value, src, dst, inplace)
1240
1241 def _get_compatible_units(self, input_units, group_or_system):
~/dev/pint/pint/registry.py in _convert(self, value, src, dst, inplace)
990
991 if not (src_offset_unit or dst_offset_unit):
--> 992 return super(NonMultiplicativeRegistry, self)._convert(value, src, dst, inplace)
993
994 src_dim = self._get_dimensionality(src)
~/dev/pint/pint/registry.py in _convert(self, value, src, dst, inplace, check_dimensionality)
729 # then the conversion cannot be performed.
730 if src_dim != dst_dim:
--> 731 raise DimensionalityError(src, dst, src_dim, dst_dim)
732
733 # Here src and dst have only multiplicative units left. Thus we can
DimensionalityError: Cannot convert from 'joule * second ** 2 / kilogram / meter' ([length]) to 'dimensionless' (dimensionless)
```
but it works fine after removing the `joule` line.
After digging through the commit history, it seems to lead back to https://github.com/hgrecco/pint/commit/a9a97ba98167a6a20df874a14343d303a3cd2163, since any point in the history I checked including that commit fails with this error, and any without it returns the expected result of `1.0 meter` for the second line. Glancing through the changes made there, it makes sense that it is tied to this seemingly cache-related issue. That being said, I have no clue what is particularly going wrong here or what exactly in https://github.com/hgrecco/pint/commit/a9a97ba98167a6a20df874a14343d303a3cd2163 could have caused this.
In case it helps with troubleshooting...
...the error no longer arises when any of the compound pieces of the last unit are removed.
...it still arises with the following:
```python
import pint
ureg = pint.UnitRegistry()
print((1 * ureg.joule).to_base_units())
print((1 * ureg.joule * ureg.second ** 2 / ureg.kilogram / ureg.meter).to_base_units())
```
...but does not arise with the following:
```python
import pint
ureg = pint.UnitRegistry()
print((1 * ureg.joule).to_base_units())
print((1 * ureg.joule * ureg.second ** 2 / ureg.kilogram / ureg.meter).to('m'))
```
Another oddity is that, despite what the last non-failing example may suggest, this first arose in MetPy through a function that does not use `.to_base_units`, but rather just `.to`. However, after spending a decent amount of time on it, I can't seem to come up with a short example that replicates the failure with `.to`.
ping @crusaderky, in case you may have any insight here with the seemingly problematic commit from https://github.com/hgrecco/pint/pull/864. | hgrecco/pint | diff --git a/pint/testsuite/test_issues.py b/pint/testsuite/test_issues.py
index 6ca7035..e031fe5 100644
--- a/pint/testsuite/test_issues.py
+++ b/pint/testsuite/test_issues.py
@@ -11,7 +11,7 @@ from pint import UnitRegistry
from pint.unit import UnitsContainer
from pint.util import ParserHelper
-from pint.compat import np, long_type
+from pint.compat import np
from pint.errors import UndefinedUnitError, DimensionalityError
from pint.testsuite import QuantityTestCase, helpers
@@ -699,3 +699,21 @@ class TestIssues(QuantityTestCase):
ureg2.define('test123 = 456 kg')
assert ureg1('1 test123').to('kg').magnitude == 123
assert ureg2('1 test123').to('kg').magnitude == 456
+
+ def test_issue876(self):
+ # Same hash must not imply equality.
+
+ # As an implementation detail of CPython, hash(-1) == hash(-2).
+ # This test is useless in potential alternative Python implementations where
+ # hash(-1) != hash(-2); one would need to find hash collisions specific for each
+ # implementation
+
+ a = UnitsContainer({"[mass]": -1})
+ b = UnitsContainer({"[mass]": -2})
+ c = UnitsContainer({"[mass]": -3})
+
+ # Guarantee working on alternative Python implementations
+ assert (hash(-1) == hash(-2)) == (hash(a) == hash(b))
+ assert (hash(-1) == hash(-3)) == (hash(a) == hash(c))
+ assert a != b
+ assert a != c
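The new test pins down the root cause: equal hashes must not be treated as equality. The collision it relies on can be reproduced with plain builtins; a minimal, CPython-specific sketch (no pint required — `frozenset` of items stands in for the `UnitsContainer` internals):

```python
# On CPython, hash(-1) == hash(-2) == -2, because -1 is reserved as the
# C-level error sentinel for hash functions. Any container whose hash is
# derived purely from its elements' hashes therefore collides for the
# exponents -1 and -2, which is consistent with the cache-related
# misbehaviour described in the issue above.
assert hash(-1) == hash(-2)

a = frozenset({("[mass]", -1)})
b = frozenset({("[mass]", -2)})
assert hash(a) == hash(b)  # hashes collide...
assert a != b              # ...but the containers are not equal
```

This is why the fix must compare contents in `__eq__` rather than relying on a hash-keyed shortcut.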
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 1
} | 0.9 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | coverage==7.8.0
exceptiongroup==1.2.2
iniconfig==2.1.0
packaging==24.2
-e git+https://github.com/hgrecco/pint.git@d655f3251c259eea52beb74cbb16e694eda197ed#egg=Pint
pluggy==1.5.0
pytest==8.3.5
pytest-cov==6.0.0
tomli==2.2.1
| name: pint
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.8.0
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- pytest-cov==6.0.0
- tomli==2.2.1
prefix: /opt/conda/envs/pint
| [
"pint/testsuite/test_issues.py::TestIssues::test_issue876"
] | [] | [
"pint/testsuite/test_issues.py::TestIssues::test_alternative_angstrom_definition",
"pint/testsuite/test_issues.py::TestIssues::test_angstrom_creation",
"pint/testsuite/test_issues.py::TestIssues::test_issue104",
"pint/testsuite/test_issues.py::TestIssues::test_issue105",
"pint/testsuite/test_issues.py::TestIssues::test_issue121",
"pint/testsuite/test_issues.py::TestIssues::test_issue170",
"pint/testsuite/test_issues.py::TestIssues::test_issue252",
"pint/testsuite/test_issues.py::TestIssues::test_issue29",
"pint/testsuite/test_issues.py::TestIssues::test_issue323",
"pint/testsuite/test_issues.py::TestIssues::test_issue339",
"pint/testsuite/test_issues.py::TestIssues::test_issue354_356_370",
"pint/testsuite/test_issues.py::TestIssues::test_issue45",
"pint/testsuite/test_issues.py::TestIssues::test_issue468",
"pint/testsuite/test_issues.py::TestIssues::test_issue50",
"pint/testsuite/test_issues.py::TestIssues::test_issue52",
"pint/testsuite/test_issues.py::TestIssues::test_issue523",
"pint/testsuite/test_issues.py::TestIssues::test_issue532",
"pint/testsuite/test_issues.py::TestIssues::test_issue54",
"pint/testsuite/test_issues.py::TestIssues::test_issue54_related",
"pint/testsuite/test_issues.py::TestIssues::test_issue61",
"pint/testsuite/test_issues.py::TestIssues::test_issue61_notNP",
"pint/testsuite/test_issues.py::TestIssues::test_issue62",
"pint/testsuite/test_issues.py::TestIssues::test_issue625a",
"pint/testsuite/test_issues.py::TestIssues::test_issue625b",
"pint/testsuite/test_issues.py::TestIssues::test_issue625c",
"pint/testsuite/test_issues.py::TestIssues::test_issue655a",
"pint/testsuite/test_issues.py::TestIssues::test_issue655b",
"pint/testsuite/test_issues.py::TestIssues::test_issue66",
"pint/testsuite/test_issues.py::TestIssues::test_issue66b",
"pint/testsuite/test_issues.py::TestIssues::test_issue69",
"pint/testsuite/test_issues.py::TestIssues::test_issue783",
"pint/testsuite/test_issues.py::TestIssues::test_issue85",
"pint/testsuite/test_issues.py::TestIssues::test_issue856",
"pint/testsuite/test_issues.py::TestIssues::test_issue856b",
"pint/testsuite/test_issues.py::TestIssues::test_issue86",
"pint/testsuite/test_issues.py::TestIssues::test_issue93",
"pint/testsuite/test_issues.py::TestIssues::test_issues86b",
"pint/testsuite/test_issues.py::TestIssues::test_micro_creation"
] | [] | BSD | 5,379 | 401 | [
"pint/util.py"
] |
|
barrust__pyspellchecker-54 | 7fe45c07fb80e323db0e1a83777c5afe061a989b | 2019-09-11 11:57:18 | aa9668243fef58ff62c505a727b4a7284b81f42a | coveralls:
[](https://coveralls.io/builds/25651605)
Coverage decreased (-0.3%) to 98.79% when pulling **07a933092d1a3f667cd6fee3fd17b77cf7241dd8 on hotfix/more-encoding** into **7fe45c07fb80e323db0e1a83777c5afe061a989b on master**.
| diff --git a/spellchecker/info.py b/spellchecker/info.py
index 26de3b9..4ff8196 100644
--- a/spellchecker/info.py
+++ b/spellchecker/info.py
@@ -5,7 +5,7 @@ __author__ = "Tyler Barrus"
__maintainer__ = "Tyler Barrus"
__email__ = "[email protected]"
__license__ = "MIT"
-__version__ = "0.5.1"
+__version__ = "0.5.2"
__credits__ = ["Peter Norvig"]
__url__ = "https://github.com/barrust/pyspellchecker"
__bugtrack_url__ = "{0}/issues".format(__url__)
diff --git a/spellchecker/spellchecker.py b/spellchecker/spellchecker.py
index a32cf10..589c77b 100644
--- a/spellchecker/spellchecker.py
+++ b/spellchecker/spellchecker.py
@@ -7,7 +7,7 @@ import json
import string
from collections import Counter
-from .utils import load_file, write_file, _parse_into_words
+from .utils import load_file, write_file, _parse_into_words, ENSURE_UNICODE
class SpellChecker(object):
@@ -62,10 +62,12 @@ class SpellChecker(object):
def __contains__(self, key):
""" setup easier known checks """
+ key = ENSURE_UNICODE(key)
return key in self._word_frequency
def __getitem__(self, key):
""" setup easier frequency checks """
+ key = ENSURE_UNICODE(key)
return self._word_frequency[key]
@property
@@ -105,6 +107,7 @@ class SpellChecker(object):
text (str): The text to split into individual words
Returns:
list(str): A listing of all words in the provided text """
+ text = ENSURE_UNICODE(text)
return self._tokenizer(text)
def export(self, filepath, encoding="utf-8", gzipped=True):
@@ -131,6 +134,7 @@ class SpellChecker(object):
float: The probability that the word is the correct word """
if total_words is None:
total_words = self._word_frequency.total_words
+ word = ENSURE_UNICODE(word)
return self._word_frequency.dictionary[word] / total_words
def correction(self, word):
@@ -140,6 +144,7 @@ class SpellChecker(object):
word (str): The word to correct
Returns:
str: The most likely candidate """
+ word = ENSURE_UNICODE(word)
candidates = list(self.candidates(word))
return max(sorted(candidates), key=self.word_probability)
@@ -151,6 +156,7 @@ class SpellChecker(object):
word (str): The word for which to calculate candidate spellings
Returns:
set: The set of words that are possible candidates """
+ word = ENSURE_UNICODE(word)
if self.known([word]): # short-cut if word is correct already
return {word}
# get edit distance 1...
@@ -174,6 +180,7 @@ class SpellChecker(object):
Returns:
set: The set of those words from the input that are in the \
corpus """
+ words = [ENSURE_UNICODE(w) for w in words]
tmp = [w if self._case_sensitive else w.lower() for w in words]
return set(
w
@@ -191,6 +198,7 @@ class SpellChecker(object):
Returns:
set: The set of those words from the input that are not in \
the corpus """
+ words = [ENSURE_UNICODE(w) for w in words]
tmp = [
w if self._case_sensitive else w.lower()
for w in words
@@ -207,7 +215,7 @@ class SpellChecker(object):
Returns:
set: The set of strings that are edit distance one from the \
provided word """
- word = word.lower()
+ word = ENSURE_UNICODE(word).lower()
if self._check_if_should_check(word) is False:
return {word}
letters = self._word_frequency.letters
@@ -227,7 +235,7 @@ class SpellChecker(object):
Returns:
set: The set of strings that are edit distance two from the \
provided word """
- word = word.lower()
+ word = ENSURE_UNICODE(word).lower()
return [
e2 for e1 in self.edit_distance_1(word) for e2 in self.edit_distance_1(e1)
]
@@ -241,8 +249,13 @@ class SpellChecker(object):
Returns:
set: The set of strings that are edit distance two from the \
provided words """
- words = [word.lower() for word in words]
- return [e2 for e1 in words for e2 in self.edit_distance_1(e1)]
+ words = [ENSURE_UNICODE(w) for w in words]
+ tmp = [
+ w if self._case_sensitive else w.lower()
+ for w in words
+ if self._check_if_should_check(w)
+ ]
+ return [e2 for e1 in tmp for e2 in self.edit_distance_1(e1)]
@staticmethod
def _check_if_should_check(word):
@@ -283,11 +296,13 @@ class WordFrequency(object):
def __contains__(self, key):
""" turn on contains """
+ key = ENSURE_UNICODE(key)
key = key if self._case_sensitive else key.lower()
return key in self._dictionary
def __getitem__(self, key):
""" turn on getitem """
+ key = ENSURE_UNICODE(key)
key = key if self._case_sensitive else key.lower()
return self._dictionary[key]
@@ -298,6 +313,7 @@ class WordFrequency(object):
Args:
key (str): The key to remove
default (obj): The value to return if key is not present """
+ key = ENSURE_UNICODE(key)
key = key if self._case_sensitive else key.lower()
return self._dictionary.pop(key, default)
@@ -344,6 +360,7 @@ class WordFrequency(object):
str: The next `word` in the tokenized string
Note:
This is the same as the `spellchecker.split_words()` """
+ text = ENSURE_UNICODE(text)
for word in self._tokenizer(text):
yield word if self._case_sensitive else word.lower()
@@ -408,6 +425,7 @@ class WordFrequency(object):
text (str): The text to be loaded
tokenizer (function): The function to use to tokenize a string
"""
+ text = ENSURE_UNICODE(text)
if tokenizer:
words = [x if self._case_sensitive else x.lower() for x in tokenizer(text)]
else:
@@ -421,6 +439,7 @@ class WordFrequency(object):
Args:
words (list): The list of words to be loaded """
+ words = [ENSURE_UNICODE(w) for w in words]
self._dictionary.update(
[word if self._case_sensitive else word.lower() for word in words]
)
@@ -431,6 +450,7 @@ class WordFrequency(object):
Args:
word (str): The word to add """
+ word = ENSURE_UNICODE(word)
self.load_words([word])
def remove_words(self, words):
@@ -438,6 +458,7 @@ class WordFrequency(object):
Args:
words (list): The list of words to remove """
+ words = [ENSURE_UNICODE(w) for w in words]
for word in words:
self._dictionary.pop(word if self._case_sensitive else word.lower())
self._update_dictionary()
@@ -447,6 +468,7 @@ class WordFrequency(object):
Args:
word (str): The word to remove """
+ word = ENSURE_UNICODE(word)
self._dictionary.pop(word if self._case_sensitive else word.lower())
self._update_dictionary()
diff --git a/spellchecker/utils.py b/spellchecker/utils.py
index 4abc1f3..aefcc10 100644
--- a/spellchecker/utils.py
+++ b/spellchecker/utils.py
@@ -9,11 +9,22 @@ if sys.version_info < (3, 0):
READMODE = 'rb'
WRITEMODE = 'wb'
OPEN = io.open # hijack this
+
+ def ENSURE_UNICODE(s, encoding='utf-8'):
+ if isinstance(s, str):
+ return s.decode(encoding)
+ return s
+
else:
READMODE = 'rt'
WRITEMODE = 'wt'
OPEN = open
+ def ENSURE_UNICODE(s, encoding='utf-8'):
+ if isinstance(s, bytes):
+ return s.decode(encoding)
+ return s
+
@contextlib.contextmanager
def __gzip_read(filename, mode='rb', encoding='UTF-8'):
| Windows encoding errors not fixed in version 0.5.1
On windows, spellchecker still has problems with encoding errors:
```python
import spellchecker
spell = spellchecker.SpellChecker("de")
# This word is correctly spelled
spell.known(["beschäftigen"]) # output: set()
spell.correction("beschäftigen") # output: 'beschã¤ftigen'
spellchecker.__version__ # output: '0.5.1'
``` | barrust/pyspellchecker | diff --git a/tests/spellchecker_test.py b/tests/spellchecker_test.py
index 97f1512..b200d6b 100644
--- a/tests/spellchecker_test.py
+++ b/tests/spellchecker_test.py
@@ -36,13 +36,13 @@ class TestSpellChecker(unittest.TestCase):
self.assertEqual(spell.candidates('manasaeds'), {'manasaeds'})
def test_words(self):
- ''' rest the parsing of words '''
+ ''' test the parsing of words '''
spell = SpellChecker()
res = ['this', 'is', 'a', 'test', 'of', 'this']
self.assertEqual(spell.split_words('This is a test of this'), res)
def test_words_more_complete(self):
- ''' rest the parsing of words '''
+ ''' test the parsing of words '''
spell = SpellChecker()
res = ['this', 'is', 'a', 'test', 'of', 'the', 'word', 'parser', 'it', 'should', 'work', 'correctly']
self.assertEqual(spell.split_words('This is a test of the word parser. It should work correctly!!!'), res)
@@ -368,3 +368,15 @@ class TestSpellChecker(unittest.TestCase):
self.assertFalse('awesome' in spell)
self.assertTrue(spell['whale'])
self.assertTrue('sea.' in spell)
+
+ def test_bytes_input(self):
+ """ Test using bytes instead of unicode as input """
+
+ var = b"bike"
+
+ here = os.path.dirname(__file__)
+ filepath = '{}/resources/small_dictionary.json'.format(here)
+ spell = SpellChecker(language=None, local_dictionary=filepath)
+
+ self.assertTrue(var in spell)
+ self.assertEqual(spell[var], 60)
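The pattern the patch applies throughout is to normalize any bytes input to text before dictionary lookups; a minimal Python 3 sketch of that helper (standalone names mirroring the patched `ENSURE_UNICODE`, with a plain dict standing in for the word-frequency table):

```python
def ensure_unicode(s, encoding="utf-8"):
    # Decode bytes to str so that b"bike" and "bike" resolve to the same key.
    if isinstance(s, bytes):
        return s.decode(encoding)
    return s

frequency = {"bike": 60}                      # stand-in for the frequency dict
assert b"bike" not in frequency               # raw bytes miss the str key...
assert ensure_unicode(b"bike") in frequency   # ...normalized input hits it
assert frequency[ensure_unicode(b"bike")] == 60
```

Calling this at every public entry point (as the patch does for `__contains__`, `known`, `correction`, etc.) keeps mis-encoded lookups like the `beschäftigen` example from silently missing the dictionary.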
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 3
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"coverage"
],
"pre_install": [
"pip install -r requirements/requirements-dev.txt"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | astroid==2.15.8
certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
coverage==6.5.0
coveralls==3.3.1
dill==0.3.7
docopt==0.6.2
exceptiongroup==1.2.2
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
isort==5.11.5
lazy-object-proxy==1.9.0
mccabe==0.7.0
packaging==24.0
platformdirs==4.0.0
pluggy==1.2.0
pycodestyle==2.10.0
pylint==2.17.7
-e git+https://github.com/barrust/pyspellchecker.git@7fe45c07fb80e323db0e1a83777c5afe061a989b#egg=pyspellchecker
pytest==7.4.4
requests==2.31.0
tomli==2.0.1
tomlkit==0.12.5
typed-ast==1.5.5
typing_extensions==4.7.1
urllib3==2.0.7
wrapt==1.16.0
zipp==3.15.0
| name: pyspellchecker
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- astroid==2.15.8
- charset-normalizer==3.4.1
- coverage==6.5.0
- coveralls==3.3.1
- dill==0.3.7
- docopt==0.6.2
- exceptiongroup==1.2.2
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- isort==5.11.5
- lazy-object-proxy==1.9.0
- mccabe==0.7.0
- packaging==24.0
- platformdirs==4.0.0
- pluggy==1.2.0
- pycodestyle==2.10.0
- pylint==2.17.7
- pytest==7.4.4
- requests==2.31.0
- tomli==2.0.1
- tomlkit==0.12.5
- typed-ast==1.5.5
- typing-extensions==4.7.1
- urllib3==2.0.7
- wrapt==1.16.0
- zipp==3.15.0
prefix: /opt/conda/envs/pyspellchecker
| [
"tests/spellchecker_test.py::TestSpellChecker::test_bytes_input"
] | [] | [
"tests/spellchecker_test.py::TestSpellChecker::test_add_word",
"tests/spellchecker_test.py::TestSpellChecker::test_adding_unicode",
"tests/spellchecker_test.py::TestSpellChecker::test_candidates",
"tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_defaults_to_false",
"tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_case_sensitive_true",
"tests/spellchecker_test.py::TestSpellChecker::test_capitalization_when_language_set",
"tests/spellchecker_test.py::TestSpellChecker::test_checking_odd_word",
"tests/spellchecker_test.py::TestSpellChecker::test_correction",
"tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_invalud",
"tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one",
"tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_one_property",
"tests/spellchecker_test.py::TestSpellChecker::test_edit_distance_two",
"tests/spellchecker_test.py::TestSpellChecker::test_import_export_gzip",
"tests/spellchecker_test.py::TestSpellChecker::test_import_export_json",
"tests/spellchecker_test.py::TestSpellChecker::test_load_external_dictionary",
"tests/spellchecker_test.py::TestSpellChecker::test_load_text_file",
"tests/spellchecker_test.py::TestSpellChecker::test_missing_dictionary",
"tests/spellchecker_test.py::TestSpellChecker::test_pop",
"tests/spellchecker_test.py::TestSpellChecker::test_pop_default",
"tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold",
"tests/spellchecker_test.py::TestSpellChecker::test_remove_by_threshold_using_items",
"tests/spellchecker_test.py::TestSpellChecker::test_remove_word",
"tests/spellchecker_test.py::TestSpellChecker::test_remove_words",
"tests/spellchecker_test.py::TestSpellChecker::test_spanish_dict",
"tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_file",
"tests/spellchecker_test.py::TestSpellChecker::test_tokenizer_provided",
"tests/spellchecker_test.py::TestSpellChecker::test_unique_words",
"tests/spellchecker_test.py::TestSpellChecker::test_unknown_words",
"tests/spellchecker_test.py::TestSpellChecker::test_word_contains",
"tests/spellchecker_test.py::TestSpellChecker::test_word_frequency",
"tests/spellchecker_test.py::TestSpellChecker::test_word_in",
"tests/spellchecker_test.py::TestSpellChecker::test_word_known",
"tests/spellchecker_test.py::TestSpellChecker::test_word_probability",
"tests/spellchecker_test.py::TestSpellChecker::test_words",
"tests/spellchecker_test.py::TestSpellChecker::test_words_more_complete"
] | [] | MIT License | 5,380 | 2,078 | [
"spellchecker/info.py",
"spellchecker/spellchecker.py",
"spellchecker/utils.py"
] |
MycroftAI__lingua-franca-23 | 4c66a5ec87113842519b5f6166695b75ed01a46f | 2019-09-17 23:00:32 | 4c66a5ec87113842519b5f6166695b75ed01a46f | diff --git a/lingua_franca/lang/parse_en.py b/lingua_franca/lang/parse_en.py
index 872c478..2136423 100644
--- a/lingua_franca/lang/parse_en.py
+++ b/lingua_franca/lang/parse_en.py
@@ -1363,7 +1363,7 @@ def extract_datetime_en(string, dateNow, default_time):
idx += used - 1
found = True
# check that we found a date
- if not date_found:
+ if not date_found():
return None
if dayOffset is False:
| extract_datetime(" ") returns today not None
Clearly this is my favourite method looking at the issues list :yum:
An empty string passed to `extract_datetime("")` correctly returns `None`.
However, a non-empty string that does not contain a date, e.g. `extract_datetime(" ")`, returns a datetime of the current day at midnight.
Documentation for the method:
https://mycroft-core.readthedocs.io/en/stable/source/mycroft.util.html#extract-datetime | MycroftAI/lingua-franca | diff --git a/test/test_parse.py b/test/test_parse.py
index a6533cf..c1510d9 100644
--- a/test/test_parse.py
+++ b/test/test_parse.py
@@ -481,6 +481,10 @@ class TestNormalize(unittest.TestCase):
morning = datetime(2017, 6, 27, 8, 1, 2)
evening = datetime(2017, 6, 27, 20, 1, 2)
noonish = datetime(2017, 6, 27, 12, 1, 2)
+ self.assertEqual(
+ extract_datetime('feed the fish'), None)
+ self.assertEqual(
+ extract_datetime(' '), None)
self.assertEqual(
extract_datetime('feed fish at 10 o\'clock', morning)[0],
datetime(2017, 6, 27, 10, 0, 0))
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup==1.2.2
iniconfig==2.1.0
-e git+https://github.com/MycroftAI/lingua-franca.git@4c66a5ec87113842519b5f6166695b75ed01a46f#egg=lingua_franca
packaging==24.2
pluggy==1.5.0
pytest==8.3.5
python-dateutil==2.6.0
six==1.17.0
tomli==2.2.1
| name: lingua-franca
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- python-dateutil==2.6.0
- six==1.17.0
- tomli==2.2.1
prefix: /opt/conda/envs/lingua-franca
| [
"test/test_parse.py::TestNormalize::test_extract_ambiguous_time_en"
] | [] | [
"test/test_parse.py::TestFuzzyMatch::test_match_one",
"test/test_parse.py::TestFuzzyMatch::test_matches",
"test/test_parse.py::TestNormalize::test_articles",
"test/test_parse.py::TestNormalize::test_combinations",
"test/test_parse.py::TestNormalize::test_contractions",
"test/test_parse.py::TestNormalize::test_extract_duration_en",
"test/test_parse.py::TestNormalize::test_extract_number",
"test/test_parse.py::TestNormalize::test_extract_relativedatetime_en",
"test/test_parse.py::TestNormalize::test_extractdatetime_en",
"test/test_parse.py::TestNormalize::test_gender",
"test/test_parse.py::TestNormalize::test_multiple_numbers",
"test/test_parse.py::TestNormalize::test_numbers",
"test/test_parse.py::TestNormalize::test_spaces"
] | [] | Apache License 2.0 | 5,417 | 156 | [
"lingua_franca/lang/parse_en.py"
] |
|
PSLmodels__ParamTools-80 | 06d113a25a3a6e3193fc3704c40a92cbcfdbe80b | 2019-09-18 22:39:20 | 6ff4fffa1c25e78704739228eb57cc8379ee7870 | diff --git a/paramtools/parameters.py b/paramtools/parameters.py
index a44fc7c..1d91538 100644
--- a/paramtools/parameters.py
+++ b/paramtools/parameters.py
@@ -369,10 +369,14 @@ class Parameters:
f"\nYou may be able to describe this parameter's values with additional "
f"labels\nand the 'label_to_extend' operator."
)
- elif not shape:
- exp_full_shape = 1
- else:
- exp_full_shape = reduce(lambda x, y: x * y, shape)
+ elif not shape and number_dims == 0:
+ data_type = self._numpy_type(param)
+ value = value_items[0]["value"]
+ if data_type == object:
+ return value
+ else:
+ return data_type(value)
+ exp_full_shape = reduce(lambda x, y: x * y, shape)
if len(value_items) != exp_full_shape:
# maintains label value order over value objects.
exp_grid = list(itertools.product(*value_order.values()))
| Scalar values are set as zero dimension arrays
When `array_first` is `True`, scalar values are set to zero dimension NumPy arrays. For the most part, these values act like normal Python `int`s or `float`s, but they cause problems in edge cases. Further, this behavior may be confusing for new users. I'll open a PR this week to fix this. | PSLmodels/ParamTools | diff --git a/paramtools/tests/test_parameters.py b/paramtools/tests/test_parameters.py
index 4d91777..d001829 100644
--- a/paramtools/tests/test_parameters.py
+++ b/paramtools/tests/test_parameters.py
@@ -833,7 +833,7 @@ class TestArrayFirst:
[datetime.date(2018, 1, 15)]
]
assert af_params.int_dense_array_param.tolist() == [[[4, 5, 6]]]
- assert af_params.str_choice_param.tolist() == "value0"
+ assert af_params.str_choice_param == "value0"
def test_from_array(self, af_params):
exp = [
@@ -894,6 +894,31 @@ class TestArrayFirst:
with pytest.raises(ParamToolsError):
params.adjust({"arr": [{"label1": 1, "value": [4, 5, 6]}]})
+ def test_array_first_with_zero_dim(self):
+ class ZeroDim(Parameters):
+ defaults = {
+ "myint": {
+ "title": "my int",
+ "description": "",
+ "type": "int",
+ "value": 2,
+ },
+ "mystring": {
+ "title": "my string",
+ "description": "",
+ "type": "str",
+ "value": "hello world",
+ },
+ }
+ array_first = True
+
+ params = ZeroDim()
+ assert params.myint == 2.0
+ assert isinstance(params.myint, np.int64)
+
+ assert params.mystring == "hello world"
+ assert isinstance(params.mystring, str)
+
class TestCollisions:
def test_collision_list(self):
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 1
} | 0.10 | {
"env_vars": null,
"env_yml_path": [
"environment.yml"
],
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "environment.yml",
"pip_packages": [
"pytest"
],
"pre_install": [
"pre-commit install"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | black==23.3.0
cfgv==3.3.1
click==8.1.8
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1666700638685/work
distlib==0.3.9
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1720869315914/work
filelock==3.12.2
flake8==5.0.4
identify==2.5.24
importlib-metadata==4.2.0
iniconfig @ file:///home/conda/feedstock_root/build_artifacts/iniconfig_1673103042956/work
marshmallow @ file:///home/conda/feedstock_root/build_artifacts/marshmallow_1704873359756/work
mccabe==0.7.0
mypy-extensions==1.0.0
nodeenv==1.9.1
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1649806299270/work
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1696202382185/work
-e git+https://github.com/PSLmodels/ParamTools.git@06d113a25a3a6e3193fc3704c40a92cbcfdbe80b#egg=paramtools
pathspec==0.11.2
platformdirs==2.6.2
pluggy @ file:///home/conda/feedstock_root/build_artifacts/pluggy_1648772594554/work
pre-commit==2.21.0
pycodestyle==2.9.1
pyflakes==2.5.0
pytest @ file:///home/conda/feedstock_root/build_artifacts/pytest_1704035161844/work
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1709299778482/work
PyYAML==6.0.1
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
tomli @ file:///home/conda/feedstock_root/build_artifacts/tomli_1727974628237/work
typed-ast==1.5.5
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1688315532570/work
virtualenv==20.16.2
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1677313463193/work
| name: ParamTools
channels:
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=2_gnu
- ca-certificates=2025.1.31=hbcca054_0
- colorama=0.4.6=pyhd8ed1ab_0
- exceptiongroup=1.2.2=pyhd8ed1ab_0
- importlib_metadata=4.11.4=hd8ed1ab_0
- iniconfig=2.0.0=pyhd8ed1ab_0
- ld_impl_linux-64=2.43=h712a8e2_4
- libblas=3.9.0=31_h59b9bed_openblas
- libcblas=3.9.0=31_he106b2a_openblas
- libffi=3.4.6=h2dba641_0
- libgcc=14.2.0=h767d61c_2
- libgcc-ng=14.2.0=h69a702a_2
- libgfortran=14.2.0=h69a702a_2
- libgfortran5=14.2.0=hf1ad2bd_2
- libgomp=14.2.0=h767d61c_2
- liblapack=3.9.0=31_h7ac8fdf_openblas
- liblzma=5.6.4=hb9d3cd8_0
- liblzma-devel=5.6.4=hb9d3cd8_0
- libnsl=2.0.1=hd590300_0
- libopenblas=0.3.29=pthreads_h94d23a6_0
- libsqlite=3.49.1=hee588c1_2
- libstdcxx=14.2.0=h8f9b012_2
- libstdcxx-ng=14.2.0=h4852527_2
- libzlib=1.3.1=hb9d3cd8_2
- marshmallow=3.20.2=pyhd8ed1ab_0
- ncurses=6.5=h2d0b736_3
- numpy=1.21.6=py37h976b520_0
- openssl=3.4.1=h7b32b05_0
- packaging=23.2=pyhd8ed1ab_0
- pip=24.0=pyhd8ed1ab_0
- pluggy=1.0.0=py37h89c1867_3
- pytest=7.4.4=pyhd8ed1ab_0
- python=3.7.12=hf930737_100_cpython
- python-dateutil=2.9.0=pyhd8ed1ab_0
- python_abi=3.7=4_cp37m
- readline=8.2=h8c095d6_2
- setuptools=69.0.3=pyhd8ed1ab_0
- six=1.16.0=pyh6c4a22f_0
- sqlite=3.49.1=h9eae976_2
- tk=8.6.13=noxft_h4845f30_101
- tomli=2.0.2=pyhd8ed1ab_0
- typing_extensions=4.7.1=pyha770c72_0
- wheel=0.42.0=pyhd8ed1ab_0
- xz=5.6.4=hbcc6ac9_0
- xz-gpl-tools=5.6.4=hbcc6ac9_0
- xz-tools=5.6.4=hb9d3cd8_0
- zipp=3.15.0=pyhd8ed1ab_0
- pip:
- black==23.3.0
- cfgv==3.3.1
- click==8.1.8
- distlib==0.3.9
- filelock==3.12.2
- flake8==5.0.4
- identify==2.5.24
- importlib-metadata==4.2.0
- mccabe==0.7.0
- mypy-extensions==1.0.0
- nodeenv==1.9.1
- paramtools==0.0.0
- pathspec==0.11.2
- platformdirs==2.6.2
- pre-commit==2.21.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pyyaml==6.0.1
- typed-ast==1.5.5
- virtualenv==20.16.2
prefix: /opt/conda/envs/ParamTools
| [
"paramtools/tests/test_parameters.py::TestArrayFirst::test_array_first_with_zero_dim"
] | [] | [
"paramtools/tests/test_parameters.py::test_init",
"paramtools/tests/test_parameters.py::TestSchema::test_empty_schema",
"paramtools/tests/test_parameters.py::TestSchema::test_schema_just_labels",
"paramtools/tests/test_parameters.py::TestSchema::test_schema_just_additional_members",
"paramtools/tests/test_parameters.py::TestSchema::test_schema_not_dropped",
"paramtools/tests/test_parameters.py::TestSchema::test_schema_with_errors",
"paramtools/tests/test_parameters.py::TestSchema::test_actions_spec",
"paramtools/tests/test_parameters.py::TestAccess::test_specification",
"paramtools/tests/test_parameters.py::TestAccess::test_is_ordered",
"paramtools/tests/test_parameters.py::TestAccess::test_specification_query",
"paramtools/tests/test_parameters.py::TestAccess::test_serializable",
"paramtools/tests/test_parameters.py::TestAccess::test_dump",
"paramtools/tests/test_parameters.py::TestAccess::test_dump_with_labels",
"paramtools/tests/test_parameters.py::TestAccess::test_iterable",
"paramtools/tests/test_parameters.py::TestAdjust::test_adjust_int_param",
"paramtools/tests/test_parameters.py::TestAdjust::test_simultaneous_adjust",
"paramtools/tests/test_parameters.py::TestAdjust::test_adjust_many_labels",
"paramtools/tests/test_parameters.py::TestAdjust::test_adjust_none_basic",
"paramtools/tests/test_parameters.py::TestAdjust::test_adjust_none_many_values",
"paramtools/tests/test_parameters.py::TestErrors::test_errors_attribute",
"paramtools/tests/test_parameters.py::TestErrors::test_errors",
"paramtools/tests/test_parameters.py::TestErrors::test_errors_choice_param",
"paramtools/tests/test_parameters.py::TestErrors::test_errors_default_reference_param",
"paramtools/tests/test_parameters.py::TestErrors::test_errors_int_param",
"paramtools/tests/test_parameters.py::TestErrors::test_errors_multiple_params",
"paramtools/tests/test_parameters.py::TestErrors::test_list_type_errors",
"paramtools/tests/test_parameters.py::TestErrors::test_range_validation_on_list_param",
"paramtools/tests/test_parameters.py::TestArray::test_to_array",
"paramtools/tests/test_parameters.py::TestArray::test_from_array",
"paramtools/tests/test_parameters.py::TestArray::test_resolve_order",
"paramtools/tests/test_parameters.py::TestArray::test_to_array_with_state1",
"paramtools/tests/test_parameters.py::TestArray::test_to_array_with_state2",
"paramtools/tests/test_parameters.py::TestState::test_basic_set_state",
"paramtools/tests/test_parameters.py::TestState::test_label_grid",
"paramtools/tests/test_parameters.py::TestState::test_set_state_updates_values",
"paramtools/tests/test_parameters.py::TestState::test_set_state_errors",
"paramtools/tests/test_parameters.py::TestState::test_state_with_list",
"paramtools/tests/test_parameters.py::TestArrayFirst::test_basic",
"paramtools/tests/test_parameters.py::TestArrayFirst::test_from_array",
"paramtools/tests/test_parameters.py::TestArrayFirst::test_to_array_with_nd_lists",
"paramtools/tests/test_parameters.py::TestCollisions::test_collision_list",
"paramtools/tests/test_parameters.py::TestCollisions::test_collision",
"paramtools/tests/test_parameters.py::TestExtend::test_extend_num",
"paramtools/tests/test_parameters.py::TestExtend::test_extend_categorical",
"paramtools/tests/test_parameters.py::TestExtend::test_extend_w_array",
"paramtools/tests/test_parameters.py::TestExtend::test_extend_adj",
"paramtools/tests/test_parameters.py::TestExtend::test_extend_adj_w_errors",
"paramtools/tests/test_parameters.py::TestExtend::test_extend_adj_nonextend_param",
"paramtools/tests/test_parameters.py::TestExtend::test_extend_adj_w_set_state",
"paramtools/tests/test_parameters.py::TestExtend::test_custom_extend",
"paramtools/tests/test_parameters.py::TestIndex::test_index_simple",
"paramtools/tests/test_parameters.py::TestIndex::test_related_param_errors"
] | [] | MIT License | 5,427 | 258 | [
"paramtools/parameters.py"
] |
|
nipy__heudiconv-376 | fe0f2c89b96f6ee7f63d44ed3b19f004ba977c56 | 2019-09-20 04:25:31 | cbacd82f0576a71ccb9dd7a59e995a8db4080253 | diff --git a/heudiconv/bids.py b/heudiconv/bids.py
index 1bbfcf6..9569278 100644
--- a/heudiconv/bids.py
+++ b/heudiconv/bids.py
@@ -240,6 +240,27 @@ def add_participant_record(studydir, subject, age, sex):
known_subjects = {l.split('\t')[0] for l in f.readlines()}
if participant_id in known_subjects:
return
+ else:
+ # Populate particpants.json (an optional file to describe column names in
+ # participant.tsv). This auto generation will make BIDS-validator happy.
+ participants_json = op.join(studydir, 'participants.json')
+ if not op.lexists(participants_json):
+ save_json(participants_json,
+ OrderedDict([
+ ("participant_id", OrderedDict([
+ ("Description", "Participant identifier")])),
+ ("age", OrderedDict([
+ ("Description", "Age in years (TODO - verify) as in the initial"
+ " session, might not be correct for other sessions")])),
+ ("sex", OrderedDict([
+ ("Description", "self-rated by participant, M for male/F for "
+ "female (TODO: verify)")])),
+ ("group", OrderedDict([
+ ("Description", "(TODO: adjust - by default everyone is in "
+ "control group)")])),
+ ]),
+ sort_keys=False,
+ indent=2)
# Add a new participant
with open(participants_tsv, 'a') as f:
f.write(
@@ -311,7 +332,8 @@ def save_scans_key(item, bids_files):
def add_rows_to_scans_keys_file(fn, newrows):
"""
- Add new rows to file fn for scans key filename
+ Add new rows to file fn for scans key filename and generate accompanying json
+ descriptor to make BIDS validator happy.
Parameters
----------
@@ -334,6 +356,25 @@ def add_rows_to_scans_keys_file(fn, newrows):
os.unlink(fn)
else:
fnames2info = newrows
+ # Populate _scans.json (an optional file to describe column names in
+ # _scans.tsv). This auto generation will make BIDS-validator happy.
+ scans_json = '.'.join(fn.split('.')[:-1] + ['json'])
+ if not op.lexists(scans_json):
+ save_json(scans_json,
+ OrderedDict([
+ ("filename", OrderedDict([
+ ("Description", "Name of the nifti file")])),
+ ("acq_time", OrderedDict([
+ ("LongName", "Acquisition time"),
+ ("Description", "Acquisition time of the particular scan")])),
+ ("operator", OrderedDict([
+ ("Description", "Name of the operator")])),
+ ("randstr", OrderedDict([
+ ("LongName", "Random string"),
+ ("Description", "md5 hash of UIDs")])),
+ ]),
+ sort_keys=False,
+ indent=2)
header = ['filename', 'acq_time', 'operator', 'randstr']
# prepare all the data rows
| Generate participants.json to accompany .tsv where fields are described
ATM we are generating `participants.tsv` with columns age, sex, group. bids-validator then whines
```
./participants.tsv
Evidence: Columns: age, sex, group not defined, please define in: /participants.json
```
We should then create/define something like
```json
{
"participant_id": {
"Description": "participant id"
},
"age": {
"Description": "Age in years (TODO - verify) as in the initial session, might not be correct for other sessions"
},
"sex": {
"Description": "self-rated by participant, M for male/F for female (TODO: verify)"
},
"group": {
"Description": "(TODO: adjust - by default everyone is in control group)"
}
}
``` | nipy/heudiconv | diff --git a/heudiconv/tests/test_heuristics.py b/heudiconv/tests/test_heuristics.py
index c9485c6..f36bbb4 100644
--- a/heudiconv/tests/test_heuristics.py
+++ b/heudiconv/tests/test_heuristics.py
@@ -165,7 +165,13 @@ def test_notop(tmpdir, bidsoptions):
runner(args)
assert op.exists(pjoin(tmppath, 'Halchenko/Yarik/950_bids_test4'))
- for fname in ['CHANGES', 'dataset_description.json', 'participants.tsv', 'README']:
+ for fname in [
+ 'CHANGES',
+ 'dataset_description.json',
+ 'participants.tsv',
+ 'README',
+ 'participants.json'
+ ]:
if 'notop' in bidsoptions:
assert not op.exists(pjoin(tmppath, 'Halchenko/Yarik/950_bids_test4', fname))
else:
diff --git a/heudiconv/tests/test_main.py b/heudiconv/tests/test_main.py
index 7dcd32e..f4b7700 100644
--- a/heudiconv/tests/test_main.py
+++ b/heudiconv/tests/test_main.py
@@ -22,6 +22,7 @@ from mock import patch
from os.path import join as opj
from six.moves import StringIO
import stat
+import os.path as op
@patch('sys.stdout', new_callable=StringIO)
@@ -205,6 +206,7 @@ def test_add_rows_to_scans_keys_file(tmpdir):
assert dates == sorted(dates)
_check_rows(fn, rows)
+ assert op.exists(opj(tmpdir.strpath, 'file.json'))
# add a new one
extra_rows = {
'a_new_file.nii.gz': ['2016adsfasd23', '', 'fasadfasdf'],
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": null,
"pre_install": [
"apt-get update",
"apt-get install -y gcc dcm2niix"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | annexremote==1.6.6
appdirs==1.4.4
attrs==22.2.0
boto==2.49.0
certifi==2021.5.30
cffi==1.15.1
chardet==5.0.0
charset-normalizer==2.0.12
ci-info==0.2.0
click==8.0.4
cryptography==40.0.2
datalad==0.15.6
dcmstack==0.9.0
decorator==4.4.2
Deprecated==1.2.18
etelemetry==0.2.2
fasteners==0.19
filelock==3.4.1
-e git+https://github.com/nipy/heudiconv.git@fe0f2c89b96f6ee7f63d44ed3b19f004ba977c56#egg=heudiconv
humanize==3.14.0
idna==3.10
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
inotify==0.2.10
iso8601==1.1.0
isodate==0.6.1
jeepney==0.7.1
keyring==23.4.1
keyrings.alt==4.1.0
lxml==5.3.1
mock==5.2.0
msgpack==1.0.5
networkx==2.5.1
nibabel==3.2.2
nipype==1.7.1
nose==1.3.7
numpy==1.19.5
packaging==21.3
pathlib==1.0.1
patool==1.12
pluggy==1.0.0
prov==2.0.1
py==1.11.0
pycparser==2.21
pydicom==2.3.1
pydot==1.4.2
PyGithub==1.56
PyJWT==2.4.0
PyNaCl==1.5.0
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.9.0.post0
python-gitlab==2.10.1
rdflib==5.0.0
requests==2.27.1
requests-toolbelt==1.0.0
scipy==1.5.4
SecretStorage==3.3.3
simplejson==3.20.1
six==1.17.0
swebench-matterhorn @ file:///swebench_matterhorn
tinydb==4.7.0
tomli==1.2.3
tqdm==4.64.1
traits==6.4.1
typing_extensions==4.1.1
urllib3==1.26.20
Whoosh==2.7.4
wrapt==1.16.0
zipp==3.6.0
| name: heudiconv
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- annexremote==1.6.6
- appdirs==1.4.4
- attrs==22.2.0
- boto==2.49.0
- cffi==1.15.1
- chardet==5.0.0
- charset-normalizer==2.0.12
- ci-info==0.2.0
- click==8.0.4
- cryptography==40.0.2
- datalad==0.15.6
- dcmstack==0.9.0
- decorator==4.4.2
- deprecated==1.2.18
- etelemetry==0.2.2
- fasteners==0.19
- filelock==3.4.1
- humanize==3.14.0
- idna==3.10
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- inotify==0.2.10
- iso8601==1.1.0
- isodate==0.6.1
- jeepney==0.7.1
- keyring==23.4.1
- keyrings-alt==4.1.0
- lxml==5.3.1
- mock==5.2.0
- msgpack==1.0.5
- networkx==2.5.1
- nibabel==3.2.2
- nipype==1.7.1
- nose==1.3.7
- numpy==1.19.5
- packaging==21.3
- pathlib==1.0.1
- patool==1.12
- pluggy==1.0.0
- prov==2.0.1
- py==1.11.0
- pycparser==2.21
- pydicom==2.3.1
- pydot==1.4.2
- pygithub==1.56
- pyjwt==2.4.0
- pynacl==1.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- python-gitlab==2.10.1
- rdflib==5.0.0
- requests==2.27.1
- requests-toolbelt==1.0.0
- scipy==1.5.4
- secretstorage==3.3.3
- simplejson==3.20.1
- six==1.17.0
- swebench-matterhorn==0.0.0
- tinydb==4.7.0
- tomli==1.2.3
- tqdm==4.64.1
- traits==6.4.1
- typing-extensions==4.1.1
- urllib3==1.26.20
- whoosh==2.7.4
- wrapt==1.16.0
- zipp==3.6.0
prefix: /opt/conda/envs/heudiconv
| [
"heudiconv/tests/test_heuristics.py::test_notop[bidsoptions1]",
"heudiconv/tests/test_main.py::test_add_rows_to_scans_keys_file"
] | [
"heudiconv/tests/test_heuristics.py::test_reproin_largely_smoke[--files",
"heudiconv/tests/test_heuristics.py::test_reproin_largely_smoke[-d",
"heudiconv/tests/test_main.py::test_prepare_for_datalad"
] | [
"heudiconv/tests/test_heuristics.py::test_smoke_convertall",
"heudiconv/tests/test_heuristics.py::test_scans_keys_reproin[--files",
"heudiconv/tests/test_heuristics.py::test_ls",
"heudiconv/tests/test_heuristics.py::test_scout_conversion",
"heudiconv/tests/test_heuristics.py::test_notop[bidsoptions0]",
"heudiconv/tests/test_main.py::test_main_help",
"heudiconv/tests/test_main.py::test_main_version",
"heudiconv/tests/test_main.py::test_create_file_if_missing",
"heudiconv/tests/test_main.py::test_populate_bids_templates",
"heudiconv/tests/test_main.py::test_add_participant_record",
"heudiconv/tests/test_main.py::test_get_formatted_scans_key_row",
"heudiconv/tests/test_main.py::test__find_subj_ses",
"heudiconv/tests/test_main.py::test_make_readonly",
"heudiconv/tests/test_main.py::test_cache"
] | [] | Apache License 2.0 | 5,435 | 721 | [
"heudiconv/bids.py"
] |
|
cisagov__ioc-scanner-5 | e6bfb28c3ed13c4c9df9b93854fb3e6bd0454bf0 | 2019-09-20 15:56:41 | 18ec49d5d06505700ca6d706e9757d02aa404f8f | diff --git a/src/ioc_scan/ioc_scanner.py b/src/ioc_scan/ioc_scanner.py
index 5d98b02..94cd952 100755
--- a/src/ioc_scan/ioc_scanner.py
+++ b/src/ioc_scan/ioc_scanner.py
@@ -11,11 +11,10 @@ This script should be run as a priveledged user.
from collections import defaultdict
from datetime import datetime
+import hashlib
import logging
-import platform
+import os
import re
-import subprocess # nosec
-from string import Template
import sys
# just paste the text that has the indicators into BLOB.
@@ -35,6 +34,11 @@ MDS: e803916dd56996d7dald4013d71d05dd
MDS: 2a2410cef5497cbd3f6c13eaff9619da
MDS: 3e7eb6abcce304de0822a618de756fd2
MDS: 350cba65e28c723cbf0724c19bd7ee69
+SHA256: b509f8545501588ecd828f970d91afc7c4aa6e238e838bd6a08ee2cd920fbe98
+SHA-1: 31B54AEBDAF5FBC73A66AC41CCB35943CC9B7F72
+SHA-1: 50973A3FC57D70C7911F7A952356188B9939E56B
+SHA-1: 244EB62B9AC30934098CA4204447440D6FC4E259
+SHA-1: 5C8F83CC4FF57E7C67925DF4D9DAABE5D0CC07E2
few things that should hit:
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin18)
0313fd399b143fc40cd52a1679018305
@@ -44,11 +48,30 @@ EICAR test file
69630e4574ec6798239b091cda43dca0
"""
-MD5_RE = r"([a-fA-F\d]{32})"
-COMMANDS = {
- "Linux": Template(r"find $root -xdev -type f -exec md5sum {} \;"),
- "Darwin": Template(r"find $root -xdev -type f -exec md5 -r {} \;"),
-}
+# use word boundaries ('\b') to bracket the specific hash lengths
+MD5_RE = r"\b([a-fA-F\d]{32})\b"
+SHA1_RE = r"\b([a-fA-F\d]{40})\b"
+SHA256_RE = r"\b([a-fA-F\d]{64})\b"
+
+
+def hash_file(file):
+ """Generate MD5, SHA1, and SHA256 hashes for a given file."""
+ hash_md5 = hashlib.md5() # nosec
+ hash_sha1 = hashlib.sha1() # nosec
+ hash_sha256 = hashlib.sha256()
+
+ # try except to eat filesystem errors like Permission Denied etc
+ try:
+ with open(file, "rb") as f:
+ # read it in chunks so memory use isn't outlandish
+ for chunk in iter(lambda: f.read(4096), b""):
+ hash_md5.update(chunk)
+ hash_sha1.update(chunk)
+ hash_sha256.update(chunk)
+ except OSError:
+ pass
+
+ return (hash_md5.hexdigest(), hash_sha1.hexdigest(), hash_sha256.hexdigest())
def main(blob=None, root="/"):
@@ -60,39 +83,48 @@ def main(blob=None, root="/"):
blob = BLOB
# get a list of all the md5 hashes from some inconsiderate source.
- indicators = re.findall(MD5_RE, blob.lower())
+ indicators_md5 = re.findall(MD5_RE, blob.lower())
+ indicators_sha1 = re.findall(SHA1_RE, blob.lower())
+ indicators_sha256 = re.findall(SHA256_RE, blob.lower())
+ indicators = indicators_md5 + indicators_sha1 + indicators_sha256
+
logging.debug(f"Scan will search for {len(indicators)} indicators")
# compile a regular expression to search for all indicators
indicators_re = re.compile("|".join(indicators))
- # choose the correct command based on the platform, and apply root to template
- command = COMMANDS.get(platform.system()).substitute(root=root)
- logging.debug(f"Scan command: {command}")
-
# start hashing files
logging.debug(f"Starting scan with root: {root}")
- p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE) # nosec
- logging.debug("Scan completed")
+ # store an array of ioc hits
+ ioc_list = []
# keep a tally of the hits
tallies = defaultdict(lambda: 0)
+ # walk the filesystem starting at root
+ for rootdir, subdirs, files in os.walk(root):
+ # find -xdev equivalent
+ subdirs[:] = [
+ d for d in subdirs if not os.path.ismount(os.path.join(rootdir, d))
+ ]
+
+ # check each file in the current directory
+ for file in [os.path.join(rootdir, f) for f in files]:
+ # get hashes for the current file
+ hashes = hash_file(file)
+
+ for hash in hashes:
+ matches = indicators_re.findall(hash)
+
+ # tally it up and report if we get a hit
+ if matches:
+ ioc_list.append(f"{hash} {file}")
+ tallies[hash] += 1
- for line in p.stdout:
- line = line.decode("utf-8")
- # a line looks like this:
- # 0313fd399b143fc40cd52a1679018305 /bin/bash
-
- # save just the hash
- file_hash = line.split()[0]
-
- # check the line for matches
- matches = indicators_re.findall(file_hash)
+ logging.debug("Scan completed")
- # tally it up and report if we get a hit
- if matches:
- print(line)
- tallies[matches[0]] += 1
+ # print all indicators that were found
+ for ioc in ioc_list:
+ print(ioc)
# stop the clock
end_time = datetime.utcnow()
| Support hashes besides MD5
# 🚀 Feature Proposal
Add the ability to scan for other hash types besides MD5, such as SHA-1 and SHA-256.
## Motivation
When we are asked to scan for Indicators of Compromise (IOCs), we occasionally are given SHA-1 and SHA-256 hashes, in addition to MD5 hashes.
## Example
Sample IOC hashes:
```
SHA256: b509f8545501588ecd828f970d91afc7c4aa6e238e838bd6a08ee2cd920fbe98
SHA-1: 31B54AEBDAF5FBC73A66AC41CCB35943CC9B7F72
SHA-1: 50973A3FC57D70C7911F7A952356188B9939E56B
SHA-1: 244EB62B9AC30934098CA4204447440D6FC4E259
SHA-1: 5C8F83CC4FF57E7C67925DF4D9DAABE5D0CC07E2
```
## Pitch
It will give us more comprehensive scanning capabilities.
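A multi-hash scan along these lines can be sketched in plain Python with `hashlib` — this is an illustrative sketch, not the project's shipped implementation; the function name and chunk size are arbitrary choices:

```python
import hashlib
import os
import tempfile

def hash_file(path, chunk_size=4096):
    """Compute MD5, SHA-1, and SHA-256 digests of a file in a single pass."""
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    with open(path, "rb") as f:
        # read in chunks so memory use stays bounded for large files
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha1.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha1.hexdigest(), sha256.hexdigest()

# demo on a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"abc")
    demo_path = tmp.name
digests = hash_file(demo_path)
os.remove(demo_path)
print(digests)
```

Feeding every file through all three hashers in one pass means each candidate file is only read once, regardless of how many hash types the indicator list mixes.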
| cisagov/ioc-scanner | diff --git a/tests/test_ioc_scan.py b/tests/test_ioc_scan.py
index 6ab4b37..33437e3 100644
--- a/tests/test_ioc_scan.py
+++ b/tests/test_ioc_scan.py
@@ -10,6 +10,7 @@ import pytest
import ioc_scan
from ioc_scan import ioc_scan_cli
+from ioc_scan import ioc_scanner
log_levels = (
@@ -50,6 +51,27 @@ def test_log_levels(level):
assert return_code == 0, "main() should return success (0)"
+def test_hash_file_hashing():
+ """Test that hashes are being generated correctly."""
+ hashes = ioc_scanner.hash_file("tests/targets/eicar.txt")
+ assert hashes[0] == "69630e4574ec6798239b091cda43dca0"
+ assert hashes[1] == "cf8bd9dfddff007f75adf4c2be48005cea317c62"
+ assert (
+ hashes[2] == "131f95c51cc819465fa1797f6ccacf9d494aaaff46fa3eac73ae63ffbdfd8267"
+ )
+
+
+def test_hash_file_except():
+ """Test that hash_file() passes when an OSError exception is raised."""
+ hashes = ioc_scanner.hash_file("tests/targets/doesnotexist.txt")
+ # values for hashes of nothing
+ assert hashes[0] == "d41d8cd98f00b204e9800998ecf8427e"
+ assert hashes[1] == "da39a3ee5e6b4b0d3255bfef95601890afd80709"
+ assert (
+ hashes[2] == "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
+ )
+
+
def test_scan_file(capsys):
"""Test running the scanner with an input target file."""
with patch.object(
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 1
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
cfgv==3.3.1
charset-normalizer==3.4.1
coverage==6.5.0
coveralls==3.3.1
distlib==0.3.9
docopt==0.6.2
exceptiongroup==1.2.2
filelock==3.12.2
identify==2.5.24
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
-e git+https://github.com/cisagov/ioc-scanner.git@e6bfb28c3ed13c4c9df9b93854fb3e6bd0454bf0#egg=ioc_scan
nodeenv==1.9.1
packaging==24.0
platformdirs==4.0.0
pluggy==1.2.0
pre-commit==2.21.0
pytest==7.4.4
pytest-cov==4.1.0
PyYAML==6.0.1
requests==2.31.0
tomli==2.0.1
typing_extensions==4.7.1
urllib3==2.0.7
virtualenv==20.26.6
zipp==3.15.0
| name: ioc-scanner
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- cfgv==3.3.1
- charset-normalizer==3.4.1
- coverage==6.5.0
- coveralls==3.3.1
- distlib==0.3.9
- docopt==0.6.2
- exceptiongroup==1.2.2
- filelock==3.12.2
- identify==2.5.24
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- nodeenv==1.9.1
- packaging==24.0
- platformdirs==4.0.0
- pluggy==1.2.0
- pre-commit==2.21.0
- pytest==7.4.4
- pytest-cov==4.1.0
- pyyaml==6.0.1
- requests==2.31.0
- tomli==2.0.1
- typing-extensions==4.7.1
- urllib3==2.0.7
- virtualenv==20.26.6
- zipp==3.15.0
prefix: /opt/conda/envs/ioc-scanner
| [
"tests/test_ioc_scan.py::test_hash_file_hashing",
"tests/test_ioc_scan.py::test_hash_file_except"
] | [] | [
"tests/test_ioc_scan.py::test_version",
"tests/test_ioc_scan.py::test_log_levels[debug]",
"tests/test_ioc_scan.py::test_log_levels[info]",
"tests/test_ioc_scan.py::test_log_levels[warning]",
"tests/test_ioc_scan.py::test_log_levels[error]",
"tests/test_ioc_scan.py::test_log_levels[critical]",
"tests/test_ioc_scan.py::test_scan_file",
"tests/test_ioc_scan.py::test_scan_stdin"
] | [] | Creative Commons Zero v1.0 Universal | 5,437 | 1,682 | [
"src/ioc_scan/ioc_scanner.py"
] |
|
twilio__twilio-python-477 | 77b44a5a18395bf09ae114d24ac9c4b3efb67e78 | 2019-09-20 17:49:59 | fbb516b7431dd2f9393579edea840a94581ac72a | diff --git a/twilio/request_validator.py b/twilio/request_validator.py
index 72cdb6298..c24a52acd 100644
--- a/twilio/request_validator.py
+++ b/twilio/request_validator.py
@@ -21,19 +21,42 @@ def compare(string1, string2):
result = True
for c1, c2 in izip(string1, string2):
result &= c1 == c2
+
return result
def remove_port(uri):
"""Remove the port number from a URI
- :param uri: full URI that Twilio requested on your server
+ :param uri: parsed URI that Twilio requested on your server
:returns: full URI without a port number
:rtype: str
"""
+ if not uri.port:
+ return uri.geturl()
+
new_netloc = uri.netloc.split(':')[0]
new_uri = uri._replace(netloc=new_netloc)
+
+ return new_uri.geturl()
+
+
+def add_port(uri):
+ """Add the port number to a URI
+
+ :param uri: parsed URI that Twilio requested on your server
+
+ :returns: full URI with a port number
+ :rtype: str
+ """
+ if uri.port:
+ return uri.geturl()
+
+ port = 443 if uri.scheme == "https" else 80
+ new_netloc = uri.netloc + ":" + str(port)
+ new_uri = uri._replace(netloc=new_netloc)
+
return new_uri.geturl()
@@ -82,17 +105,21 @@ class RequestValidator(object):
params = {}
parsed_uri = urlparse(uri)
- if parsed_uri.scheme == "https" and parsed_uri.port:
- uri = remove_port(parsed_uri)
+ uri_with_port = add_port(parsed_uri)
+ uri_without_port = remove_port(parsed_uri)
valid_signature = False # Default fail
+ valid_signature_with_port = False
valid_body_hash = True # May not receive body hash, so default succeed
query = parse_qs(parsed_uri.query)
if "bodySHA256" in query and isinstance(params, string_types):
valid_body_hash = compare(self.compute_hash(params), query["bodySHA256"][0])
- valid_signature = compare(self.compute_signature(uri, {}), signature)
- else:
- valid_signature = compare(self.compute_signature(uri, params), signature)
+ params = {}
+
+ # check signature of uri with and without port,
+ # since sig generation on back end is inconsistent
+ valid_signature = compare(self.compute_signature(uri_without_port, params), signature)
+ valid_signature_with_port = compare(self.compute_signature(uri_with_port, params), signature)
- return valid_signature and valid_body_hash
+ return valid_body_hash and (valid_signature or valid_signature_with_port)
| Incoming SMS webhook failing validation (non standard SSL port)
Hi,
I have an incoming SMS webhook using HTTPS but on a non-standard port (4433). The incoming POST fails validation unless the changes made in #392 are reverted (i.e. the port number left in for validation). The [security notes](https://www.twilio.com/docs/api/security#notes) don't mention non-standard ports. Should only the standard port be removed if using HTTPS?
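For context, Twilio-style signatures are an HMAC-SHA1 over the full request URL (plus sorted POST parameters), so any change to the URL — including stripping an explicit port — produces a different signature. A minimal sketch under that assumption (the token and URLs below are placeholders, not real credentials):

```python
import base64
import hashlib
import hmac

def compute_signature(auth_token, uri, params=None):
    # Twilio-style: URI concatenated with sorted POST params, HMAC-SHA1, base64
    data = uri + "".join(k + v for k, v in sorted((params or {}).items()))
    digest = hmac.new(auth_token.encode(), data.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

token = "not-a-real-token"  # placeholder auth token
with_port = compute_signature(token, "https://example.com:4433/webhook")
without_port = compute_signature(token, "https://example.com/webhook")
# The two differ, so validating against a port-stripped URI
# cannot match a signature computed over the URI with its port.
print(with_port != without_port)
```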
https://github.com/twilio/twilio-python/blob/119fe50863b2e9df89a56711e8d3410288709593/twilio/request_validator.py#L77 | twilio/twilio-python | diff --git a/tests/unit/test_request_validator.py b/tests/unit/test_request_validator.py
index c7f67d818..245c5ae43 100644
--- a/tests/unit/test_request_validator.py
+++ b/tests/unit/test_request_validator.py
@@ -53,6 +53,20 @@ class ValidationTest(unittest.TestCase):
uri = self.uri.replace(".com", ".com:1234")
assert_true(self.validator.validate(uri, self.params, self.expected))
+ def test_validation_removes_port_on_http(self):
+ expected = "Zmvh+3yNM1Phv2jhDCwEM3q5ebU=" # hash of http uri with port 1234
+ uri = self.uri.replace(".com", ".com:1234").replace("https", "http")
+ assert_true(self.validator.validate(uri, self.params, expected))
+
+ def test_validation_adds_port_on_https(self):
+ expected = "kvajT1Ptam85bY51eRf/AJRuM3w=" # hash of uri with port 443
+ assert_true(self.validator.validate(self.uri, self.params, expected))
+
+ def test_validation_adds_port_on_http(self):
+ uri = self.uri.replace("https", "http")
+ expected = "0ZXoZLH/DfblKGATFgpif+LLRf4=" # hash of uri with port 80
+ assert_true(self.validator.validate(uri, self.params, expected))
+
def test_validation_of_body_succeeds(self):
uri = self.uriWithBody
is_valid = self.validator.validate(uri, self.body, "a9nBmqA0ju/hNViExpshrM61xv4=")
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_issue_reference"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 1
} | 6.31 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"nose",
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
colorama==0.4.5
cryptography==40.0.2
docutils==0.18.1
flake8==5.0.4
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.4.0
iniconfig==1.1.1
jeepney==0.7.1
keyring==23.4.1
mccabe==0.7.0
mock==5.2.0
nose==1.3.7
packaging==21.3
pkginfo==1.10.0
pluggy==1.0.0
py==1.11.0
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
Pygments==2.14.0
PyJWT==2.4.0
pyparsing==3.1.4
PySocks==1.7.1
pytest==7.0.1
pytz==2025.2
readme-renderer==34.0
requests==2.27.1
requests-toolbelt==1.0.0
rfc3986==1.5.0
SecretStorage==3.3.3
six==1.17.0
SocksiPy-branch==1.1
tomli==1.2.3
tqdm==4.64.1
-e git+https://github.com/twilio/twilio-python.git@77b44a5a18395bf09ae114d24ac9c4b3efb67e78#egg=twilio
twine==3.8.0
typing_extensions==4.1.1
urllib3==1.26.20
webencodings==0.5.1
zipp==3.6.0
| name: twilio-python
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- bleach==4.1.0
- cffi==1.15.1
- charset-normalizer==2.0.12
- colorama==0.4.5
- cryptography==40.0.2
- docutils==0.18.1
- flake8==5.0.4
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.4.0
- iniconfig==1.1.1
- jeepney==0.7.1
- keyring==23.4.1
- mccabe==0.7.0
- mock==5.2.0
- nose==1.3.7
- packaging==21.3
- pkginfo==1.10.0
- pluggy==1.0.0
- py==1.11.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- pygments==2.14.0
- pyjwt==2.4.0
- pyparsing==3.1.4
- pysocks==1.7.1
- pytest==7.0.1
- pytz==2025.2
- readme-renderer==34.0
- requests==2.27.1
- requests-toolbelt==1.0.0
- rfc3986==1.5.0
- secretstorage==3.3.3
- six==1.17.0
- socksipy-branch==1.1
- tomli==1.2.3
- tqdm==4.64.1
- twine==3.8.0
- typing-extensions==4.1.1
- urllib3==1.26.20
- webencodings==0.5.1
- zipp==3.6.0
prefix: /opt/conda/envs/twilio-python
| [
"tests/unit/test_request_validator.py::ValidationTest::test_validation_adds_port_on_http",
"tests/unit/test_request_validator.py::ValidationTest::test_validation_adds_port_on_https"
] | [] | [
"tests/unit/test_request_validator.py::ValidationTest::test_compute_hash_unicode",
"tests/unit/test_request_validator.py::ValidationTest::test_compute_signature",
"tests/unit/test_request_validator.py::ValidationTest::test_compute_signature_bytecode",
"tests/unit/test_request_validator.py::ValidationTest::test_validation",
"tests/unit/test_request_validator.py::ValidationTest::test_validation_of_body_succeeds",
"tests/unit/test_request_validator.py::ValidationTest::test_validation_removes_port_on_http",
"tests/unit/test_request_validator.py::ValidationTest::test_validation_removes_port_on_https"
] | [] | MIT License | 5,438 | 660 | [
"twilio/request_validator.py"
] |
|
pddg__uroboros-34 | 45d0b03c86a75d96237cd885d37b8715bf5cfdfc | 2019-09-21 03:15:42 | 73af69c558b7da184f561ded73d05d14095d44ce | diff --git a/uroboros/command.py b/uroboros/command.py
index 1f15f8b..b32f61c 100644
--- a/uroboros/command.py
+++ b/uroboros/command.py
@@ -121,7 +121,7 @@ class Command(metaclass=abc.ABCMeta):
name=cmd.name,
description=cmd.long_description,
help=cmd.short_description,
- parents=[o.get_parser() for o in cmd.options],
+ parents=[o.get_parser() for o in cmd.get_options()],
)
cmd.initialize(sub_parser)
@@ -129,7 +129,7 @@ class Command(metaclass=abc.ABCMeta):
parser = argparse.ArgumentParser(
prog=self.name,
description=self.long_description,
- parents=[o.get_parser() for o in self.options]
+ parents=[o.get_parser() for o in self.get_options()]
)
parser.set_defaults(func=self.run)
return parser
| `Command.get_options()` does not work well
### Expected
```python
class SomeOption(uroboros.Option):
def build_option(self, parser):
parser.add_argument('--opt', type=str)
return parser
class HelloCommand(uroboros.Command):
def get_options(self):
return [SomeOption()]
HelloCommand().execute()
```
Now, `--opt` can be used.
### Actual
`--opt` does not appear in the help message, and cannot be parsed.
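A minimal, self-contained sketch of the fixed behavior. The `Option`/`Command` classes below are simplified stand-ins (not the real uroboros API) showing why the parser must be built from `get_options()` rather than the raw `options` attribute, so that subclass overrides take effect:

```python
import argparse

class Option:
    def get_parser(self):
        # parent parsers must disable -h/--help to avoid conflicts
        parser = argparse.ArgumentParser(add_help=False)
        return self.build_option(parser)

class SomeOption(Option):
    def build_option(self, parser):
        parser.add_argument("--opt", type=str)
        return parser

class Command:
    options = []

    def get_options(self):
        return list(self.options)

    def create_default_parser(self):
        # after the fix: consult get_options(), which subclasses may override
        return argparse.ArgumentParser(
            parents=[o.get_parser() for o in self.get_options()]
        )

class HelloCommand(Command):
    def get_options(self):
        return [SomeOption()]

args = HelloCommand().create_default_parser().parse_args(["--opt", "hello"])
print(args.opt)  # hello
```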
| pddg/uroboros | diff --git a/tests/test_command.py b/tests/test_command.py
index b80b64f..a7eb5f0 100644
--- a/tests/test_command.py
+++ b/tests/test_command.py
@@ -4,6 +4,7 @@ from unittest import mock
import pytest
+from uroboros import Option
from uroboros.errors import CommandDuplicateError
from .base import RootCommand, SecondCommand, ThirdCommand
@@ -318,3 +319,22 @@ class TestCommand(object):
for cmd in commands:
key = "after_validate_{}".format(cmd.name)
assert getattr(args, key, None) == cmd.value
+
+ def test_create_default_parser(self):
+ class Opt(Option):
+ def build_option(self, parser):
+ parser.add_argument("--test", type=str)
+ return parser
+
+ class Cmd(RootCommand):
+ options = [Opt()]
+
+ class Cmd2(RootCommand):
+ def get_options(self):
+ return [Opt()]
+
+ argv = ["--test", "test"]
+ for cmd in [Cmd(), Cmd2()]:
+ cmd_parser = cmd.create_default_parser()
+ args = cmd_parser.parse_args(argv)
+ assert args.test == 'test'
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 0.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"flake8"
],
"pre_install": null,
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///croot/attrs_1668696182826/work
certifi @ file:///croot/certifi_1671487769961/work/certifi
flake8==5.0.4
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
mccabe==0.7.0
packaging @ file:///croot/packaging_1671697413597/work
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pyflakes==2.5.0
pytest==7.1.2
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
-e git+https://github.com/pddg/uroboros.git@45d0b03c86a75d96237cd885d37b8715bf5cfdfc#egg=uroboros
zipp @ file:///croot/zipp_1672387121353/work
| name: uroboros
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- flake8==5.0.4
- importlib-metadata==4.2.0
- mccabe==0.7.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
prefix: /opt/conda/envs/uroboros
| [
"tests/test_command.py::TestCommand::test_create_default_parser"
] | [] | [
"tests/test_command.py::TestCommand::test_build_option[command0]",
"tests/test_command.py::TestCommand::test_build_option[command1]",
"tests/test_command.py::TestCommand::test_build_option[command2]",
"tests/test_command.py::TestCommand::test_validate[command0]",
"tests/test_command.py::TestCommand::test_validate[command1]",
"tests/test_command.py::TestCommand::test_validate[command2]",
"tests/test_command.py::TestCommand::test_increment_nest[command_set0]",
"tests/test_command.py::TestCommand::test_increment_nest[command_set1]",
"tests/test_command.py::TestCommand::test_increment_nest[command_set2]",
"tests/test_command.py::TestCommand::test_register_parent[command_set0]",
"tests/test_command.py::TestCommand::test_register_parent[command_set1]",
"tests/test_command.py::TestCommand::test_register_parent[command_set2]",
"tests/test_command.py::TestCommand::test_add_command[command_set0]",
"tests/test_command.py::TestCommand::test_add_command[command_set1]",
"tests/test_command.py::TestCommand::test_add_command[command_set2]",
"tests/test_command.py::TestCommand::test_multiple_add_command[root_cmd0-add_commands0]",
"tests/test_command.py::TestCommand::test_multiple_add_command[root_cmd1-add_commands1]",
"tests/test_command.py::TestCommand::test_add_others",
"tests/test_command.py::TestCommand::test_get_all_sub_commands[command_set0]",
"tests/test_command.py::TestCommand::test_get_all_sub_commands[command_set1]",
"tests/test_command.py::TestCommand::test_get_all_sub_commands[command_set2]",
"tests/test_command.py::TestCommand::test_get_all_sub_commands[command_set3]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set0-argv0]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set1-argv1]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set2-argv2]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set3-argv3]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set4-argv4]",
"tests/test_command.py::TestCommand::test_add_duplicate_command[command_set0]",
"tests/test_command.py::TestCommand::test_add_duplicate_command[command_set1]",
"tests/test_command.py::TestCommand::test_execute[command_set0-root",
"tests/test_command.py::TestCommand::test_execute[command_set1-root",
"tests/test_command.py::TestCommand::test_execute[command_set2-root-False]",
"tests/test_command.py::TestCommand::test_execute[command_set3-root",
"tests/test_command.py::TestCommand::test_execute[command_set4-root",
"tests/test_command.py::TestCommand::test_execute[command_set5-root-False]",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set0-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set1-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set2-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set3-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set4-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set5-root",
"tests/test_command.py::TestCommand::test_before_validate",
"tests/test_command.py::TestCommand::test_pre_hook[commands0]",
"tests/test_command.py::TestCommand::test_pre_hook[commands1]",
"tests/test_command.py::TestCommand::test_pre_hook[commands2]",
"tests/test_command.py::TestCommand::test_after_validate",
"tests/test_command.py::TestCommand::test_pre_hook_validated[commands0]",
"tests/test_command.py::TestCommand::test_pre_hook_validated[commands1]",
"tests/test_command.py::TestCommand::test_pre_hook_validated[commands2]"
] | [] | Apache License 2.0 | 5,441 | 229 | [
"uroboros/command.py"
] |
|
cdown__srt-53 | 4709bccde76945e32793df125bdd228e3c15297c | 2019-09-22 05:45:30 | 4709bccde76945e32793df125bdd228e3c15297c | smacke: Looks like I broke a test. Will update the tests tomorrow and resend the PR pending philosophical discussion on whether this is actually the desired behavior for srt @cdown
nick-s-b: Hi Chris,
Here is the srt file:
[srt-failing.zip](https://github.com/cdown/srt/files/3639807/srt-failing.zip)
You can test against it. It was failing at dialog line 79, which is line 335 in the file itself.
cdown: Hmm, that instantly fails for me:
```
% srt fix-subtitle-indexing -i Red.Beard.1965.720p.BluRay.x264-\[YTS.LT\].eng.srt >/dev/null
Traceback (most recent call last):
File "/home/cdown/.pyenv/versions/3.7.4/bin/srt-fix-subtitle-indexing", line 22, in <module>
main()
File "/home/cdown/.pyenv/versions/3.7.4/bin/srt-fix-subtitle-indexing", line 13, in main
output = srt_tools.utils.compose_suggest_on_fail(args.input, strict=args.strict)
File "/home/cdown/.pyenv/versions/3.7.4/lib/python3.7/site-packages/srt_tools/utils.py", line 145, in compose_suggest_on_fail
return srt.compose(subs, strict=strict)
File "/home/cdown/.pyenv/versions/3.7.4/lib/python3.7/site-packages/srt.py", line 418, in compose
return "".join(subtitle.to_srt(strict=strict, eol=eol) for subtitle in subtitles)
File "/home/cdown/.pyenv/versions/3.7.4/lib/python3.7/site-packages/srt.py", line 418, in <genexpr>
return "".join(subtitle.to_srt(strict=strict, eol=eol) for subtitle in subtitles)
File "/home/cdown/.pyenv/versions/3.7.4/lib/python3.7/site-packages/srt.py", line 271, in sort_and_reindex
for sub_num, subtitle in enumerate(sorted(subtitles), start=start_index):
File "/home/cdown/.pyenv/versions/3.7.4/lib/python3.7/site-packages/srt.py", line 353, in parse
end=srt_timestamp_to_timedelta(raw_end),
File "/home/cdown/.pyenv/versions/3.7.4/lib/python3.7/site-packages/srt.py", line 230, in srt_timestamp_to_timedelta
hrs, mins, secs, msecs = (int(x) for x in [ts[:-10], ts[-9:-7], ts[-6:-4], ts[-3:]])
File "/home/cdown/.pyenv/versions/3.7.4/lib/python3.7/site-packages/srt.py", line 230, in <genexpr>
hrs, mins, secs, msecs = (int(x) for x in [ts[:-10], ts[-9:-7], ts[-6:-4], ts[-3:]])
ValueError: invalid literal for int() with base 10: '00:'
```
What's the output of `pip show srt`, assuming you use pip? Maybe it's super old or something.
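The `'00:'` in the traceback above comes from the old fixed-offset slicing, which assumed a msec field of exactly 3 digits; a 4-digit field shifts every slice. A quick standalone reproduction (mirroring the old slicing logic, not the current code):

```python
ts = "00:08:55,1000"  # 13 chars instead of the expected 12
# the old fixed-offset slicing assumed exactly 3 msec digits:
fields = [ts[:-10], ts[-9:-7], ts[-6:-4], ts[-3:]]
print(fields)  # ['00:', '8:', '5,', '000']
try:
    hrs, mins, secs, msecs = (int(x) for x in fields)
except ValueError as e:
    print(e)  # invalid literal for int() with base 10: '00:'
```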
cdown: Also, for `mpv`, I'm curious -- does it read `00:08:55,1000` as equivalent to `00:08:56,000`, or truncate, or what? We totally can parse 4-digit milliseconds if that's a real thing that happens in SRT files, but I've seen a lot of SRT files and just never seen a file with it before :-)
cdown: To answer my own question, mpv just uses ffmpeg (via avcodec). ffmpeg interprets it as `00:08:56,000`. Cool, that's precedent for us to do that too.
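For what it's worth, Python's `timedelta` already normalizes an overflowing field the same way ffmpeg does, so passing the raw integers through is enough -- a quick sketch:

```python
from datetime import timedelta

# 1000 ms rolls over into a full second, matching ffmpeg's reading
a = timedelta(hours=0, minutes=8, seconds=55, milliseconds=1000)
b = timedelta(hours=0, minutes=8, seconds=56)
print(a == b)  # True
print(a)       # 0:08:56
```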
nick-s-b: Chris, I believe that `mpv` increases the seconds count when it sees 1000ms. I was able to frame-by-frame step with `mpv` and it displayed the line 79 at `8:54,034` and the last time the line was on screen was at `8:56,036`. That makes sense since the movie is 24fps and if we assume it interpreted `,1000` as a full second. So `00:08:55,1000` is really `00:08:56,000`.
````
79
00:08:54,009 --> 00:08:55,1000
and proud.
````
if you have `mpv` installed, you can load up this `srt` with any video file that's longer than 9 min and see for yourself.
smacke: @cdown the reason it silently fails for subsync is because I'm swallowing ValueError in some cases where subtitles have a few spots that don't parse... basically it's my fault.
Thanks for providing comments; I think there is also another bug now that I think about it that occurs if the ms field has < 3 digits -- I would parse 12 as "12 ms" but it should be "120".
cdown: Hmm, I've both never really seen more *or* less than 3 digits in the msec field. That said, if avconv allows it (and actually doesn't just discard it), then I think it's reasonable for us to take an arbitrary number of digits in each field.
My main concern is only performance, but I'll take a look at that more soonish.
smacke: @cdown @nick-s-b my apologies, I didn't follow the discussion carefully the first time -- I missed the part about how ffmpeg parses "00:08:55,1000" as "00:08:56,000" and not "00:08:55,100". I find this extremely strange, but if ffmpeg does it, it's probably the right thing to do.
And in that case, "00:08:55,12" probably should be parsed as having 12ms and not 120ms as I posited earlier, although I'll check and see what ffmpeg does for <3 digits in order to be sure.
Bizarre!
cdown: I looked at perf and for the regex approach it's totally negligible, so this has the go ahead from that perspective. So I think all that needs doing to this PR is:
1. Make `RGX_TIMESTAMP_PARSEABLE` allow any nonzero number of digits in each column and add an anchor on the end
2. Add a test to check this particular case works properly -- for example, try and compose, then parse an SRT file with multi-digit hour/minute/second/msec (it's expected it will look different; just check the eventual timedeltas are the same after reparsing)
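A rough sketch of point 1. The regex here is simplified and the names are illustrative (the real patch's delimiter class also accepts `:` and fullwidth variants); each field takes any nonzero number of digits and the pattern is anchored at both ends:

```python
import re
from datetime import timedelta

TS = re.compile(r"^(\d+):(\d+):(\d+)[,.](\d+)$")

def parse_ts(ts):
    match = TS.match(ts)
    if match is None:
        raise ValueError("unparseable timestamp: %s" % ts)
    hrs, mins, secs, msecs = map(int, match.groups())
    # timedelta normalizes overflow, matching ffmpeg's interpretation
    return timedelta(hours=hrs, minutes=mins, seconds=secs, milliseconds=msecs)

print(parse_ts("00:08:55,1000") == parse_ts("00:08:56,000"))  # True
print(parse_ts("00:08:55,12"))  # 0:08:55.012000
```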
nick-s-b: @smacke @cdown the easiest way to test how srt will behave is to let Aegisub parse it. https://github.com/Aegisub/Aegisub
You can then click on lines and see how it interprets parameters. I've always found this to be the easiest way to see what's wrong.
smacke: @nick-s-b interestingly enough, it looks like Aegisub interprets the ms field by truncating to 2 digits, which is not what ffmpeg, mpv, or VLC do.
For Aegisub: "00:08:55,1000" means 10 milliseconds. "00:08:55,4" means 40 milliseconds, and "00:08:55,432" means 43 milliseconds.
For the others (ffmpeg, mpv, VLC), this would be 1000 milliseconds, 4 milliseconds, and 432 milliseconds, respectively.
I personally am inclined toward the way Aegisub does things, but I'm conflicted since ffmpeg/VLC are both kind of gold standards.
cdown: I'm infinitely more willing to go with whatever avconv says since it's used extremely widely than what aegisub says.
coveralls:
[](https://coveralls.io/builds/25958757)
Coverage decreased (-0.8%) to 99.206% when pulling **b75b18c8c64ffaa43000c6bbea2529ca095f418e on smacke:develop** into **4709bccde76945e32793df125bdd228e3c15297c on cdown:develop**.
smacke: @cdown I (finally) updated this PR with the requested changes. I tested locally and it seems OK. I had a fun time learning how to use the property testing library to automatically generate equivalent string timestamps with potentially different fields. | diff --git a/srt.py b/srt.py
index b037419..2808627 100755
--- a/srt.py
+++ b/srt.py
@@ -11,18 +11,27 @@ import logging
import io
+class TimestampParseError(ValueError):
+ pass
+
+
log = logging.getLogger(__name__)
# "." is not technically valid as a delimiter, but many editors create SRT
# files with this delimiter for whatever reason. Many editors and players
# accept it, so we do too.
RGX_TIMESTAMP_MAGNITUDE_DELIM = r"[,.:,.。:]"
-RGX_TIMESTAMP = RGX_TIMESTAMP_MAGNITUDE_DELIM.join([r"\d+"] * 4)
+RGX_TIMESTAMP_FIELD = r"\d+"
+RGX_TIMESTAMP = RGX_TIMESTAMP_MAGNITUDE_DELIM.join([RGX_TIMESTAMP_FIELD] * 4)
+RGX_TIMESTAMP_PARSEABLE = r"^{}$".format(
+ RGX_TIMESTAMP_MAGNITUDE_DELIM.join(["(" + RGX_TIMESTAMP_FIELD + ")"] * 4)
+)
RGX_INDEX = r"\d+"
RGX_PROPRIETARY = r"[^\r\n]*"
RGX_CONTENT = r".*?"
RGX_POSSIBLE_CRLF = r"\r?\n"
+TS_REGEX = re.compile(RGX_TIMESTAMP_PARSEABLE)
SRT_REGEX = re.compile(
r"\s*({idx})\s*{eof}({ts}) +-[ -]> +({ts}) ?({proprietary}){eof}({content})"
# Many sub editors don't add a blank line to the end, and many editors and
@@ -50,7 +59,6 @@ SRT_REGEX = re.compile(
),
re.DOTALL,
)
-TS_LEN = 12
STANDARD_TS_COLON_OFFSET = 2
ZERO_TIMEDELTA = timedelta(0)
@@ -213,21 +221,10 @@ def srt_timestamp_to_timedelta(ts):
:rtype: datetime.timedelta
"""
- # This function is *extremely* hot during parsing, so please keep perf in
- # mind.
-
- if len(ts) < TS_LEN:
- raise ValueError(
- "Expected timestamp length >= {}, but got {} (value: {})".format(
- TS_LEN, len(ts), ts
- )
- )
-
- # Doing this instead of splitting based on the delimiter using re.split
- # with a compiled regex or str.split is ~15% performance improvement during
- # parsing. We need to look from the end because the number of hours may be
- # >2 digits.
- hrs, mins, secs, msecs = (int(x) for x in [ts[:-10], ts[-9:-7], ts[-6:-4], ts[-3:]])
+ match = TS_REGEX.match(ts)
+ if match is None:
+ raise TimestampParseError("Got unparseable timestamp: {}".format(ts))
+ hrs, mins, secs, msecs = map(int, match.groups())
return timedelta(hours=hrs, minutes=mins, seconds=secs, milliseconds=msecs)
| srt is not robust to timestamps whose ms fields do not have exactly 3 digits
First off, awesome library -- I use this in one of my projects and have found it a delight to work with.
Perhaps this isn't an issue with srt but with improperly formatted srt files. Apparently, sometimes the millisecond field in some timestamps does not have exactly 3 digits, and this causes srt to fail to parse the content. This does happen in practice; see smacke/subsync#45 for an example spotted by one of my users. It would be awesome if srt could be tolerant to such cases. I tested a diff that seems to provide the requested fix, but it does come at a perf cost. I'll send a PR and we can discuss more whether the perf hit is justified.
index bbd6f98..e21b450 100644
--- a/tests/test_srt.py
+++ b/tests/test_srt.py
@@ -88,6 +88,49 @@ def timedeltas(min_value=0, max_value=TIMEDELTA_MAX_DAYS):
return timestamp_strategy
+def equivalent_timestamps(min_value=0, max_value=TIMEDELTA_MAX_DAYS):
+ def string_timestamp(hours, minutes, seconds, msecs, paddings):
+ hours, minutes, seconds, msecs = map(
+ lambda v_and_p: "0" * v_and_p[1] + str(v_and_p[0]),
+ zip((hours, minutes, seconds, msecs), paddings),
+ )
+ return "{}:{}:{},{}".format(hours, minutes, seconds, msecs)
+
+ def ts_field_value():
+ return st.integers(min_value=min_value, max_value=max_value)
+
+ def zero_padding():
+ return st.integers(min_value=0, max_value=2)
+
+ @st.composite
+ def maybe_off_by_one_fields(draw):
+ field = draw(ts_field_value())
+ field_maybe_plus_one = draw(st.integers(min_value=field, max_value=field + 1))
+ return field_maybe_plus_one, field
+
+ def get_equiv_timestamps(h, m, s, ms2, ts1paddings, ts2paddings):
+ h2, h1 = h
+ m2, m1 = m
+ s2, s1 = s
+ ms1 = (
+ (h2 - h1) * 60 * 60 * 1000 + (m2 - m1) * 60 * 1000 + (s2 - s1) * 1000 + ms2
+ )
+ return (
+ string_timestamp(h2, m2, s2, ms2, ts2paddings),
+ string_timestamp(h1, m1, s1, ms1, ts1paddings),
+ )
+
+ return st.builds(
+ get_equiv_timestamps,
+ maybe_off_by_one_fields(),
+ maybe_off_by_one_fields(),
+ maybe_off_by_one_fields(),
+ ts_field_value(),
+ st.tuples(*[zero_padding() for _ in range(4)]),
+ st.tuples(*[zero_padding() for _ in range(4)]),
+ )
+
+
def subtitles(strict=True):
"""A Hypothesis strategy to generate Subtitle objects."""
# max_value settings are just to avoid overflowing TIMEDELTA_MAX_DAYS by
@@ -502,11 +545,18 @@ def test_compose_and_parse_strict_custom_eol(input_subs, eol):
subs_eq(reparsed_subs, input_subs)
+@given(equivalent_timestamps())
+def test_equal_timestamps_despite_different_fields_parsed_as_equal(timestamps):
+ ts1, ts2 = timestamps
+ eq(srt.srt_timestamp_to_timedelta(ts1), srt.srt_timestamp_to_timedelta(ts2))
+
+
@given(timedeltas())
-def test_srt_timestamp_to_timedelta_too_short_raises(ts):
- srt_ts = srt.timedelta_to_srt_timestamp(ts)[:-1]
- with assert_raises(ValueError):
- srt.srt_timestamp_to_timedelta(srt_ts)
+def test_bad_timestamp_format_raises(ts):
+ ts = srt.timedelta_to_srt_timestamp(ts)
+ ts = ts.replace(":", "t", 1)
+ with assert_raises(srt.TimestampParseError):
+ srt.srt_timestamp_to_timedelta(ts)
@given(st.lists(subtitles()), st.lists(st.sampled_from(string.whitespace)))
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 2.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"nose",
"hypothesis",
"pytest"
],
"pre_install": [],
"python": "3.7",
"reqs_path": [
"tests/requirements.txt",
"docs/requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs==24.2.0
Babel==2.14.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
docutils==0.17.1
exceptiongroup==1.2.2
hypothesis==4.57.1
idna==3.10
imagesize==1.4.1
importlib-metadata==6.7.0
iniconfig==2.0.0
Jinja2==3.1.6
MarkupSafe==2.1.5
nose==1.3.7
packaging==24.0
pluggy==1.2.0
Pygments==2.17.2
pytest==7.4.4
pytz==2025.2
requests==2.31.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
Sphinx==2.4.5
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
-e git+https://github.com/cdown/srt.git@4709bccde76945e32793df125bdd228e3c15297c#egg=srt
tomli==2.0.1
typing_extensions==4.7.1
urllib3==2.0.7
zipp==3.15.0
| name: srt
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- attrs==24.2.0
- babel==2.14.0
- charset-normalizer==3.4.1
- docutils==0.17.1
- exceptiongroup==1.2.2
- hypothesis==4.57.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- jinja2==3.1.6
- markupsafe==2.1.5
- nose==1.3.7
- packaging==24.0
- pluggy==1.2.0
- pygments==2.17.2
- pytest==7.4.4
- pytz==2025.2
- requests==2.31.0
- snowballstemmer==2.2.0
- sortedcontainers==2.4.0
- sphinx==2.4.5
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- tomli==2.0.1
- typing-extensions==4.7.1
- urllib3==2.0.7
- zipp==3.15.0
prefix: /opt/conda/envs/srt
| [
"tests/test_srt.py::test_equal_timestamps_despite_different_fields_parsed_as_equal",
"tests/test_srt.py::test_bad_timestamp_format_raises"
] | [] | [
"tests/test_srt.py::test_compose_and_parse_from_file",
"tests/test_srt.py::test_compose_and_parse_from_file_bom",
"tests/test_srt.py::test_compose_and_parse_strict",
"tests/test_srt.py::test_can_compose_without_ending_blank_line",
"tests/test_srt.py::test_can_compose_without_eol_at_all",
"tests/test_srt.py::test_compose_and_parse_strict_mode",
"tests/test_srt.py::test_timedelta_to_srt_timestamp_can_go_over_24_hours",
"tests/test_srt.py::test_subtitle_equality",
"tests/test_srt.py::test_subtitle_inequality",
"tests/test_srt.py::test_subtitle_from_scratch_equality",
"tests/test_srt.py::test_parsing_spaced_arrow",
"tests/test_srt.py::test_parsing_leading_whitespace",
"tests/test_srt.py::test_parsing_content_with_blank_lines",
"tests/test_srt.py::test_parsing_no_content",
"tests/test_srt.py::test_subs_missing_content_removed",
"tests/test_srt.py::test_subs_starts_before_zero_removed",
"tests/test_srt.py::test_sort_and_reindex",
"tests/test_srt.py::test_sort_and_reindex_no_skip",
"tests/test_srt.py::test_sort_and_reindex_same_start_time_uses_end",
"tests/test_srt.py::test_sort_and_reindex_not_in_place_matches",
"tests/test_srt.py::test_parser_noncontiguous",
"tests/test_srt.py::test_parser_noncontiguous_leading",
"tests/test_srt.py::test_parser_didnt_match_to_end_raises",
"tests/test_srt.py::test_parser_can_parse_with_dot_msec_delimiter",
"tests/test_srt.py::test_parser_can_parse_with_fullwidth_delimiter",
"tests/test_srt.py::test_repr_doesnt_crash",
"tests/test_srt.py::test_parser_accepts_no_newline_no_content",
"tests/test_srt.py::test_compose_and_parse_strict_crlf",
"tests/test_srt.py::test_compose_and_parse_strict_custom_eol",
"tests/test_srt.py::test_can_parse_index_trailing_ws",
"tests/test_srt.py::test_can_parse_index_leading_zeroes"
] | [] | MIT License | 5,449 | 705 | [
"srt.py"
] |
locustio__locust-1088 | 24e0a8e797af2ef182c1cdbe65338f15543b3602 | 2019-09-22 08:14:29 | ba265c5a27676546eb2c9ba8d2cbc975f54c31b7 | codecov[bot]: # [Codecov](https://codecov.io/gh/locustio/locust/pull/1088?src=pr&el=h1) Report
> Merging [#1088](https://codecov.io/gh/locustio/locust/pull/1088?src=pr&el=desc) into [master](https://codecov.io/gh/locustio/locust/commit/644496f5548ceb5a9ff4220aa808ec25a4c9a143?src=pr&el=desc) will **decrease** coverage by `0.12%`.
> The diff coverage is `0%`.
[](https://codecov.io/gh/locustio/locust/pull/1088?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1088 +/- ##
==========================================
- Coverage 72.29% 72.16% -0.13%
==========================================
Files 17 17
Lines 1700 1703 +3
Branches 253 254 +1
==========================================
Hits 1229 1229
- Misses 408 410 +2
- Partials 63 64 +1
```
| [Impacted Files](https://codecov.io/gh/locustio/locust/pull/1088?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [locust/stats.py](https://codecov.io/gh/locustio/locust/pull/1088/diff?src=pr&el=tree#diff-bG9jdXN0L3N0YXRzLnB5) | `79.89% <0%> (-0.65%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/locustio/locust/pull/1088?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/locustio/locust/pull/1088?src=pr&el=footer). Last update [644496f...6b13bea](https://codecov.io/gh/locustio/locust/pull/1088?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/locust/stats.py b/locust/stats.py
index b49beaf3..e27076c3 100644
--- a/locust/stats.py
+++ b/locust/stats.py
@@ -52,6 +52,8 @@ def calculate_response_time_percentile(response_times, num_requests, percent):
processed_count += response_times[response_time]
if(num_requests - processed_count <= num_of_request):
return response_time
+ # if all response times were None
+ return 0
def diff_response_time_dicts(latest, old):
@@ -81,6 +83,10 @@ class RequestStats(object):
def num_requests(self):
return self.total.num_requests
+ @property
+ def num_none_requests(self):
+ return self.total.num_none_requests
+
@property
def num_failures(self):
return self.total.num_failures
@@ -155,6 +161,9 @@ class StatsEntry(object):
num_requests = None
""" The number of requests made """
+ num_none_requests = None
+ """ The number of requests made with a None response time (typically async requests) """
+
num_failures = None
""" Number of failed request """
@@ -214,6 +223,7 @@ class StatsEntry(object):
def reset(self):
self.start_time = time.time()
self.num_requests = 0
+ self.num_none_requests = 0
self.num_failures = 0
self.total_response_time = 0
self.response_times = {}
@@ -246,6 +256,9 @@ class StatsEntry(object):
self.last_request_timestamp = t
def _log_response_time(self, response_time):
+ if response_time is None:
+ self.num_none_requests += 1
+ return
self.total_response_time += response_time
@@ -287,7 +300,7 @@ class StatsEntry(object):
@property
def avg_response_time(self):
try:
- return float(self.total_response_time) / self.num_requests
+ return float(self.total_response_time) / (self.num_requests - self.num_none_requests)
except ZeroDivisionError:
return 0
@@ -295,7 +308,7 @@ class StatsEntry(object):
def median_response_time(self):
if not self.response_times:
return 0
- median = median_from_dict(self.num_requests, self.response_times)
+ median = median_from_dict(self.num_requests - self.num_none_requests, self.response_times) or 0
# Since we only use two digits of precision when calculating the median response time
# while still using the exact values for min and max response times, the following checks
@@ -340,6 +353,7 @@ class StatsEntry(object):
self.start_time = min(self.start_time, other.start_time)
self.num_requests = self.num_requests + other.num_requests
+ self.num_none_requests = self.num_none_requests + other.num_none_requests
self.num_failures = self.num_failures + other.num_failures
self.total_response_time = self.total_response_time + other.total_response_time
self.max_response_time = max(self.max_response_time, other.max_response_time)
@@ -362,6 +376,7 @@ class StatsEntry(object):
"last_request_timestamp": self.last_request_timestamp,
"start_time": self.start_time,
"num_requests": self.num_requests,
+ "num_none_requests": self.num_none_requests,
"num_failures": self.num_failures,
"total_response_time": self.total_response_time,
"max_response_time": self.max_response_time,
@@ -378,6 +393,7 @@ class StatsEntry(object):
"last_request_timestamp",
"start_time",
"num_requests",
+ "num_none_requests",
"num_failures",
"total_response_time",
"max_response_time",
| Samples with response_time None crashes stats.py
Certain async protocols/clients (like websockets, MQ, etc.) don't really have a response time. They should be able to report a None response time. Using zero is not good enough, because a test may contain a mix of synchronous requests (like HTTP) and async, and we don't want to "taint" the average response time with zeroes.
```
from locust import HttpLocust, task, TaskSet
from locust.events import request_success
class UserBehavior(TaskSet):
@task
def my_task(self):
request_success.fire(request_type="foo", name="bar", response_time=None, response_length=0)
class MyHttpLocust(HttpLocust):
task_set = UserBehavior
```
Result:
```
[2019-09-22 09:57:13,877] lafp-mac-JG5J.int.svenskaspel.se/ERROR/stderr: Traceback (most recent call last):
File "/Users/lafp/git/locust/locust/core.py", line 361, in run
self.execute_next_task()
File "/Users/lafp/git/locust/locust/core.py", line 387, in execute_next_task
self.execute_task(task["callable"], *task["args"], **task["kwargs"])
File "/Users/lafp/git/locust/locust/core.py", line 399, in execute_task
task(self, *args, **kwargs)
File "/Users/lafp/git/locust-plugins/examples/foo.py", line 10, in my_task
request_success.fire(request_type="foo", name="bar", response_time=None, response_length=0)
File "/Users/lafp/git/locust/locust/events.py", line 34, in fire
handler(**kwargs)
File "/Users/lafp/git/locust/locust/stats.py", line 559, in on_request_success
global_stats.log_request(request_type, name, response_time, response_length)
File "/Users/lafp/git/locust/locust/stats.py", line 93, in log_request
self.total.log(response_time, content_length)
File "/Users/lafp/git/locust/locust/stats.py", line 239, in log
self._log_response_time(response_time)
File "/Users/lafp/git/locust/locust/stats.py", line 250, in _log_response_time
self.total_response_time += response_time
TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType'
``` | locustio/locust | diff --git a/locust/test/test_runners.py b/locust/test/test_runners.py
index 14ad50f9..ef70f726 100644
--- a/locust/test/test_runners.py
+++ b/locust/test/test_runners.py
@@ -159,6 +159,35 @@ class TestMasterRunner(LocustTestCase):
s = master.stats.get("/", "GET")
self.assertEqual(700, s.median_response_time)
+ def test_slave_stats_report_with_none_response_times(self):
+ class MyTestLocust(Locust):
+ pass
+
+ with mock.patch("locust.rpc.rpc.Server", mocked_rpc_server()) as server:
+ master = MasterLocustRunner(MyTestLocust, self.options)
+ server.mocked_send(Message("client_ready", None, "fake_client"))
+
+ master.stats.get("/mixed", "GET").log(0, 23455)
+ master.stats.get("/mixed", "GET").log(800, 23455)
+ master.stats.get("/mixed", "GET").log(700, 23455)
+ master.stats.get("/mixed", "GET").log(None, 23455)
+ master.stats.get("/mixed", "GET").log(None, 23455)
+ master.stats.get("/mixed", "GET").log(None, 23455)
+ master.stats.get("/mixed", "GET").log(None, 23455)
+ master.stats.get("/onlyNone", "GET").log(None, 23455)
+
+ data = {"user_count":1}
+ events.report_to_master.fire(client_id="fake_client", data=data)
+ master.stats.clear_all()
+
+ server.mocked_send(Message("stats", data, "fake_client"))
+ s1 = master.stats.get("/mixed", "GET")
+ self.assertEqual(700, s1.median_response_time)
+ self.assertEqual(500, s1.avg_response_time)
+ s2 = master.stats.get("/onlyNone", "GET")
+ self.assertEqual(0, s2.median_response_time)
+ self.assertEqual(0, s2.avg_response_time)
+
def test_master_marks_downed_slaves_as_missing(self):
class MyTestLocust(Locust):
pass
@@ -195,7 +224,43 @@ class TestMasterRunner(LocustTestCase):
"user_count": 2,
}, "fake_client"))
self.assertEqual(700, master.stats.total.median_response_time)
-
+
+ def test_master_total_stats_with_none_response_times(self):
+ class MyTestLocust(Locust):
+ pass
+
+ with mock.patch("locust.rpc.rpc.Server", mocked_rpc_server()) as server:
+ master = MasterLocustRunner(MyTestLocust, self.options)
+ server.mocked_send(Message("client_ready", None, "fake_client"))
+ stats = RequestStats()
+ stats.log_request("GET", "/1", 100, 3546)
+ stats.log_request("GET", "/1", 800, 56743)
+ stats.log_request("GET", "/1", None, 56743)
+ stats2 = RequestStats()
+ stats2.log_request("GET", "/2", 700, 2201)
+ stats2.log_request("GET", "/2", None, 2201)
+ stats3 = RequestStats()
+ stats3.log_request("GET", "/3", None, 2201)
+ server.mocked_send(Message("stats", {
+ "stats":stats.serialize_stats(),
+ "stats_total": stats.total.serialize(),
+ "errors":stats.serialize_errors(),
+ "user_count": 1,
+ }, "fake_client"))
+ server.mocked_send(Message("stats", {
+ "stats":stats2.serialize_stats(),
+ "stats_total": stats2.total.serialize(),
+ "errors":stats2.serialize_errors(),
+ "user_count": 2,
+ }, "fake_client"))
+ server.mocked_send(Message("stats", {
+ "stats":stats3.serialize_stats(),
+ "stats_total": stats3.total.serialize(),
+ "errors":stats3.serialize_errors(),
+ "user_count": 2,
+ }, "fake_client"))
+ self.assertEqual(700, master.stats.total.median_response_time)
+
def test_master_current_response_times(self):
class MyTestLocust(Locust):
pass
diff --git a/locust/test/test_stats.py b/locust/test/test_stats.py
index 0250231e..444426c7 100644
--- a/locust/test/test_stats.py
+++ b/locust/test/test_stats.py
@@ -19,12 +19,14 @@ class TestRequestStats(unittest.TestCase):
self.s.log(45, 0)
self.s.log(135, 0)
self.s.log(44, 0)
+ self.s.log(None, 0)
self.s.log_error(Exception("dummy fail"))
self.s.log_error(Exception("dummy fail"))
self.s.log(375, 0)
self.s.log(601, 0)
self.s.log(35, 0)
self.s.log(79, 0)
+ self.s.log(None, 0)
self.s.log_error(Exception("dummy fail"))
def test_percentile(self):
@@ -48,17 +50,17 @@ class TestRequestStats(unittest.TestCase):
self.assertEqual(s.median_response_time, 6099)
def test_total_rps(self):
- self.assertEqual(self.s.total_rps, 7)
+ self.assertEqual(self.s.total_rps, 9)
def test_current_rps(self):
self.stats.total.last_request_timestamp = int(time.time()) + 4
- self.assertEqual(self.s.current_rps, 3.5)
+ self.assertEqual(self.s.current_rps, 4.5)
self.stats.total.last_request_timestamp = int(time.time()) + 25
self.assertEqual(self.s.current_rps, 0)
def test_num_reqs_fails(self):
- self.assertEqual(self.s.num_requests, 7)
+ self.assertEqual(self.s.num_requests, 9)
self.assertEqual(self.s.num_failures, 3)
def test_avg(self):
@@ -76,6 +78,13 @@ class TestRequestStats(unittest.TestCase):
self.assertEqual(self.s.avg_response_time, 420.5)
self.assertEqual(self.s.median_response_time, 85)
+ def test_avg_only_none(self):
+ self.s.reset()
+ self.s.log(None, 123)
+ self.assertEqual(self.s.avg_response_time, 0)
+ self.assertEqual(self.s.median_response_time, 0)
+ self.assertEqual(self.s.get_response_time_percentile(0.5), 0)
+
def test_reset_min_response_time(self):
self.s.reset()
self.s.log(756, 0)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 1
} | 0.12 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"codecov",
"flake8",
"mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
click==8.0.4
codecov==2.1.13
coverage==6.2
dataclasses==0.8
flake8==5.0.4
Flask==2.0.3
gevent==22.10.2
geventhttpclient-wheels==1.3.1.dev3
greenlet==2.0.2
idna==3.10
importlib-metadata==4.2.0
iniconfig==1.1.1
itsdangerous==2.0.1
Jinja2==3.0.3
-e git+https://github.com/locustio/locust.git@24e0a8e797af2ef182c1cdbe65338f15543b3602#egg=locustio
MarkupSafe==2.0.1
mccabe==0.7.0
mock==5.2.0
msgpack-python==0.5.6
packaging==21.3
pluggy==1.0.0
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
pyparsing==3.1.4
pytest==7.0.1
pyzmq==25.1.2
requests==2.27.1
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
Werkzeug==2.0.3
zipp==3.6.0
zope.event==4.6
zope.interface==5.5.2
| name: locust
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- click==8.0.4
- codecov==2.1.13
- coverage==6.2
- dataclasses==0.8
- flake8==5.0.4
- flask==2.0.3
- gevent==22.10.2
- geventhttpclient-wheels==1.3.1.dev3
- greenlet==2.0.2
- idna==3.10
- importlib-metadata==4.2.0
- iniconfig==1.1.1
- itsdangerous==2.0.1
- jinja2==3.0.3
- markupsafe==2.0.1
- mccabe==0.7.0
- mock==5.2.0
- msgpack-python==0.5.6
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- pyzmq==25.1.2
- requests==2.27.1
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- werkzeug==2.0.3
- zipp==3.6.0
- zope-event==4.6
- zope-interface==5.5.2
prefix: /opt/conda/envs/locust
| [
"locust/test/test_runners.py::TestMasterRunner::test_master_total_stats_with_none_response_times",
"locust/test/test_runners.py::TestMasterRunner::test_slave_stats_report_with_none_response_times",
"locust/test/test_stats.py::TestRequestStats::test_aggregation",
"locust/test/test_stats.py::TestRequestStats::test_aggregation_min_response_time",
"locust/test/test_stats.py::TestRequestStats::test_aggregation_with_rounding",
"locust/test/test_stats.py::TestRequestStats::test_avg",
"locust/test/test_stats.py::TestRequestStats::test_avg_only_none",
"locust/test/test_stats.py::TestRequestStats::test_current_rps",
"locust/test/test_stats.py::TestRequestStats::test_error_grouping",
"locust/test/test_stats.py::TestRequestStats::test_error_grouping_errors_with_memory_addresses",
"locust/test/test_stats.py::TestRequestStats::test_median",
"locust/test/test_stats.py::TestRequestStats::test_median_out_of_min_max_bounds",
"locust/test/test_stats.py::TestRequestStats::test_num_reqs_fails",
"locust/test/test_stats.py::TestRequestStats::test_percentile",
"locust/test/test_stats.py::TestRequestStats::test_percentile_rounded_down",
"locust/test/test_stats.py::TestRequestStats::test_percentile_rounded_up",
"locust/test/test_stats.py::TestRequestStats::test_reset",
"locust/test/test_stats.py::TestRequestStats::test_reset_min_response_time",
"locust/test/test_stats.py::TestRequestStats::test_serialize_through_message",
"locust/test/test_stats.py::TestRequestStats::test_total_rps"
] | [] | [
"locust/test/test_runners.py::TestLocustRunner::test_weight_locusts",
"locust/test/test_runners.py::TestLocustRunner::test_weight_locusts_fewer_amount_than_locust_classes",
"locust/test/test_runners.py::TestMasterRunner::test_exception_in_task",
"locust/test/test_runners.py::TestMasterRunner::test_exception_is_catched",
"locust/test/test_runners.py::TestMasterRunner::test_master_current_response_times",
"locust/test/test_runners.py::TestMasterRunner::test_master_marks_downed_slaves_as_missing",
"locust/test/test_runners.py::TestMasterRunner::test_master_total_stats",
"locust/test/test_runners.py::TestMasterRunner::test_sends_hatch_data_to_ready_running_hatching_slaves",
"locust/test/test_runners.py::TestMasterRunner::test_slave_connect",
"locust/test/test_runners.py::TestMasterRunner::test_slave_stats_report_median",
"locust/test/test_runners.py::TestMasterRunner::test_spawn_fewer_locusts_than_slaves",
"locust/test/test_runners.py::TestMasterRunner::test_spawn_uneven_locusts",
"locust/test/test_runners.py::TestMasterRunner::test_spawn_zero_locusts",
"locust/test/test_runners.py::TestMessageSerializing::test_message_serialize",
"locust/test/test_stats.py::TestStatsEntryResponseTimesCache::test_diff_response_times_dicts",
"locust/test/test_stats.py::TestStatsEntryResponseTimesCache::test_get_current_response_time_percentile",
"locust/test/test_stats.py::TestStatsEntryResponseTimesCache::test_latest_total_response_times_pruned",
"locust/test/test_stats.py::TestStatsEntryResponseTimesCache::test_response_times_cached",
"locust/test/test_stats.py::TestStatsEntryResponseTimesCache::test_response_times_not_cached_if_not_enabled",
"locust/test/test_stats.py::TestStatsEntry::test_fail_ratio_with_all_failures",
"locust/test/test_stats.py::TestStatsEntry::test_fail_ratio_with_half_failures",
"locust/test/test_stats.py::TestStatsEntry::test_fail_ratio_with_no_failures",
"locust/test/test_stats.py::TestRequestStatsWithWebserver::test_request_connection_error",
"locust/test/test_stats.py::TestRequestStatsWithWebserver::test_request_stats_content_length",
"locust/test/test_stats.py::TestRequestStatsWithWebserver::test_request_stats_named_endpoint",
"locust/test/test_stats.py::TestRequestStatsWithWebserver::test_request_stats_no_content_length",
"locust/test/test_stats.py::TestRequestStatsWithWebserver::test_request_stats_no_content_length_streaming",
"locust/test/test_stats.py::TestRequestStatsWithWebserver::test_request_stats_put",
"locust/test/test_stats.py::TestRequestStatsWithWebserver::test_request_stats_query_variables",
"locust/test/test_stats.py::TestInspectLocust::test_get_task_ratio_dict_relative",
"locust/test/test_stats.py::TestInspectLocust::test_get_task_ratio_dict_total"
] | [] | MIT License | 5,450 | 886 | [
"locust/stats.py"
] |
PyCQA__pyflakes-472 | 5ed30a6b9e4b9c1bb4042e5c5b3b506e52133da4 | 2019-09-23 12:32:58 | 0af480e3351ae40b4ae7f3ce7272a46fd4265dbd | crusaderky: Tested manually; it works
asottile: (counting that as two approvals :laughing:) | diff --git a/pyflakes/checker.py b/pyflakes/checker.py
index 44c6b25..eca2002 100644
--- a/pyflakes/checker.py
+++ b/pyflakes/checker.py
@@ -72,9 +72,11 @@ else:
if PY35_PLUS:
FOR_TYPES = (ast.For, ast.AsyncFor)
LOOP_TYPES = (ast.While, ast.For, ast.AsyncFor)
+ FUNCTION_TYPES = (ast.FunctionDef, ast.AsyncFunctionDef)
else:
FOR_TYPES = (ast.For,)
LOOP_TYPES = (ast.While, ast.For)
+ FUNCTION_TYPES = (ast.FunctionDef,)
# https://github.com/python/typed_ast/blob/1.4.0/ast27/Parser/tokenizer.c#L102-L104
TYPE_COMMENT_RE = re.compile(r'^#\s*type:\s*')
@@ -642,7 +644,7 @@ def is_typing_overload(value, scope_stack):
)
return (
- isinstance(value.source, ast.FunctionDef) and
+ isinstance(value.source, FUNCTION_TYPES) and
any(
is_typing_overload_decorator(dec)
for dec in value.source.decorator_list
| @overload of async functions
Since #435, the pyflakes git tip copes with synchronous @overload'ed functions, but not with async ones:
```python
from typing import overload
@overload
def f(x: int) -> int:
...
@overload
def f(x: float) -> float:
...
def f(x):
return x
@overload
async def g(x: int) -> int:
...
@overload
async def g(x: float) -> float:
...
async def g(x):
return x
```
```bash
$ python -m pyflakes .
18:1 redefinition of unused 'g' from line 14
22:1 redefinition of unused 'g' from line 18
``` | PyCQA/pyflakes | diff --git a/pyflakes/test/test_type_annotations.py b/pyflakes/test/test_type_annotations.py
index 676a4a5..045c57a 100644
--- a/pyflakes/test/test_type_annotations.py
+++ b/pyflakes/test/test_type_annotations.py
@@ -56,6 +56,24 @@ class TestTypeAnnotations(TestCase):
return s
""")
+ @skipIf(version_info < (3, 5), 'new in Python 3.5')
+ def test_typingOverloadAsync(self):
+ """Allow intentional redefinitions via @typing.overload (async)"""
+ self.flakes("""
+ from typing import overload
+
+ @overload
+ async def f(s): # type: (None) -> None
+ pass
+
+ @overload
+ async def f(s): # type: (int) -> int
+ pass
+
+ async def f(s):
+ return s
+ """)
+
def test_overload_with_multiple_decorators(self):
self.flakes("""
from typing import overload
| {
"commit_name": "merge_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 2.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"flake8",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
flake8==7.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
mccabe==0.7.0
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pycodestyle==2.13.0
-e git+https://github.com/PyCQA/pyflakes.git@5ed30a6b9e4b9c1bb4042e5c5b3b506e52133da4#egg=pyflakes
pytest @ file:///croot/pytest_1738938843180/work
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: pyflakes
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- flake8==7.2.0
- mccabe==0.7.0
- pycodestyle==2.13.0
prefix: /opt/conda/envs/pyflakes
| [
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typingOverloadAsync"
] | [] | [
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_annotated_async_def",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_not_a_typing_overload",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_overload_in_class",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_overload_with_multiple_decorators",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_postponed_annotations",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_return_annotation_is_class_scope_variable",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_return_annotation_is_function_body_variable",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsAdditionalComemnt",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsAssignedToPreviousNode",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsFullSignature",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsFullSignatureWithDocstring",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsInvalidDoesNotMarkAsUsed",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsMarkImportsAsUsed",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsNoWhitespaceAnnotation",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsStarArgs",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsSyntaxError",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeCommentsSyntaxErrorCorrectLine",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeIgnore",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeIgnoreBogus",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typeIgnoreBogusUnicode",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typingExtensionsOverload",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_typingOverload",
"pyflakes/test/test_type_annotations.py::TestTypeAnnotations::test_variable_annotations"
] | [] | MIT License | 5,455 | 281 | [
"pyflakes/checker.py"
] |
cloud-custodian__cloud-custodian-4846 | 91f2016bfe3a595ec2b0a73695ebce8ad962175c | 2019-09-23 13:39:59 | 823befc421c8298c4d151c2ef86aa30a03039594 | kapilt: simplified and backfilled a unit test
kapilt: docker build failure was unrelated, due to a python cryptography release without wheels/binaries. | diff --git a/c7n/resources/awslambda.py b/c7n/resources/awslambda.py
index be593671e..be91fdaa4 100644
--- a/c7n/resources/awslambda.py
+++ b/c7n/resources/awslambda.py
@@ -21,6 +21,7 @@ from concurrent.futures import as_completed
from c7n.actions import BaseAction, RemovePolicyBase, ModifyVpcSecurityGroupsAction
from c7n.filters import CrossAccountAccessFilter, ValueFilter
+from c7n.filters.kms import KmsRelatedFilter
import c7n.filters.vpc as net_filters
from c7n.manager import resources
from c7n import query
@@ -257,6 +258,28 @@ class LambdaCrossAccountAccessFilter(CrossAccountAccessFilter):
resources, event)
[email protected]_registry.register('kms-key')
+class KmsFilter(KmsRelatedFilter):
+ """
+    Filter a resource by its associated kms key and optionally the alias name
+    of the kms key by using 'c7n:AliasName'
+
+ :example:
+
+ .. code-block:: yaml
+
+ policies:
+ - name: lambda-kms-key-filters
+ resource: aws.lambda
+ filters:
+ - type: kms-key
+ key: c7n:AliasName
+ value: "^(alias/aws/lambda)"
+ op: regex
+ """
+ RelatedIdsExpression = 'KMSKeyArn'
+
+
@AWSLambda.action_registry.register('remove-statements')
class RemovePolicyStatement(RemovePolicyBase):
"""Action to remove policy/permission statements from lambda functions.
diff --git a/c7n/resources/vpc.py b/c7n/resources/vpc.py
index c7a2adf1c..cea0d2885 100644
--- a/c7n/resources/vpc.py
+++ b/c7n/resources/vpc.py
@@ -688,7 +688,7 @@ class SGUsage(Filter):
return list(itertools.chain(
*[self.manager.get_resource_manager(m).get_permissions()
for m in
- ['lambda', 'eni', 'launch-config', 'security-group']]))
+ ['lambda', 'eni', 'launch-config', 'security-group', 'event-rule-target']]))
def filter_peered_refs(self, resources):
if not resources:
@@ -705,14 +705,18 @@ class SGUsage(Filter):
"%d of %d groups w/ peered refs", len(peered_ids), len(resources))
return [r for r in resources if r['GroupId'] not in peered_ids]
+ def get_scanners(self):
+ return (
+ ("nics", self.get_eni_sgs),
+ ("sg-perm-refs", self.get_sg_refs),
+ ('lambdas', self.get_lambda_sgs),
+ ("launch-configs", self.get_launch_config_sgs),
+ ("ecs-cwe", self.get_ecs_cwe_sgs)
+ )
+
def scan_groups(self):
used = set()
- for kind, scanner in (
- ("nics", self.get_eni_sgs),
- ("sg-perm-refs", self.get_sg_refs),
- ('lambdas', self.get_lambda_sgs),
- ("launch-configs", self.get_launch_config_sgs),
- ):
+ for kind, scanner in self.get_scanners():
sg_ids = scanner()
new_refs = sg_ids.difference(used)
used = used.union(sg_ids)
@@ -735,7 +739,7 @@ class SGUsage(Filter):
def get_lambda_sgs(self):
sg_ids = set()
- for func in self.manager.get_resource_manager('lambda').resources():
+ for func in self.manager.get_resource_manager('lambda').resources(augment=False):
if 'VpcConfig' not in func:
continue
for g in func['VpcConfig']['SecurityGroupIds']:
@@ -758,6 +762,16 @@ class SGUsage(Filter):
sg_ids.add(g['GroupId'])
return sg_ids
+ def get_ecs_cwe_sgs(self):
+ sg_ids = set()
+ expr = jmespath.compile(
+ 'EcsParameters.NetworkConfiguration.awsvpcConfiguration.SecurityGroups[]')
+ for rule in self.manager.get_resource_manager('event-rule-target').resources():
+ ids = expr.search(rule)
+ if ids:
+ sg_ids.update(ids)
+ return sg_ids
+
@SecurityGroup.filter_registry.register('unused')
class UnusedSecurityGroup(SGUsage):
@@ -768,7 +782,10 @@ class UnusedSecurityGroup(SGUsage):
lambdas as they may not have extant resources in the vpc at a
given moment. We also find any security group with references from
other security group either within the vpc or across peered
- connections.
+ connections. Also checks cloud watch event targeting ecs.
+
+ Checks - enis, lambda, launch-configs, sg rule refs, and ecs cwe
+ targets.
Note this filter does not support classic security groups atm.
@@ -781,6 +798,7 @@ class UnusedSecurityGroup(SGUsage):
resource: security-group
filters:
- unused
+
"""
schema = type_schema('unused')
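The new ECS/CloudWatch-Events scan above walks a nested jmespath expression over rule targets; a plain-dict equivalent is sketched below (the payload shape mirrors the AWS ListTargetsByRule response; the sample targets are illustrative):

```python
def get_ecs_cwe_sgs(targets):
    """Collect security-group ids referenced by ECS scheduled-task
    targets; dict-traversal equivalent of the jmespath expression
    'EcsParameters.NetworkConfiguration.awsvpcConfiguration.SecurityGroups[]'."""
    sg_ids = set()
    for t in targets:
        cfg = (t.get("EcsParameters", {})
                .get("NetworkConfiguration", {})
                .get("awsvpcConfiguration", {}))
        sg_ids.update(cfg.get("SecurityGroups", []))
    return sg_ids


targets = [
    {"EcsParameters": {"NetworkConfiguration": {
        "awsvpcConfiguration": {"SecurityGroups": ["sg-0f026884bba48e350"]}}}},
    {"Arn": "arn:aws:sns:..."},  # non-ECS target contributes nothing
]
print(get_ecs_cwe_sgs(targets))  # {'sg-0f026884bba48e350'}
```

Any group id collected this way is treated as "used", so an `unused` policy no longer flags security groups that only back ECS Scheduled Tasks.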
| aws - VPC Security Group filters "used" / "unused" and ECS Scheduled Tasks
I just found that those filters were not taking ECS Scheduled Tasks into account (which do carry Security Group info).
I'm preparing a Pull Request with a proposed workaround. | cloud-custodian/cloud-custodian | diff --git a/tests/data/placebo/test_lambda_kms_key_filter/kms.DescribeKey_1.json b/tests/data/placebo/test_lambda_kms_key_filter/kms.DescribeKey_1.json
new file mode 100644
index 000000000..dc406d831
--- /dev/null
+++ b/tests/data/placebo/test_lambda_kms_key_filter/kms.DescribeKey_1.json
@@ -0,0 +1,31 @@
+{
+ "status_code": 200,
+ "data": {
+ "KeyMetadata": {
+ "AWSAccountId": "644160558196",
+ "KeyId": "36812ccf-daaf-49b1-a24b-0eef254ffe41",
+ "Arn": "arn:aws:kms:us-east-1:644160558196:key/36812ccf-daaf-49b1-a24b-0eef254ffe41",
+ "CreationDate": {
+ "__class__": "datetime",
+ "year": 2016,
+ "month": 7,
+ "day": 24,
+ "hour": 4,
+ "minute": 51,
+ "second": 26,
+ "microsecond": 199000
+ },
+ "Enabled": true,
+ "Description": "Trails for the Skunk",
+ "KeyUsage": "ENCRYPT_DECRYPT",
+ "KeyState": "Enabled",
+ "Origin": "AWS_KMS",
+ "KeyManager": "CUSTOMER",
+ "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
+ "EncryptionAlgorithms": [
+ "SYMMETRIC_DEFAULT"
+ ]
+ },
+ "ResponseMetadata": {}
+ }
+}
\ No newline at end of file
diff --git a/tests/data/placebo/test_lambda_kms_key_filter/kms.ListAliases_1.json b/tests/data/placebo/test_lambda_kms_key_filter/kms.ListAliases_1.json
new file mode 100644
index 000000000..35baf402e
--- /dev/null
+++ b/tests/data/placebo/test_lambda_kms_key_filter/kms.ListAliases_1.json
@@ -0,0 +1,14 @@
+{
+ "status_code": 200,
+ "data": {
+ "Aliases": [
+ {
+ "AliasName": "alias/skunk/trails",
+ "AliasArn": "arn:aws:kms:us-east-1:644160558196:alias/skunk/trails",
+ "TargetKeyId": "36812ccf-daaf-49b1-a24b-0eef254ffe41"
+ }
+ ],
+ "Truncated": false,
+ "ResponseMetadata": {}
+ }
+}
\ No newline at end of file
diff --git a/tests/data/placebo/test_lambda_kms_key_filter/kms.ListAliases_2.json b/tests/data/placebo/test_lambda_kms_key_filter/kms.ListAliases_2.json
new file mode 100644
index 000000000..35baf402e
--- /dev/null
+++ b/tests/data/placebo/test_lambda_kms_key_filter/kms.ListAliases_2.json
@@ -0,0 +1,14 @@
+{
+ "status_code": 200,
+ "data": {
+ "Aliases": [
+ {
+ "AliasName": "alias/skunk/trails",
+ "AliasArn": "arn:aws:kms:us-east-1:644160558196:alias/skunk/trails",
+ "TargetKeyId": "36812ccf-daaf-49b1-a24b-0eef254ffe41"
+ }
+ ],
+ "Truncated": false,
+ "ResponseMetadata": {}
+ }
+}
\ No newline at end of file
diff --git a/tests/data/placebo/test_lambda_kms_key_filter/kms.ListKeys_1.json b/tests/data/placebo/test_lambda_kms_key_filter/kms.ListKeys_1.json
new file mode 100644
index 000000000..05030d269
--- /dev/null
+++ b/tests/data/placebo/test_lambda_kms_key_filter/kms.ListKeys_1.json
@@ -0,0 +1,13 @@
+{
+ "status_code": 200,
+ "data": {
+ "Keys": [
+ {
+ "KeyId": "36812ccf-daaf-49b1-a24b-0eef254ffe41",
+ "KeyArn": "arn:aws:kms:us-east-1:644160558196:key/36812ccf-daaf-49b1-a24b-0eef254ffe41"
+ }
+ ],
+ "Truncated": false,
+ "ResponseMetadata": {}
+ }
+}
\ No newline at end of file
diff --git a/tests/data/placebo/test_lambda_kms_key_filter/lambda.ListFunctions_1.json b/tests/data/placebo/test_lambda_kms_key_filter/lambda.ListFunctions_1.json
new file mode 100644
index 000000000..e0fe5a510
--- /dev/null
+++ b/tests/data/placebo/test_lambda_kms_key_filter/lambda.ListFunctions_1.json
@@ -0,0 +1,32 @@
+{
+ "status_code": 200,
+ "data": {
+ "ResponseMetadata": {},
+ "Functions": [
+ {
+ "FunctionName": "test",
+ "FunctionArn": "arn:aws:lambda:us-east-1:644160558196:function:test",
+ "Runtime": "nodejs12.x",
+ "Role": "arn:aws:iam::644160558196:role/sphere12",
+ "Handler": "index.handler",
+ "CodeSize": 304,
+ "Description": "",
+ "Timeout": 3,
+ "MemorySize": 128,
+ "LastModified": "2020-04-21T01:57:58.231+0000",
+ "CodeSha256": "k33TggGtygLrDDrEecfHuvfuVd+6bRxCFv7lErxWOJ0=",
+ "Version": "$LATEST",
+ "Environment": {
+ "Variables": {
+ "test": "test"
+ }
+ },
+ "KMSKeyArn": "arn:aws:kms:us-east-1:644160558196:key/36812ccf-daaf-49b1-a24b-0eef254ffe41",
+ "TracingConfig": {
+ "Mode": "PassThrough"
+ },
+ "RevisionId": "4ce18eaf-7e19-4446-b8b4-fe773a79379f"
+ }
+ ]
+ }
+}
\ No newline at end of file
diff --git a/tests/data/placebo/test_lambda_kms_key_filter/tagging.GetResources_1.json b/tests/data/placebo/test_lambda_kms_key_filter/tagging.GetResources_1.json
new file mode 100644
index 000000000..8b704d185
--- /dev/null
+++ b/tests/data/placebo/test_lambda_kms_key_filter/tagging.GetResources_1.json
@@ -0,0 +1,8 @@
+{
+ "status_code": 200,
+ "data": {
+ "PaginationToken": "",
+ "ResourceTagMappingList": [],
+ "ResponseMetadata": {}
+ }
+}
\ No newline at end of file
diff --git a/tests/data/placebo/test_security_group_ecs_unused/config.SelectResourceConfig_1.json b/tests/data/placebo/test_security_group_ecs_unused/config.SelectResourceConfig_1.json
new file mode 100644
index 000000000..2ab8f39b6
--- /dev/null
+++ b/tests/data/placebo/test_security_group_ecs_unused/config.SelectResourceConfig_1.json
@@ -0,0 +1,19 @@
+{
+ "status_code": 200,
+ "data": {
+ "Results": [
+ "{\"configuration\":{\"groupName\":\"apichanges-task\",\"groupId\":\"sg-0f026884bba48e350\",\"vpcId\":\"vpc-39831640\",\"ipPermissionsEgress\":[{\"ipRanges\":[\"0.0.0.0/0\"],\"prefixListIds\":[],\"userIdGroupPairs\":[],\"ipProtocol\":\"-1\",\"ipv4Ranges\":[{\"description\":\"Allows task to establish connections to all resources\",\"cidrIp\":\"0.0.0.0/0\"}],\"ipv6Ranges\":[]}],\"description\":\"Limit connections from internal resources while allowing apichanges-task to connect to all external resources\",\"ownerId\":\"644160558196\",\"ipPermissions\":[],\"tags\":[{\"value\":\"AWSAPIChanges\",\"key\":\"App\"}]},\"supplementaryConfiguration\":{}}"
+ ],
+ "QueryInfo": {
+ "SelectFields": [
+ {
+ "Name": "configuration"
+ },
+ {
+ "Name": "supplementaryConfiguration"
+ }
+ ]
+ },
+ "ResponseMetadata": {}
+ }
+}
\ No newline at end of file
diff --git a/tests/data/placebo/test_security_group_ecs_unused/events.ListRules_1.json b/tests/data/placebo/test_security_group_ecs_unused/events.ListRules_1.json
new file mode 100644
index 000000000..b49c5332e
--- /dev/null
+++ b/tests/data/placebo/test_security_group_ecs_unused/events.ListRules_1.json
@@ -0,0 +1,16 @@
+{
+ "status_code": 200,
+ "data": {
+ "Rules": [
+ {
+ "Name": "apichanges-build",
+ "Arn": "arn:aws:events:us-east-1:644160558196:rule/apichanges-build",
+ "State": "ENABLED",
+ "Description": "Runs fargate task apichanges: rate(6 hours)",
+ "ScheduleExpression": "rate(6 hours)",
+ "EventBusName": "default"
+ }
+ ],
+ "ResponseMetadata": {}
+ }
+}
diff --git a/tests/data/placebo/test_security_group_ecs_unused/events.ListTargetsByRule_1.json b/tests/data/placebo/test_security_group_ecs_unused/events.ListTargetsByRule_1.json
new file mode 100644
index 000000000..2c6247eca
--- /dev/null
+++ b/tests/data/placebo/test_security_group_ecs_unused/events.ListTargetsByRule_1.json
@@ -0,0 +1,31 @@
+{
+ "status_code": 200,
+ "data": {
+ "Targets": [
+ {
+ "Id": "apichanges-build-target",
+ "Arn": "arn:aws:ecs:us-east-1:644160558196:cluster/apichanges",
+ "RoleArn": "arn:aws:iam::644160558196:role/apichanges-events",
+ "Input": "{}",
+ "EcsParameters": {
+ "TaskDefinitionArn": "arn:aws:ecs:us-east-1:644160558196:task-definition/apichanges:6",
+ "TaskCount": 1,
+ "LaunchType": "FARGATE",
+ "NetworkConfiguration": {
+ "awsvpcConfiguration": {
+ "Subnets": [
+ "subnet-8e261cb2"
+ ],
+ "SecurityGroups": [
+ "sg-0f026884bba48e350"
+ ],
+ "AssignPublicIp": "ENABLED"
+ }
+ },
+ "PlatformVersion": "LATEST"
+ }
+ }
+ ],
+ "ResponseMetadata": {}
+ }
+}
\ No newline at end of file
diff --git a/tests/data/placebo/test_security_group_unused/events.ListRules_1.json b/tests/data/placebo/test_security_group_unused/events.ListRules_1.json
new file mode 100644
index 000000000..67342edda
--- /dev/null
+++ b/tests/data/placebo/test_security_group_unused/events.ListRules_1.json
@@ -0,0 +1,7 @@
+{
+ "status_code": 200,
+ "data": {
+ "Rules": [],
+ "ResponseMetadata": {}
+ }
+}
diff --git a/tests/data/placebo/test_security_group_used/events.ListRules_1.json b/tests/data/placebo/test_security_group_used/events.ListRules_1.json
new file mode 100644
index 000000000..67342edda
--- /dev/null
+++ b/tests/data/placebo/test_security_group_used/events.ListRules_1.json
@@ -0,0 +1,7 @@
+{
+ "status_code": 200,
+ "data": {
+ "Rules": [],
+ "ResponseMetadata": {}
+ }
+}
diff --git a/tests/test_lambda.py b/tests/test_lambda.py
index 6bc3f3f76..6c3df5f57 100644
--- a/tests/test_lambda.py
+++ b/tests/test_lambda.py
@@ -526,3 +526,28 @@ class TestModifyVpcSecurityGroupsAction(BaseTest):
groups = ['sg-12121212', 'sg-34343434']
updatefunc(FunctionName='badname', VpcConfig={'SecurityGroupIds': groups})
updatefunc.assert_called_once()
+
+ def test_lambda_kms_alias(self):
+ session_factory = self.replay_flight_data("test_lambda_kms_key_filter")
+ kms = session_factory().client('kms')
+ p = self.load_policy(
+ {
+ "name": "lambda-kms-alias",
+ "resource": "lambda",
+ "filters": [
+ {
+ 'FunctionName': "test"
+ },
+ {
+ "type": "kms-key",
+ "key": "c7n:AliasName",
+ "value": "alias/skunk/trails",
+ }
+ ]
+ },
+ session_factory=session_factory,
+ )
+ resources = p.run()
+ self.assertTrue(len(resources), 1)
+ aliases = kms.list_aliases(KeyId=resources[0]['KMSKeyArn'])
+ self.assertEqual(aliases['Aliases'][0]['AliasName'], 'alias/skunk/trails')
diff --git a/tests/test_vpc.py b/tests/test_vpc.py
index 64e0f43d8..4075f0bbe 100644
--- a/tests/test_vpc.py
+++ b/tests/test_vpc.py
@@ -1125,6 +1125,24 @@ class SecurityGroupTest(BaseTest):
set([r["GroupId"] for r in resources]),
)
+ def test_unused_ecs(self):
+ factory = self.replay_flight_data("test_security_group_ecs_unused")
+ p = self.load_policy(
+ {'name': 'sg-xyz',
+ 'source': 'config',
+ 'query': [
+ {'clause': "resourceId ='sg-0f026884bba48e350'"}],
+ 'resource': 'security-group',
+ 'filters': ['unused']},
+ session_factory=factory)
+ unused = p.resource_manager.filters[0]
+ self.patch(
+ unused,
+ 'get_scanners',
+ lambda: (('ecs-cwe', unused.get_ecs_cwe_sgs),))
+ resources = p.run()
+ assert resources == []
+
def test_unused(self):
factory = self.replay_flight_data("test_security_group_unused")
p = self.load_policy(
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 2
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest pytest-cov pytest-xdist pytest-mock",
"pytest"
],
"pre_install": null,
"python": "3.8",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | apipkg==1.5
appdirs==1.4.3
argcomplete==1.11.1
attrs==19.3.0
aws-xray-sdk==2.5.0
bleach==3.1.4
boto3==1.12.41
botocore==1.15.41
-e git+https://github.com/cloud-custodian/cloud-custodian.git@91f2016bfe3a595ec2b0a73695ebce8ad962175c#egg=c7n
certifi==2020.4.5.1
cffi==1.14.0
chardet==3.0.4
coverage==5.1
cryptography==2.9
distlib==0.3.0
docutils==0.15.2
entrypoints==0.3
exceptiongroup==1.2.2
execnet==1.7.1
filelock==3.0.12
flake8==3.7.9
future==0.18.2
idna==2.9
importlib-metadata==1.6.0
iniconfig==2.1.0
jeepney==0.4.3
jmespath==0.9.5
jsonpatch==1.25
jsonpickle==1.4
jsonpointer==2.0
jsonschema==3.2.0
keyring==21.2.0
mccabe==0.6.1
mock==4.0.2
more-itertools==8.2.0
multidict==4.7.5
packaging==20.3
pkginfo==1.5.0.1
placebo==0.9.0
pluggy==1.5.0
psutil==5.7.0
py==1.8.1
pycodestyle==2.5.0
pycparser==2.20
pyflakes==2.1.1
Pygments==2.6.1
pyparsing==2.4.7
pyrsistent==0.16.0
pytest==8.3.5
pytest-cov==2.8.1
pytest-forked==1.1.3
pytest-mock==3.14.0
pytest-sugar==0.9.2
pytest-xdist==1.31.0
python-dateutil==2.8.1
PyYAML==5.3.1
readme-renderer==25.0
requests==2.23.0
requests-toolbelt==0.9.1
s3transfer==0.3.3
SecretStorage==3.1.2
six==1.14.0
tabulate==0.8.7
termcolor==1.1.0
toml==0.10.0
tomli==2.2.1
tox==3.14.6
tqdm==4.45.0
twine==3.1.1
urllib3==1.25.9
vcrpy==4.0.2
virtualenv==20.0.18
wcwidth==0.1.9
webencodings==0.5.1
wrapt==1.12.1
yarl==1.4.2
zipp==3.1.0
| name: cloud-custodian
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=24.2=py38h06a4308_0
- python=3.8.20=he870216_0
- readline=8.2=h5eee18b_0
- setuptools=75.1.0=py38h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.44.0=py38h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- apipkg==1.5
- appdirs==1.4.3
- argcomplete==1.11.1
- attrs==19.3.0
- aws-xray-sdk==2.5.0
- bleach==3.1.4
- boto3==1.12.41
- botocore==1.15.41
- c7n==0.9.0
- certifi==2020.4.5.1
- cffi==1.14.0
- chardet==3.0.4
- coverage==5.1
- cryptography==2.9
- distlib==0.3.0
- docutils==0.15.2
- entrypoints==0.3
- exceptiongroup==1.2.2
- execnet==1.7.1
- filelock==3.0.12
- flake8==3.7.9
- future==0.18.2
- idna==2.9
- importlib-metadata==1.6.0
- iniconfig==2.1.0
- jeepney==0.4.3
- jmespath==0.9.5
- jsonpatch==1.25
- jsonpickle==1.4
- jsonpointer==2.0
- jsonschema==3.2.0
- keyring==21.2.0
- mccabe==0.6.1
- mock==4.0.2
- more-itertools==8.2.0
- multidict==4.7.5
- packaging==20.3
- pkginfo==1.5.0.1
- placebo==0.9.0
- pluggy==1.5.0
- psutil==5.7.0
- py==1.8.1
- pycodestyle==2.5.0
- pycparser==2.20
- pyflakes==2.1.1
- pygments==2.6.1
- pyparsing==2.4.7
- pyrsistent==0.16.0
- pytest==8.3.5
- pytest-cov==2.8.1
- pytest-forked==1.1.3
- pytest-mock==3.14.0
- pytest-sugar==0.9.2
- pytest-xdist==1.31.0
- python-dateutil==2.8.1
- pyyaml==5.3.1
- readme-renderer==25.0
- requests==2.23.0
- requests-toolbelt==0.9.1
- s3transfer==0.3.3
- secretstorage==3.1.2
- six==1.14.0
- tabulate==0.8.7
- termcolor==1.1.0
- toml==0.10.0
- tomli==2.2.1
- tox==3.14.6
- tqdm==4.45.0
- twine==3.1.1
- urllib3==1.25.9
- vcrpy==4.0.2
- virtualenv==20.0.18
- wcwidth==0.1.9
- webencodings==0.5.1
- wrapt==1.12.1
- yarl==1.4.2
- zipp==3.1.0
prefix: /opt/conda/envs/cloud-custodian
| [
"tests/test_lambda.py::TestModifyVpcSecurityGroupsAction::test_lambda_kms_alias",
"tests/test_vpc.py::SecurityGroupTest::test_unused_ecs"
] | [] | [
"tests/test_lambda.py::LambdaPermissionTest::test_lambda_permission_matched",
"tests/test_lambda.py::LambdaPermissionTest::test_lambda_permission_named",
"tests/test_lambda.py::LambdaLayerTest::test_delete_layer",
"tests/test_lambda.py::LambdaLayerTest::test_lambda_layer_cross_account",
"tests/test_lambda.py::LambdaTest::test_delete",
"tests/test_lambda.py::LambdaTest::test_delete_reserved_concurrency",
"tests/test_lambda.py::LambdaTest::test_event_source",
"tests/test_lambda.py::LambdaTest::test_lambda_check_permission",
"tests/test_lambda.py::LambdaTest::test_lambda_config_source",
"tests/test_lambda.py::LambdaTest::test_set_expr_concurrency",
"tests/test_lambda.py::LambdaTest::test_set_filter_concurrency",
"tests/test_lambda.py::LambdaTest::test_sg_filter",
"tests/test_lambda.py::LambdaTagTest::test_lambda_tag_and_remove",
"tests/test_lambda.py::LambdaTagTest::test_mark_and_match",
"tests/test_lambda.py::TestModifyVpcSecurityGroupsAction::test_lambda_add_security_group",
"tests/test_lambda.py::TestModifyVpcSecurityGroupsAction::test_lambda_notfound_exception",
"tests/test_lambda.py::TestModifyVpcSecurityGroupsAction::test_lambda_remove_matched_security_groups",
"tests/test_lambda.py::TestModifyVpcSecurityGroupsAction::test_nonvpc_function",
"tests/test_vpc.py::VpcTest::test_attributes_filter_all",
"tests/test_vpc.py::VpcTest::test_attributes_filter_hostnames",
"tests/test_vpc.py::VpcTest::test_dhcp_options_filter",
"tests/test_vpc.py::VpcTest::test_eni_vpc_filter",
"tests/test_vpc.py::VpcTest::test_flow_logs",
"tests/test_vpc.py::VpcTest::test_flow_logs_absent",
"tests/test_vpc.py::VpcTest::test_flow_logs_misconfiguration",
"tests/test_vpc.py::VpcTest::test_flow_logs_s3_destination",
"tests/test_vpc.py::VpcTest::test_vpc_post_finding",
"tests/test_vpc.py::NetworkLocationTest::test_network_location_ignore",
"tests/test_vpc.py::NetworkLocationTest::test_network_location_resource_missing",
"tests/test_vpc.py::NetworkLocationTest::test_network_location_sg_cardinality",
"tests/test_vpc.py::NetworkLocationTest::test_network_location_sg_missing",
"tests/test_vpc.py::NetworkLocationTest::test_network_location_triple_intersect",
"tests/test_vpc.py::NetworkAclTest::test_s3_cidr_network_acl_not_present",
"tests/test_vpc.py::NetworkAclTest::test_s3_cidr_network_acl_present",
"tests/test_vpc.py::TransitGatewayTest::test_tgw_attachment",
"tests/test_vpc.py::TransitGatewayTest::test_tgw_query",
"tests/test_vpc.py::NetworkInterfaceTest::test_and_or_nest",
"tests/test_vpc.py::NetworkInterfaceTest::test_interface_delete",
"tests/test_vpc.py::NetworkInterfaceTest::test_interface_subnet",
"tests/test_vpc.py::NetworkAddrTest::test_elasticip_alias",
"tests/test_vpc.py::NetworkAddrTest::test_norelease_attached_nif",
"tests/test_vpc.py::NetworkAddrTest::test_release_attached_ec2",
"tests/test_vpc.py::NetworkAddrTest::test_release_attached_nif",
"tests/test_vpc.py::NetworkAddrTest::test_release_detached_vpc",
"tests/test_vpc.py::RouteTableTest::test_rt_route_filter",
"tests/test_vpc.py::RouteTableTest::test_rt_subnet_filter",
"tests/test_vpc.py::PeeringConnectionTest::test_peer_cross_account",
"tests/test_vpc.py::PeeringConnectionTest::test_peer_missing_not_found",
"tests/test_vpc.py::PeeringConnectionTest::test_peer_missing_one_route",
"tests/test_vpc.py::PeeringConnectionTest::test_peer_missing_route",
"tests/test_vpc.py::SecurityGroupTest::test_cidr_ingress",
"tests/test_vpc.py::SecurityGroupTest::test_cidr_size_egress",
"tests/test_vpc.py::SecurityGroupTest::test_config_rule",
"tests/test_vpc.py::SecurityGroupTest::test_config_source",
"tests/test_vpc.py::SecurityGroupTest::test_default_vpc",
"tests/test_vpc.py::SecurityGroupTest::test_description_ingress",
"tests/test_vpc.py::SecurityGroupTest::test_egress_ipv6",
"tests/test_vpc.py::SecurityGroupTest::test_egress_validation_error",
"tests/test_vpc.py::SecurityGroupTest::test_id_selector",
"tests/test_vpc.py::SecurityGroupTest::test_ingress_remove",
"tests/test_vpc.py::SecurityGroupTest::test_match_resource_validator",
"tests/test_vpc.py::SecurityGroupTest::test_multi_attribute_ingress",
"tests/test_vpc.py::SecurityGroupTest::test_only_ports",
"tests/test_vpc.py::SecurityGroupTest::test_only_ports_and_cidr_ingress",
"tests/test_vpc.py::SecurityGroupTest::test_only_ports_ingress",
"tests/test_vpc.py::SecurityGroupTest::test_permission_cidr_kv",
"tests/test_vpc.py::SecurityGroupTest::test_permission_expansion",
"tests/test_vpc.py::SecurityGroupTest::test_port_within_range",
"tests/test_vpc.py::SecurityGroupTest::test_ports_ingress",
"tests/test_vpc.py::SecurityGroupTest::test_security_group_delete",
"tests/test_vpc.py::SecurityGroupTest::test_security_group_post_finding",
"tests/test_vpc.py::SecurityGroupTest::test_self_reference_ingress_false_positives",
"tests/test_vpc.py::SecurityGroupTest::test_self_reference_once",
"tests/test_vpc.py::SecurityGroupTest::test_stale",
"tests/test_vpc.py::SecurityGroupTest::test_unused",
"tests/test_vpc.py::SecurityGroupTest::test_used",
"tests/test_vpc.py::SecurityGroupTest::test_vpc_by_internet_gateway",
"tests/test_vpc.py::SecurityGroupTest::test_vpc_by_nat_gateway",
"tests/test_vpc.py::SecurityGroupTest::test_vpc_by_security_group",
"tests/test_vpc.py::SecurityGroupTest::test_vpc_by_subnet",
"tests/test_vpc.py::SecurityGroupTest::test_vpc_scenario_2",
"tests/test_vpc.py::EndpointTest::test_endpoint_cross_account",
"tests/test_vpc.py::EndpointTest::test_endpoint_sg",
"tests/test_vpc.py::EndpointTest::test_endpoint_subnet",
"tests/test_vpc.py::EndpointTest::test_set_permission",
"tests/test_vpc.py::NATGatewayTest::test_delete_nat_gateways",
"tests/test_vpc.py::NATGatewayTest::test_query_nat_gateways",
"tests/test_vpc.py::NATGatewayTest::test_tag_nat_gateways",
"tests/test_vpc.py::FlowLogsTest::test_vpc_create_flow_logs",
"tests/test_vpc.py::FlowLogsTest::test_vpc_delete_flow_logs",
"tests/test_vpc.py::FlowLogsTest::test_vpc_flow_log_destination",
"tests/test_vpc.py::FlowLogsTest::test_vpc_set_flow_logs_maxaggrinterval",
"tests/test_vpc.py::FlowLogsTest::test_vpc_set_flow_logs_s3",
"tests/test_vpc.py::FlowLogsTest::test_vpc_set_flow_logs_validation"
] | [] | Apache License 2.0 | 5,456 | 1,220 | [
"c7n/resources/awslambda.py",
"c7n/resources/vpc.py"
] |
pre-commit__pre-commit-1145 | 96c35185f07925ff10a04e3d161c96baf39eaa28 | 2019-09-23 18:20:35 | 4bd6529c0521955265bacdbb4d6ef7c2ceec8eba | diff --git a/pre_commit/commands/init_templatedir.py b/pre_commit/commands/init_templatedir.py
index 6e8df18..74a32f2 100644
--- a/pre_commit/commands/init_templatedir.py
+++ b/pre_commit/commands/init_templatedir.py
@@ -8,10 +8,10 @@ from pre_commit.util import cmd_output
logger = logging.getLogger('pre_commit')
-def init_templatedir(config_file, store, directory, hook_type):
+def init_templatedir(config_file, store, directory, hook_types):
install(
- config_file, store, overwrite=True, hook_type=hook_type,
- skip_on_missing_config=True, git_dir=directory,
+ config_file, store, hook_types=hook_types,
+ overwrite=True, skip_on_missing_config=True, git_dir=directory,
)
try:
_, out, _ = cmd_output('git', 'config', 'init.templateDir')
diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
index 9b2c3b8..0fda627 100644
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -67,19 +67,10 @@ def shebang():
return '#!/usr/bin/env {}'.format(py)
-def install(
- config_file, store,
- overwrite=False, hooks=False, hook_type='pre-commit',
- skip_on_missing_config=False, git_dir=None,
+def _install_hook_script(
+ config_file, hook_type,
+ overwrite=False, skip_on_missing_config=False, git_dir=None,
):
- """Install the pre-commit hooks."""
- if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():
- logger.error(
- 'Cowardly refusing to install hooks with `core.hooksPath` set.\n'
- 'hint: `git config --unset-all core.hooksPath`',
- )
- return 1
-
hook_path, legacy_path = _hook_paths(hook_type, git_dir=git_dir)
mkdirp(os.path.dirname(hook_path))
@@ -120,7 +111,27 @@ def install(
output.write_line('pre-commit installed at {}'.format(hook_path))
- # If they requested we install all of the hooks, do so.
+
+def install(
+ config_file, store, hook_types,
+ overwrite=False, hooks=False,
+ skip_on_missing_config=False, git_dir=None,
+):
+ if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():
+ logger.error(
+ 'Cowardly refusing to install hooks with `core.hooksPath` set.\n'
+ 'hint: `git config --unset-all core.hooksPath`',
+ )
+ return 1
+
+ for hook_type in hook_types:
+ _install_hook_script(
+ config_file, hook_type,
+ overwrite=overwrite,
+ skip_on_missing_config=skip_on_missing_config,
+ git_dir=git_dir,
+ )
+
if hooks:
install_hooks(config_file, store)
@@ -131,13 +142,12 @@ def install_hooks(config_file, store):
install_hook_envs(all_hooks(load_config(config_file), store), store)
-def uninstall(hook_type='pre-commit'):
- """Uninstall the pre-commit hooks."""
+def _uninstall_hook_script(hook_type): # type: (str) -> None
hook_path, legacy_path = _hook_paths(hook_type)
# If our file doesn't exist or it isn't ours, gtfo.
if not os.path.exists(hook_path) or not is_our_script(hook_path):
- return 0
+ return
os.remove(hook_path)
output.write_line('{} uninstalled'.format(hook_type))
@@ -146,4 +156,8 @@ def uninstall(hook_type='pre-commit'):
os.rename(legacy_path, hook_path)
output.write_line('Restored previous hooks to {}'.format(hook_path))
+
+def uninstall(hook_types):
+ for hook_type in hook_types:
+ _uninstall_hook_script(hook_type)
return 0
diff --git a/pre_commit/main.py b/pre_commit/main.py
index dbfbecf..8d2d630 100644
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -60,7 +60,8 @@ def _add_hook_type_option(parser):
'-t', '--hook-type', choices=(
'pre-commit', 'pre-push', 'prepare-commit-msg', 'commit-msg',
),
- default='pre-commit',
+ action='append',
+ dest='hook_types',
)
@@ -120,6 +121,11 @@ def _adjust_args_and_chdir(args):
args.files = [os.path.relpath(filename) for filename in args.files]
if args.command == 'try-repo' and os.path.exists(args.repo):
args.repo = os.path.relpath(args.repo)
+ if (
+ args.command in {'install', 'uninstall', 'init-templatedir'} and
+ not args.hook_types
+ ):
+ args.hook_types = ['pre-commit']
def main(argv=None):
@@ -299,14 +305,14 @@ def main(argv=None):
elif args.command == 'install':
return install(
args.config, store,
+ hook_types=args.hook_types,
overwrite=args.overwrite, hooks=args.install_hooks,
- hook_type=args.hook_type,
skip_on_missing_config=args.allow_missing_config,
)
elif args.command == 'init-templatedir':
return init_templatedir(
- args.config, store,
- args.directory, hook_type=args.hook_type,
+ args.config, store, args.directory,
+ hook_types=args.hook_types,
)
elif args.command == 'install-hooks':
return install_hooks(args.config, store)
@@ -319,7 +325,7 @@ def main(argv=None):
elif args.command == 'try-repo':
return try_repo(args)
elif args.command == 'uninstall':
- return uninstall(hook_type=args.hook_type)
+ return uninstall(hook_types=args.hook_types)
else:
raise NotImplementedError(
'Command {} not implemented.'.format(args.command),
        ) | Install multiple hook types with one command
Currently I have to run the following in every new repo:
```console
$ pre-commit install-hooks
$ pre-commit install --install-hooks --hook-type commit-msg
```
It would be nicer to have something like:
```console
$ pre-commit install --install-hooks --all
# OR
$ pre-commit install --install-hooks --hook-type pre-commit,commit-msg
```
pre-commit/pre-commit | diff --git a/tests/commands/init_templatedir_test.py b/tests/commands/init_templatedir_test.py
index b94de99..1bb9695 100644
--- a/tests/commands/init_templatedir_test.py
+++ b/tests/commands/init_templatedir_test.py
@@ -16,7 +16,7 @@ from testing.util import git_commit
def test_init_templatedir(tmpdir, tempdir_factory, store, cap_out):
target = str(tmpdir.join('tmpl'))
- init_templatedir(C.CONFIG_FILE, store, target, hook_type='pre-commit')
+ init_templatedir(C.CONFIG_FILE, store, target, hook_types=['pre-commit'])
lines = cap_out.get().splitlines()
assert lines[0].startswith('pre-commit installed at ')
assert lines[1] == (
@@ -45,7 +45,9 @@ def test_init_templatedir_already_set(tmpdir, tempdir_factory, store, cap_out):
tmp_git_dir = git_dir(tempdir_factory)
with cwd(tmp_git_dir):
cmd_output('git', 'config', 'init.templateDir', target)
- init_templatedir(C.CONFIG_FILE, store, target, hook_type='pre-commit')
+ init_templatedir(
+ C.CONFIG_FILE, store, target, hook_types=['pre-commit'],
+ )
lines = cap_out.get().splitlines()
assert len(lines) == 1
@@ -57,7 +59,9 @@ def test_init_templatedir_not_set(tmpdir, store, cap_out):
with envcontext([('HOME', str(tmpdir))]):
with tmpdir.join('tmpl').ensure_dir().as_cwd():
# we have not set init.templateDir so this should produce a warning
- init_templatedir(C.CONFIG_FILE, store, '.', hook_type='pre-commit')
+ init_templatedir(
+ C.CONFIG_FILE, store, '.', hook_types=['pre-commit'],
+ )
lines = cap_out.get().splitlines()
assert len(lines) == 3
@@ -73,7 +77,7 @@ def test_init_templatedir_expanduser(tmpdir, tempdir_factory, store, cap_out):
cmd_output('git', 'config', 'init.templateDir', '~/templatedir')
with mock.patch.object(os.path, 'expanduser', return_value=target):
init_templatedir(
- C.CONFIG_FILE, store, target, hook_type='pre-commit',
+ C.CONFIG_FILE, store, target, hook_types=['pre-commit'],
)
lines = cap_out.get().splitlines()
diff --git a/tests/commands/install_uninstall_test.py b/tests/commands/install_uninstall_test.py
index 913bf74..52f6e4e 100644
--- a/tests/commands/install_uninstall_test.py
+++ b/tests/commands/install_uninstall_test.py
@@ -67,10 +67,10 @@ def test_shebang_posix_on_path(tmpdir):
def test_install_pre_commit(in_git_dir, store):
- assert not install(C.CONFIG_FILE, store)
+ assert not install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
assert os.access(in_git_dir.join('.git/hooks/pre-commit').strpath, os.X_OK)
- assert not install(C.CONFIG_FILE, store, hook_type='pre-push')
+ assert not install(C.CONFIG_FILE, store, hook_types=['pre-push'])
assert os.access(in_git_dir.join('.git/hooks/pre-push').strpath, os.X_OK)
@@ -78,32 +78,41 @@ def test_install_hooks_directory_not_present(in_git_dir, store):
# Simulate some git clients which don't make .git/hooks #234
if in_git_dir.join('.git/hooks').exists(): # pragma: no cover (odd git)
in_git_dir.join('.git/hooks').remove()
- install(C.CONFIG_FILE, store)
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
assert in_git_dir.join('.git/hooks/pre-commit').exists()
+def test_install_multiple_hooks_at_once(in_git_dir, store):
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit', 'pre-push'])
+ assert in_git_dir.join('.git/hooks/pre-commit').exists()
+ assert in_git_dir.join('.git/hooks/pre-push').exists()
+ uninstall(hook_types=['pre-commit', 'pre-push'])
+ assert not in_git_dir.join('.git/hooks/pre-commit').exists()
+ assert not in_git_dir.join('.git/hooks/pre-push').exists()
+
+
def test_install_refuses_core_hookspath(in_git_dir, store):
cmd_output('git', 'config', '--local', 'core.hooksPath', 'hooks')
- assert install(C.CONFIG_FILE, store)
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
@xfailif_no_symlink # pragma: windows no cover
def test_install_hooks_dead_symlink(in_git_dir, store):
hook = in_git_dir.join('.git/hooks').ensure_dir().join('pre-commit')
os.symlink('/fake/baz', hook.strpath)
- install(C.CONFIG_FILE, store)
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
assert hook.exists()
def test_uninstall_does_not_blow_up_when_not_there(in_git_dir):
- assert uninstall() == 0
+ assert uninstall(hook_types=['pre-commit']) == 0
def test_uninstall(in_git_dir, store):
assert not in_git_dir.join('.git/hooks/pre-commit').exists()
- install(C.CONFIG_FILE, store)
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
assert in_git_dir.join('.git/hooks/pre-commit').exists()
- uninstall()
+ uninstall(hook_types=['pre-commit'])
assert not in_git_dir.join('.git/hooks/pre-commit').exists()
@@ -142,7 +151,7 @@ NORMAL_PRE_COMMIT_RUN = re.compile(
def test_install_pre_commit_and_run(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
@@ -152,9 +161,9 @@ def test_install_pre_commit_and_run(tempdir_factory, store):
def test_install_pre_commit_and_run_custom_path(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
- cmd_output('git', 'mv', C.CONFIG_FILE, 'custom-config.yaml')
+ cmd_output('git', 'mv', C.CONFIG_FILE, 'custom.yaml')
git_commit(cwd=path)
- assert install('custom-config.yaml', store) == 0
+ assert install('custom.yaml', store, hook_types=['pre-commit']) == 0
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
@@ -169,7 +178,7 @@ def test_install_in_submodule_and_run(tempdir_factory, store):
sub_pth = os.path.join(parent_path, 'sub')
with cwd(sub_pth):
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
assert NORMAL_PRE_COMMIT_RUN.match(output)
@@ -182,7 +191,7 @@ def test_install_in_worktree_and_run(tempdir_factory, store):
cmd_output('git', '-C', src_path, 'worktree', 'add', path, '-b', 'master')
with cwd(path):
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
assert NORMAL_PRE_COMMIT_RUN.match(output)
@@ -199,7 +208,7 @@ def test_commit_am(tempdir_factory, store):
with io.open('unstaged', 'w') as foo_file:
foo_file.write('Oh hai')
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
@@ -208,7 +217,7 @@ def test_commit_am(tempdir_factory, store):
def test_unicode_merge_commit_message(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
cmd_output('git', 'checkout', 'master', '-b', 'foo')
git_commit('-n', cwd=path)
cmd_output('git', 'checkout', 'master')
@@ -225,8 +234,8 @@ def test_unicode_merge_commit_message(tempdir_factory, store):
def test_install_idempotent(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
- assert install(C.CONFIG_FILE, store) == 0
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
@@ -252,7 +261,7 @@ def test_environment_not_sourced(tempdir_factory, store):
with cwd(path):
# Patch the executable to simulate rming virtualenv
with mock.patch.object(sys, 'executable', '/does-not-exist'):
- assert install(C.CONFIG_FILE, store) == 0
+ assert not install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
# Use a specific homedir to ignore --user installs
homedir = tempdir_factory.get()
@@ -290,7 +299,7 @@ FAILING_PRE_COMMIT_RUN = re.compile(
def test_failing_hooks_returns_nonzero(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'failing_hook_repo')
with cwd(path):
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
ret, output = _get_commit_output(tempdir_factory)
assert ret == 1
@@ -323,7 +332,7 @@ def test_install_existing_hooks_no_overwrite(tempdir_factory, store):
assert EXISTING_COMMIT_RUN.match(output)
# Now install pre-commit (no-overwrite)
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
# We should run both the legacy and pre-commit hooks
ret, output = _get_commit_output(tempdir_factory)
@@ -336,10 +345,10 @@ def test_legacy_overwriting_legacy_hook(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
_write_legacy_hook(path)
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
_write_legacy_hook(path)
# this previously crashed on windows. See #1010
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
def test_install_existing_hook_no_overwrite_idempotent(tempdir_factory, store):
@@ -348,8 +357,8 @@ def test_install_existing_hook_no_overwrite_idempotent(tempdir_factory, store):
_write_legacy_hook(path)
# Install twice
- assert install(C.CONFIG_FILE, store) == 0
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
# We should run both the legacy and pre-commit hooks
ret, output = _get_commit_output(tempdir_factory)
@@ -374,7 +383,7 @@ def test_failing_existing_hook_returns_1(tempdir_factory, store):
f.write('#!/usr/bin/env bash\necho "fail!"\nexit 1\n')
make_executable(f.name)
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
# We should get a failure from the legacy hook
ret, output = _get_commit_output(tempdir_factory)
@@ -385,7 +394,9 @@ def test_failing_existing_hook_returns_1(tempdir_factory, store):
def test_install_overwrite_no_existing_hooks(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
- assert install(C.CONFIG_FILE, store, overwrite=True) == 0
+ assert not install(
+ C.CONFIG_FILE, store, hook_types=['pre-commit'], overwrite=True,
+ )
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
@@ -396,7 +407,9 @@ def test_install_overwrite(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
_write_legacy_hook(path)
- assert install(C.CONFIG_FILE, store, overwrite=True) == 0
+ assert not install(
+ C.CONFIG_FILE, store, hook_types=['pre-commit'], overwrite=True,
+ )
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
@@ -409,8 +422,8 @@ def test_uninstall_restores_legacy_hooks(tempdir_factory, store):
_write_legacy_hook(path)
# Now install and uninstall pre-commit
- assert install(C.CONFIG_FILE, store) == 0
- assert uninstall() == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
+ assert uninstall(hook_types=['pre-commit']) == 0
# Make sure we installed the "old" hook correctly
ret, output = _get_commit_output(tempdir_factory, touch_file='baz')
@@ -433,7 +446,7 @@ def test_replace_old_commit_script(tempdir_factory, store):
make_executable(f.name)
# Install normally
- assert install(C.CONFIG_FILE, store) == 0
+ assert install(C.CONFIG_FILE, store, hook_types=['pre-commit']) == 0
ret, output = _get_commit_output(tempdir_factory)
assert ret == 0
@@ -445,7 +458,7 @@ def test_uninstall_doesnt_remove_not_our_hooks(in_git_dir):
pre_commit.write('#!/usr/bin/env bash\necho 1\n')
make_executable(pre_commit.strpath)
- assert uninstall() == 0
+ assert uninstall(hook_types=['pre-commit']) == 0
assert pre_commit.exists()
@@ -461,7 +474,7 @@ PRE_INSTALLED = re.compile(
def test_installs_hooks_with_hooks_True(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
- install(C.CONFIG_FILE, store, hooks=True)
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit'], hooks=True)
ret, output = _get_commit_output(
tempdir_factory, pre_commit_home=store.directory,
)
@@ -473,7 +486,7 @@ def test_installs_hooks_with_hooks_True(tempdir_factory, store):
def test_install_hooks_command(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
- install(C.CONFIG_FILE, store)
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
install_hooks(C.CONFIG_FILE, store)
ret, output = _get_commit_output(
tempdir_factory, pre_commit_home=store.directory,
@@ -486,7 +499,7 @@ def test_install_hooks_command(tempdir_factory, store):
def test_installed_from_venv(tempdir_factory, store):
path = make_consuming_repo(tempdir_factory, 'script_hooks_repo')
with cwd(path):
- install(C.CONFIG_FILE, store)
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
# No environment so pre-commit is not on the path when running!
# Should still pick up the python from when we installed
ret, output = _get_commit_output(
@@ -525,7 +538,7 @@ def test_pre_push_integration_failing(tempdir_factory, store):
path = tempdir_factory.get()
cmd_output('git', 'clone', upstream, path)
with cwd(path):
- install(C.CONFIG_FILE, store, hook_type='pre-push')
+ install(C.CONFIG_FILE, store, hook_types=['pre-push'])
# commit succeeds because pre-commit is only installed for pre-push
assert _get_commit_output(tempdir_factory)[0] == 0
assert _get_commit_output(tempdir_factory, touch_file='zzz')[0] == 0
@@ -543,7 +556,7 @@ def test_pre_push_integration_accepted(tempdir_factory, store):
path = tempdir_factory.get()
cmd_output('git', 'clone', upstream, path)
with cwd(path):
- install(C.CONFIG_FILE, store, hook_type='pre-push')
+ install(C.CONFIG_FILE, store, hook_types=['pre-push'])
assert _get_commit_output(tempdir_factory)[0] == 0
retc, output = _get_push_output(tempdir_factory)
@@ -563,7 +576,7 @@ def test_pre_push_force_push_without_fetch(tempdir_factory, store):
assert _get_push_output(tempdir_factory)[0] == 0
with cwd(path2):
- install(C.CONFIG_FILE, store, hook_type='pre-push')
+ install(C.CONFIG_FILE, store, hook_types=['pre-push'])
assert _get_commit_output(tempdir_factory, msg='force!')[0] == 0
retc, output = _get_push_output(tempdir_factory, opts=('--force',))
@@ -578,7 +591,7 @@ def test_pre_push_new_upstream(tempdir_factory, store):
path = tempdir_factory.get()
cmd_output('git', 'clone', upstream, path)
with cwd(path):
- install(C.CONFIG_FILE, store, hook_type='pre-push')
+ install(C.CONFIG_FILE, store, hook_types=['pre-push'])
assert _get_commit_output(tempdir_factory)[0] == 0
cmd_output('git', 'remote', 'rename', 'origin', 'upstream')
@@ -594,7 +607,7 @@ def test_pre_push_integration_empty_push(tempdir_factory, store):
path = tempdir_factory.get()
cmd_output('git', 'clone', upstream, path)
with cwd(path):
- install(C.CONFIG_FILE, store, hook_type='pre-push')
+ install(C.CONFIG_FILE, store, hook_types=['pre-push'])
_get_push_output(tempdir_factory)
retc, output = _get_push_output(tempdir_factory)
assert output == 'Everything up-to-date\n'
@@ -617,7 +630,7 @@ def test_pre_push_legacy(tempdir_factory, store):
)
make_executable(f.name)
- install(C.CONFIG_FILE, store, hook_type='pre-push')
+ install(C.CONFIG_FILE, store, hook_types=['pre-push'])
assert _get_commit_output(tempdir_factory)[0] == 0
retc, output = _get_push_output(tempdir_factory)
@@ -631,7 +644,7 @@ def test_pre_push_legacy(tempdir_factory, store):
def test_commit_msg_integration_failing(
commit_msg_repo, tempdir_factory, store,
):
- install(C.CONFIG_FILE, store, hook_type='commit-msg')
+ install(C.CONFIG_FILE, store, hook_types=['commit-msg'])
retc, out = _get_commit_output(tempdir_factory)
assert retc == 1
assert out.startswith('Must have "Signed off by:"...')
@@ -641,7 +654,7 @@ def test_commit_msg_integration_failing(
def test_commit_msg_integration_passing(
commit_msg_repo, tempdir_factory, store,
):
- install(C.CONFIG_FILE, store, hook_type='commit-msg')
+ install(C.CONFIG_FILE, store, hook_types=['commit-msg'])
msg = 'Hi\nSigned off by: me, lol'
retc, out = _get_commit_output(tempdir_factory, msg=msg)
assert retc == 0
@@ -662,7 +675,7 @@ def test_commit_msg_legacy(commit_msg_repo, tempdir_factory, store):
)
make_executable(hook_path)
- install(C.CONFIG_FILE, store, hook_type='commit-msg')
+ install(C.CONFIG_FILE, store, hook_types=['commit-msg'])
msg = 'Hi\nSigned off by: asottile'
retc, out = _get_commit_output(tempdir_factory, msg=msg)
@@ -675,7 +688,7 @@ def test_commit_msg_legacy(commit_msg_repo, tempdir_factory, store):
def test_prepare_commit_msg_integration_failing(
failing_prepare_commit_msg_repo, tempdir_factory, store,
):
- install(C.CONFIG_FILE, store, hook_type='prepare-commit-msg')
+ install(C.CONFIG_FILE, store, hook_types=['prepare-commit-msg'])
retc, out = _get_commit_output(tempdir_factory)
assert retc == 1
assert out.startswith('Add "Signed off by:"...')
@@ -685,7 +698,7 @@ def test_prepare_commit_msg_integration_failing(
def test_prepare_commit_msg_integration_passing(
prepare_commit_msg_repo, tempdir_factory, store,
):
- install(C.CONFIG_FILE, store, hook_type='prepare-commit-msg')
+ install(C.CONFIG_FILE, store, hook_types=['prepare-commit-msg'])
msg = 'Hi'
retc, out = _get_commit_output(tempdir_factory, msg=msg)
assert retc == 0
@@ -715,7 +728,7 @@ def test_prepare_commit_msg_legacy(
)
make_executable(hook_path)
- install(C.CONFIG_FILE, store, hook_type='prepare-commit-msg')
+ install(C.CONFIG_FILE, store, hook_types=['prepare-commit-msg'])
msg = 'Hi'
retc, out = _get_commit_output(tempdir_factory, msg=msg)
@@ -735,7 +748,8 @@ def test_install_disallow_missing_config(tempdir_factory, store):
with cwd(path):
remove_config_from_repo(path)
ret = install(
- C.CONFIG_FILE, store, overwrite=True, skip_on_missing_config=False,
+ C.CONFIG_FILE, store, hook_types=['pre-commit'],
+ overwrite=True, skip_on_missing_config=False,
)
assert ret == 0
@@ -748,7 +762,8 @@ def test_install_allow_missing_config(tempdir_factory, store):
with cwd(path):
remove_config_from_repo(path)
ret = install(
- C.CONFIG_FILE, store, overwrite=True, skip_on_missing_config=True,
+ C.CONFIG_FILE, store, hook_types=['pre-commit'],
+ overwrite=True, skip_on_missing_config=True,
)
assert ret == 0
@@ -766,7 +781,8 @@ def test_install_temporarily_allow_mising_config(tempdir_factory, store):
with cwd(path):
remove_config_from_repo(path)
ret = install(
- C.CONFIG_FILE, store, overwrite=True, skip_on_missing_config=False,
+ C.CONFIG_FILE, store, hook_types=['pre-commit'],
+ overwrite=True, skip_on_missing_config=False,
)
assert ret == 0
diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
index 49ce008..f6d5c93 100644
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -525,7 +525,7 @@ def test_stdout_write_bug_py26(repo_with_failing_hook, store, tempdir_factory):
config['repos'][0]['hooks'][0]['args'] = ['☃']
stage_a_file()
- install(C.CONFIG_FILE, store)
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
# Have to use subprocess because pytest monkeypatches sys.stdout
_, stdout, _ = git_commit(
@@ -555,7 +555,7 @@ def test_lots_of_files(store, tempdir_factory):
open(filename, 'w').close()
cmd_output('git', 'add', '.')
- install(C.CONFIG_FILE, store)
+ install(C.CONFIG_FILE, store, hook_types=['pre-commit'])
git_commit(
fn=cmd_output_mocked_pre_commit_home,
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 3
} | 1.18 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": null,
"pre_install": null,
"python": "3.9",
"reqs_path": [
"requirements-dev.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aspy.yaml==1.3.0
cfgv==3.4.0
coverage==7.8.0
distlib==0.3.9
exceptiongroup==1.2.2
filelock==3.18.0
identify==2.6.9
importlib_metadata==8.6.1
iniconfig==2.1.0
mock==5.2.0
nodeenv==1.9.1
packaging==24.2
platformdirs==4.3.7
pluggy==1.5.0
-e git+https://github.com/pre-commit/pre-commit.git@96c35185f07925ff10a04e3d161c96baf39eaa28#egg=pre_commit
pytest==8.3.5
pytest-env==1.1.5
PyYAML==6.0.2
six==1.17.0
toml==0.10.2
tomli==2.2.1
virtualenv==20.29.3
zipp==3.21.0
| name: pre-commit
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- aspy-yaml==1.3.0
- cfgv==3.4.0
- coverage==7.8.0
- distlib==0.3.9
- exceptiongroup==1.2.2
- filelock==3.18.0
- identify==2.6.9
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- mock==5.2.0
- nodeenv==1.9.1
- packaging==24.2
- platformdirs==4.3.7
- pluggy==1.5.0
- pytest==8.3.5
- pytest-env==1.1.5
- pyyaml==6.0.2
- six==1.17.0
- toml==0.10.2
- tomli==2.2.1
- virtualenv==20.29.3
- zipp==3.21.0
prefix: /opt/conda/envs/pre-commit
| [
"tests/commands/init_templatedir_test.py::test_init_templatedir",
"tests/commands/init_templatedir_test.py::test_init_templatedir_already_set",
"tests/commands/init_templatedir_test.py::test_init_templatedir_not_set",
"tests/commands/init_templatedir_test.py::test_init_templatedir_expanduser",
"tests/commands/install_uninstall_test.py::test_install_pre_commit",
"tests/commands/install_uninstall_test.py::test_install_hooks_directory_not_present",
"tests/commands/install_uninstall_test.py::test_install_multiple_hooks_at_once",
"tests/commands/install_uninstall_test.py::test_install_refuses_core_hookspath",
"tests/commands/install_uninstall_test.py::test_install_hooks_dead_symlink",
"tests/commands/install_uninstall_test.py::test_uninstall_does_not_blow_up_when_not_there",
"tests/commands/install_uninstall_test.py::test_uninstall",
"tests/commands/install_uninstall_test.py::test_install_pre_commit_and_run",
"tests/commands/install_uninstall_test.py::test_install_pre_commit_and_run_custom_path",
"tests/commands/install_uninstall_test.py::test_install_in_worktree_and_run",
"tests/commands/install_uninstall_test.py::test_commit_am",
"tests/commands/install_uninstall_test.py::test_unicode_merge_commit_message",
"tests/commands/install_uninstall_test.py::test_install_idempotent",
"tests/commands/install_uninstall_test.py::test_failing_hooks_returns_nonzero",
"tests/commands/install_uninstall_test.py::test_install_existing_hooks_no_overwrite",
"tests/commands/install_uninstall_test.py::test_legacy_overwriting_legacy_hook",
"tests/commands/install_uninstall_test.py::test_install_existing_hook_no_overwrite_idempotent",
"tests/commands/install_uninstall_test.py::test_failing_existing_hook_returns_1",
"tests/commands/install_uninstall_test.py::test_install_overwrite_no_existing_hooks",
"tests/commands/install_uninstall_test.py::test_install_overwrite",
"tests/commands/install_uninstall_test.py::test_uninstall_restores_legacy_hooks",
"tests/commands/install_uninstall_test.py::test_replace_old_commit_script",
"tests/commands/install_uninstall_test.py::test_uninstall_doesnt_remove_not_our_hooks",
"tests/commands/install_uninstall_test.py::test_installs_hooks_with_hooks_True",
"tests/commands/install_uninstall_test.py::test_install_hooks_command",
"tests/commands/install_uninstall_test.py::test_installed_from_venv",
"tests/commands/install_uninstall_test.py::test_pre_push_integration_failing",
"tests/commands/install_uninstall_test.py::test_pre_push_integration_accepted",
"tests/commands/install_uninstall_test.py::test_pre_push_force_push_without_fetch",
"tests/commands/install_uninstall_test.py::test_pre_push_new_upstream",
"tests/commands/install_uninstall_test.py::test_pre_push_integration_empty_push",
"tests/commands/install_uninstall_test.py::test_pre_push_legacy",
"tests/commands/install_uninstall_test.py::test_commit_msg_integration_failing",
"tests/commands/install_uninstall_test.py::test_commit_msg_integration_passing",
"tests/commands/install_uninstall_test.py::test_commit_msg_legacy",
"tests/commands/install_uninstall_test.py::test_prepare_commit_msg_integration_failing",
"tests/commands/install_uninstall_test.py::test_prepare_commit_msg_integration_passing",
"tests/commands/install_uninstall_test.py::test_prepare_commit_msg_legacy",
"tests/commands/install_uninstall_test.py::test_install_disallow_missing_config",
"tests/commands/install_uninstall_test.py::test_install_allow_missing_config",
"tests/commands/install_uninstall_test.py::test_install_temporarily_allow_mising_config",
"tests/commands/run_test.py::test_stdout_write_bug_py26",
"tests/commands/run_test.py::test_lots_of_files"
] | [
"tests/commands/install_uninstall_test.py::test_install_in_submodule_and_run",
"tests/commands/install_uninstall_test.py::test_environment_not_sourced"
] | [
"tests/commands/install_uninstall_test.py::test_is_not_script",
"tests/commands/install_uninstall_test.py::test_is_script",
"tests/commands/install_uninstall_test.py::test_is_previous_pre_commit",
"tests/commands/install_uninstall_test.py::test_shebang_windows",
"tests/commands/install_uninstall_test.py::test_shebang_posix_not_on_path",
"tests/commands/install_uninstall_test.py::test_shebang_posix_on_path",
"tests/commands/run_test.py::test_run[options3-outputs3-1-True]",
"tests/commands/run_test.py::test_run[options4-outputs4-1-True]",
"tests/commands/run_test.py::test_run[options7-outputs7-0-False]",
"tests/commands/run_test.py::test_origin_source_error_msg_error[master-]",
"tests/commands/run_test.py::test_origin_source_error_msg_error[-master]",
"tests/commands/run_test.py::test_origin_source_both_ok",
"tests/commands/run_test.py::test_has_unmerged_paths",
"tests/commands/run_test.py::test_merge_conflict",
"tests/commands/run_test.py::test_merge_conflict_modified",
"tests/commands/run_test.py::test_compute_cols[hooks0-True-80]",
"tests/commands/run_test.py::test_compute_cols[hooks1-False-81]",
"tests/commands/run_test.py::test_compute_cols[hooks2-True-85]",
"tests/commands/run_test.py::test_compute_cols[hooks3-False-82]",
"tests/commands/run_test.py::test_get_skips[environ0-expected_output0]",
"tests/commands/run_test.py::test_get_skips[environ1-expected_output1]",
"tests/commands/run_test.py::test_get_skips[environ2-expected_output2]",
"tests/commands/run_test.py::test_get_skips[environ3-expected_output3]",
"tests/commands/run_test.py::test_get_skips[environ4-expected_output4]",
"tests/commands/run_test.py::test_get_skips[environ5-expected_output5]",
"tests/commands/run_test.py::test_get_skips[environ6-expected_output6]",
"tests/commands/run_test.py::test_skip_hook",
"tests/commands/run_test.py::test_skip_aliased_hook",
"tests/commands/run_test.py::test_hook_id_not_in_non_verbose_output",
"tests/commands/run_test.py::test_hook_id_in_verbose_output",
"tests/commands/run_test.py::test_non_ascii_hook_id",
"tests/commands/run_test.py::test_stages",
"tests/commands/run_test.py::test_pcre_deprecation_warning",
"tests/commands/run_test.py::test_meta_hook_passes",
"tests/commands/run_test.py::test_error_with_unstaged_config",
"tests/commands/run_test.py::test_files_running_subdir",
"tests/commands/run_test.py::test_classifier_removes_dne",
"tests/commands/run_test.py::test_include_exclude_base_case",
"tests/commands/run_test.py::test_matches_broken_symlink",
"tests/commands/run_test.py::test_include_exclude_total_match",
"tests/commands/run_test.py::test_include_exclude_does_search_instead_of_match",
"tests/commands/run_test.py::test_include_exclude_exclude_removes_files",
"tests/commands/run_test.py::test_args_hook_only"
] | [] | MIT License | 5,458 | 1,473 | [
"pre_commit/commands/init_templatedir.py",
"pre_commit/commands/install_uninstall.py",
"pre_commit/main.py"
] |
|
pddg__uroboros-37 | 7532a7f3e3cdfe11764c04dc03361a4ee51e1b56 | 2019-09-25 04:12:21 | 73af69c558b7da184f561ded73d05d14095d44ce | diff --git a/uroboros/command.py b/uroboros/command.py
index b32f61c..2188031 100644
--- a/uroboros/command.py
+++ b/uroboros/command.py
@@ -226,7 +226,7 @@ class Command(metaclass=abc.ABCMeta):
Get all `Option` instance of this `Command`.
:return: List of Option instance
"""
- return self.options
+ return [opt() if type(opt) == type else opt for opt in self.options]
def print_help(self):
"""
| Operations on `options` in an inheriting (sub)class also affect the base class it inherits from.
Sample code is as follows.
```python
class Opt(uroboros.Option):
    def build_option(self, parser):
        parser.add_argument("--test", type=str)
        return parser


class RootCommand(uroboros.Command):
    name = 'root'

    def run(self, args):
        self.print_help()
        return 0


class BaseCommand(uroboros.Command):
    options = []

    def run(self, args):
        print(args.test)
        return 0


class FirstCommand(BaseCommand):
    name = 'first'

    def get_options(self):
        options = super(FirstCommand, self).get_options()
        options.append(Opt())
        return options


class SecondCommand(BaseCommand):
    name = 'second'


root = RootCommand()
root.add_command(
    FirstCommand(),
    SecondCommand(),
)
root.execute()
```
## Expected
```sh
$ pipenv run python sample.py -h
usage: root [-h] {first,second} ...
optional arguments:
-h, --help show this help message and exit
Sub commands:
{first,second}
first
second
$ pipenv run python sample.py first -h
usage: root first [-h] [--test TEST]
optional arguments:
-h, --help show this help message and exit
--test TEST
$ pipenv run python sample.py second -h
usage: root second [-h]
optional arguments:
-h, --help show this help message and exit
```
## Actual
```
$ pipenv run python sample.py -h
Traceback (most recent call last):
  File "/home/pudding/.local/share/virtualenvs/swat-analyzer-EWaBsfe_/lib/python3.7/site-packages/uroboros/command.py", line 57, in execute
    self._check_initialized()
  File "/home/pudding/.local/share/virtualenvs/swat-analyzer-EWaBsfe_/lib/python3.7/site-packages/uroboros/command.py", line 322, in _check_initialized
    raise errors.CommandNotRegisteredError(self.name)
uroboros.errors.CommandNotRegisteredError: Command 'root' has not been registered yet.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 36, in <module>
    root.execute()
  File "/home/pudding/.local/share/virtualenvs/swat-analyzer-EWaBsfe_/lib/python3.7/site-packages/uroboros/command.py", line 59, in execute
    self.initialize()
  File "/home/pudding/.local/share/virtualenvs/swat-analyzer-EWaBsfe_/lib/python3.7/site-packages/uroboros/command.py", line 110, in initialize
    self.initialize_sub_parsers(self._parser)
  File "/home/pudding/.local/share/virtualenvs/swat-analyzer-EWaBsfe_/lib/python3.7/site-packages/uroboros/command.py", line 124, in initialize_sub_parsers
    parents=[o.get_parser() for o in cmd.get_options()],
  File "/home/pudding/.local/share/virtualenvs/swat-analyzer-EWaBsfe_/lib/python3.7/site-packages/uroboros/command.py", line 124, in <listcomp>
    parents=[o.get_parser() for o in cmd.get_options()],
  File "/home/s-kokuryo/.local/share/virtualenvs/swat-analyzer-EWaBsfe_/lib/python3.7/site-packages/uroboros/option.py", line 15, in get_parser
    return self.build_option(self.parser)
  File "test.py", line 5, in build_option
    parser.add_argument("--test", type=str)
  File "/home/pudding/.anyenv/envs/pyenv/versions/3.7.4/lib/python3.7/argparse.py", line 1367, in add_argument
    return self._add_action(action)
  File "/home/pudding/.anyenv/envs/pyenv/versions/3.7.4/lib/python3.7/argparse.py", line 1730, in _add_action
    self._optionals._add_action(action)
  File "/home/pudding/.anyenv/envs/pyenv/versions/3.7.4/lib/python3.7/argparse.py", line 1571, in _add_action
    action = super(_ArgumentGroup, self)._add_action(action)
  File "/home/pudding/.anyenv/envs/pyenv/versions/3.7.4/lib/python3.7/argparse.py", line 1381, in _add_action
    self._check_conflict(action)
  File "/home/pudding/.anyenv/envs/pyenv/versions/3.7.4/lib/python3.7/argparse.py", line 1520, in _check_conflict
    conflict_handler(action, confl_optionals)
  File "/home/pudding/.anyenv/envs/pyenv/versions/3.7.4/lib/python3.7/argparse.py", line 1529, in _handle_conflict_error
    raise ArgumentError(action, message % conflict_string)
argparse.ArgumentError: argument --test: conflicting option string: --test
```
## Why this happened?
Python list objects are mutable, and `uroboros.Command`'s `get_options()` returns the `options` attribute itself, i.e. a reference to the same list object. Any in-place operation on that list in an inheriting class therefore also affects the base class it inherits from.
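The aliasing can be reproduced without uroboros at all. In this hedged sketch (`Base` and `Child` are stand-in names, not uroboros classes), `get_options()` returns the class attribute directly, so a subclass's `append` leaks into the base class:

```python
class Base:
    options = []  # class-level attribute, shared via inheritance

    def get_options(self):
        return self.options  # returns the shared list object itself


class Child(Base):
    def get_options(self):
        opts = super().get_options()
        opts.append("extra")  # mutates Base.options in place
        return opts


Child().get_options()
leaked = Base().get_options()  # the base class now sees "extra"
```

Because `Child` never defines its own `options`, the attribute lookup resolves to `Base.options`, and the `append` is visible to every class that shares it.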
## How to fix
Change the implementation of `uroboros.Command.get_options` like [permission mechanism of django-restframework's view class](https://github.com/encode/django-rest-framework/blob/3.10.3/rest_framework/views.py#L266).
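A minimal sketch of that style of fix, using stand-in classes (and mirroring the `type(opt) == type` check from the eventual patch): build a new list on every call and instantiate any entries that are still classes, so callers can append freely without touching the class attribute.

```python
class Option:
    pass


class Base:
    options = []

    def get_options(self):
        # New list each call; instantiate entries that are still classes.
        return [opt() if type(opt) == type else opt for opt in self.options]


class Child(Base):
    options = [Option]  # a class, not an instance (backward compatibility)

    def get_options(self):
        opts = super().get_options()
        opts.append("extra")  # safe: only mutates the freshly built list
        return opts


child_opts = Child().get_options()
base_opts = Base().get_options()  # still empty: no shared-state leak
```

Because a fresh list is returned, the class-level `options` attributes are never mutated by callers.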
`uroboros.Command.options` accepts both the `uroboros.Option` class and its instances for backward compatibility. Check the type of each entry in `uroboros.Command.options`, instantiate it if it is still a class, and return a newly built list. | pddg/uroboros | diff --git a/tests/test_command.py b/tests/test_command.py
index a7eb5f0..05a703b 100644
--- a/tests/test_command.py
+++ b/tests/test_command.py
@@ -13,6 +13,18 @@ from .base import RootCommand, SecondCommand, ThirdCommand
def commands():
return RootCommand(), SecondCommand(), ThirdCommand()
+class Opt1(Option):
+ def build_option(self, parser):
+ return parser
+
+class Opt2(Option):
+ def build_option(self, parser):
+ return parser
+
+class Opt3(Option):
+ def build_option(self, parser):
+ return parser
+
def get_sub_commands(cmd_set):
if len(cmd_set) == 0:
@@ -338,3 +350,34 @@ class TestCommand(object):
cmd_parser = cmd.create_default_parser()
args = cmd_parser.parse_args(argv)
assert args.test == 'test'
+
+ @pytest.mark.parametrize(
+ 'option_objs', [
+ [Opt1(), Opt2(), Opt3()],
+ ]
+ )
+ def test_get_options(self, option_objs):
+ additional_opt = Opt1()
+ class SourceCommand(RootCommand):
+ options = option_objs
+ class Cmd(SourceCommand):
+ def get_options(self):
+ opts =super(Cmd, self).get_options()
+ opts.append(additional_opt)
+ return opts
+ root = SourceCommand()
+ source_opts = root.get_options()
+ cmd = Cmd()
+ actual_options = cmd.get_options()
+ expected_options = option_objs + [additional_opt]
+ assert len(actual_options) == len(expected_options)
+ # All options are instantiated
+ types = map(type, actual_options)
+ bools = map(lambda x: x != type, types)
+ assert all(bools)
+ # All class is correct
+ actual_classes = map(lambda x: type(x), actual_options)
+ expected_classes = map(lambda x: x if type(x) == type else type(x), expected_options)
+ assert list(actual_classes) == list(expected_classes)
+ # Inheritance source class is not modified
+ assert RootCommand().get_options() == []
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | 0.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "Pipfile",
"pip_packages": [
"pytest",
"flake8",
"tox"
],
"pre_install": [
"pip install -U pip setuptools",
"pip install tox-travis"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
distlib==0.3.9
exceptiongroup==1.2.2
filelock==3.12.2
flake8==5.0.4
importlib-metadata==4.2.0
iniconfig==2.0.0
mccabe==0.7.0
packaging==24.0
pipfile==0.0.2
platformdirs==2.6.2
pluggy==1.2.0
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
pytest==7.4.4
six==1.17.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==2.0.1
tox==3.28.0
tox-travis==0.13
typing_extensions==4.7.1
-e git+https://github.com/pddg/uroboros.git@7532a7f3e3cdfe11764c04dc03361a4ee51e1b56#egg=uroboros
virtualenv==20.16.2
zipp==3.15.0
| name: uroboros
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pipfile=0.0.2=py_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- distlib==0.3.9
- exceptiongroup==1.2.2
- filelock==3.12.2
- flake8==5.0.4
- importlib-metadata==4.2.0
- iniconfig==2.0.0
- mccabe==0.7.0
- packaging==24.0
- pip==24.0
- platformdirs==2.6.2
- pluggy==1.2.0
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pytest==7.4.4
- setuptools==68.0.0
- six==1.17.0
- tomli==2.0.1
- tox==3.28.0
- tox-travis==0.13
- typing-extensions==4.7.1
- virtualenv==20.16.2
- zipp==3.15.0
prefix: /opt/conda/envs/uroboros
| [
"tests/test_command.py::TestCommand::test_get_options[option_objs0]"
] | [] | [
"tests/test_command.py::TestCommand::test_build_option[command0]",
"tests/test_command.py::TestCommand::test_build_option[command1]",
"tests/test_command.py::TestCommand::test_build_option[command2]",
"tests/test_command.py::TestCommand::test_validate[command0]",
"tests/test_command.py::TestCommand::test_validate[command1]",
"tests/test_command.py::TestCommand::test_validate[command2]",
"tests/test_command.py::TestCommand::test_increment_nest[command_set0]",
"tests/test_command.py::TestCommand::test_increment_nest[command_set1]",
"tests/test_command.py::TestCommand::test_increment_nest[command_set2]",
"tests/test_command.py::TestCommand::test_register_parent[command_set0]",
"tests/test_command.py::TestCommand::test_register_parent[command_set1]",
"tests/test_command.py::TestCommand::test_register_parent[command_set2]",
"tests/test_command.py::TestCommand::test_add_command[command_set0]",
"tests/test_command.py::TestCommand::test_add_command[command_set1]",
"tests/test_command.py::TestCommand::test_add_command[command_set2]",
"tests/test_command.py::TestCommand::test_multiple_add_command[root_cmd0-add_commands0]",
"tests/test_command.py::TestCommand::test_multiple_add_command[root_cmd1-add_commands1]",
"tests/test_command.py::TestCommand::test_add_others",
"tests/test_command.py::TestCommand::test_get_all_sub_commands[command_set0]",
"tests/test_command.py::TestCommand::test_get_all_sub_commands[command_set1]",
"tests/test_command.py::TestCommand::test_get_all_sub_commands[command_set2]",
"tests/test_command.py::TestCommand::test_get_all_sub_commands[command_set3]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set0-argv0]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set1-argv1]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set2-argv2]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set3-argv3]",
"tests/test_command.py::TestCommand::test_get_sub_commands[command_set4-argv4]",
"tests/test_command.py::TestCommand::test_add_duplicate_command[command_set0]",
"tests/test_command.py::TestCommand::test_add_duplicate_command[command_set1]",
"tests/test_command.py::TestCommand::test_execute[command_set0-root",
"tests/test_command.py::TestCommand::test_execute[command_set1-root",
"tests/test_command.py::TestCommand::test_execute[command_set2-root-False]",
"tests/test_command.py::TestCommand::test_execute[command_set3-root",
"tests/test_command.py::TestCommand::test_execute[command_set4-root",
"tests/test_command.py::TestCommand::test_execute[command_set5-root-False]",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set0-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set1-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set2-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set3-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set4-root",
"tests/test_command.py::TestCommand::test_violates_validation_argv[command_set5-root",
"tests/test_command.py::TestCommand::test_before_validate",
"tests/test_command.py::TestCommand::test_pre_hook[commands0]",
"tests/test_command.py::TestCommand::test_pre_hook[commands1]",
"tests/test_command.py::TestCommand::test_pre_hook[commands2]",
"tests/test_command.py::TestCommand::test_after_validate",
"tests/test_command.py::TestCommand::test_pre_hook_validated[commands0]",
"tests/test_command.py::TestCommand::test_pre_hook_validated[commands1]",
"tests/test_command.py::TestCommand::test_pre_hook_validated[commands2]",
"tests/test_command.py::TestCommand::test_create_default_parser"
] | [] | Apache License 2.0 | 5,464 | 143 | [
"uroboros/command.py"
] |
|
ClimateImpactLab__impactlab-tools-450 | a56fa03cfe65515c324bfebda79e7906e20ac87d | 2019-09-25 14:21:54 | 1031fb641ab8e81b2eec457d6950671193742dbd | diff --git a/impactlab_tools/utils/configdict.py b/impactlab_tools/utils/configdict.py
index cb9e5ec..135dfde 100644
--- a/impactlab_tools/utils/configdict.py
+++ b/impactlab_tools/utils/configdict.py
@@ -10,15 +10,19 @@ except ImportError:
import collections as collections_abc
-def gather_configtree(d):
+def gather_configtree(d, parse_lists=False):
"""Chains nested-dicts into a connected tree of ConfigDict(s)
Parameters
----------
- d : dict
+ d : dict or MutableMapping
Cast to :py:class:`ConfigDict`. Nested dicts within are also
recursively cast and assigned parents, reflecting their nested
structure.
+ parse_lists : bool, optional
+ If `d` or its children contain a list of dicts, do you want to convert
+ these listed dicts to ConfDicts and assign them parents. This is
+ slow. Note this only parses lists, strictly, not all Sequences.
Returns
-------
@@ -47,8 +51,17 @@ def gather_configtree(d):
for k, v in out.data.items():
# Replace nested maps with new ConfigDicts
if isinstance(v, collections_abc.MutableMapping):
- out.data[k] = gather_configtree(v)
+ out.data[k] = gather_configtree(v, parse_lists=parse_lists)
out.data[k].parent = out
+
+ # If list has mappings, replace mappings with new ConfigDicts
+ if parse_lists and isinstance(v, list):
+ for idx, item in enumerate(v):
+ if isinstance(item, collections_abc.MutableMapping):
+ cd = gather_configtree(item, parse_lists=parse_lists)
+ cd.parent = out
+ out.data[k][idx] = cd
+
return out
@@ -138,7 +151,7 @@ class ConfigDict(UserDict, object):
if isinstance(key, str):
return key.lower().replace('_', '-')
- def accessed_all_keys(self, search='local'):
+ def accessed_all_keys(self, search='local', parse_lists=False):
"""Were all the keys used in the config tree?
Parameters
@@ -156,6 +169,11 @@ class ConfigDict(UserDict, object):
``"children"``
Recursively check keys in children, moving down the tree, after
checking local keys.
+ parse_lists : bool, optional
+ If True when `search` is "children", check if self or its children
+ contain a list and check the list for ConfDicts and whether they
+ used their keys. This is slow. Note this only parses lists,
+ strictly, not all Sequences.
Returns
-------
@@ -215,17 +233,33 @@ class ConfigDict(UserDict, object):
# Recursively check parents keys, if any haven't been used,
# immediately return False.
if self.parent is not None:
- parent_used = self.parent.accessed_all_keys(search=search)
+ parent_used = self.parent.accessed_all_keys(
+ search=search,
+ parse_lists=parse_lists,
+ )
if parent_used is False:
return False
+
elif search == 'children':
# Recursively check children keys, if any haven't been used,
# immediately return False.
for k, v in self.data.items():
- # Assuming its faster to ask for forgiveness than to check
- # with `isinstance()` or `hasattr()q...
+ if parse_lists and isinstance(v, list):
+ for item in v:
+ try:
+ child_used = item.accessed_all_keys(
+ search=search,
+ parse_lists=parse_lists,
+ )
+ if child_used is False:
+ return False
+ except AttributeError:
+ continue
+ continue
+
try:
- child_used = v.accessed_all_keys(search=search)
+ child_used = v.accessed_all_keys(search=search,
+ parse_lists=parse_lists)
if child_used is False:
return False
except AttributeError:
| `gather_configtree()` needs option to assign parents to nested lists of dicts
`gather_configtree()` will skip over any nested lists of dictionaries. @jrising suggested allowing `gather_configtree()` to convert these listed dicts to `ConfDict`s and assign them parents, as is done for any other nested dictionary.
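A rough sketch of that recursion (a hypothetical `gather` helper using plain dicts and a `_parent` key, not the package's actual `ConfigDict`): when a value is a list, walk it and convert any dicts found inside, assigning them the enclosing node as parent so traversal is not blocked.

```python
def gather(d, parent=None, parse_lists=False):
    out = dict(d)            # shallow copy; stand-in for ConfigDict
    out["_parent"] = parent  # stand-in for ConfigDict.parent
    for key, value in d.items():
        if isinstance(value, dict):
            out[key] = gather(value, parent=out, parse_lists=parse_lists)
        elif parse_lists and isinstance(value, list):
            # Convert dicts found inside lists too, keeping other items.
            out[key] = [
                gather(item, parent=out, parse_lists=True)
                if isinstance(item, dict) else item
                for item in value
            ]
    return out


tree = gather({"a": 1, "c": [2, {"x": "foo"}]}, parse_lists=True)
```

With `parse_lists=True`, the dict hiding inside `tree["c"]` gets a parent link back to the root, so upward and downward walks can pass through the list.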
The key here is that using the `accessed_all_keys()` method with `search='parents'` or `search='children'` from the top or bottom of the tree should not be "blocked" by a list of ConfDicts. This was a cause for trouble in the `impact-calculations` package. Be sure to test against this use case. | ClimateImpactLab/impactlab-tools | diff --git a/tests/utils/test_configdict.py b/tests/utils/test_configdict.py
index 1d299f0..c26edb8 100644
--- a/tests/utils/test_configdict.py
+++ b/tests/utils/test_configdict.py
@@ -8,6 +8,18 @@ def simple_nested_tree():
return {'a': 1, 'b': {'a': 2}, 'c': 3, 'd-4': 4, 'e_5': 5, 'F': 6}
+def test_gather_configtree_nested_lists():
+ """Test gather_configtree() "parse_lists" option"""
+ d = {'a': 1, 'c': [2, {'x': 'foo', 'y': {'z': 'bar'}}]}
+ conf = gather_configtree(d, parse_lists=True)
+ assert conf['c'][1]['y']['a'] == conf['a']
+
+ assert conf.accessed_all_keys(search='children', parse_lists=True) is False
+ conf['c'][1]['x']
+ conf['c'][1]['y']['z']
+ assert conf.accessed_all_keys(search='children', parse_lists=True) is True
+
+
def test_configdict_climbs_tree(simple_nested_tree):
conf = gather_configtree(simple_nested_tree)
assert conf['b']['c'] == 3
@@ -105,24 +117,6 @@ def test_configdict_key_access_stack_nested(simple_nested_tree):
assert top_keys == ['b', 'f']
-def test_configdict_accessed_all_keys_local(simple_nested_tree):
- root_conf = gather_configtree(simple_nested_tree)
- child_conf = root_conf['b']
-
- root_conf['a']
- root_conf['c']
- root_conf['d-4']
- root_conf['e-5']
-
- assert root_conf.accessed_all_keys(search='local') is False
-
- root_conf['f']
-
- assert root_conf.accessed_all_keys(search='local') is True
-
- assert child_conf.accessed_all_keys(search='local') is False
-
-
def test_configdict_accessed_all_keys_local(simple_nested_tree):
kwargs = {'search': 'local'}
root_conf = gather_configtree(simple_nested_tree)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"numpy>=1.7",
"pandas>=0.15",
"netCDF4>=1.1",
"xarray>=0.8",
"pytest",
"pytest-cov",
"pytest-runner",
"coverage",
"flake8"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.16
babel==2.17.0
cachetools==5.5.2
certifi==2025.1.31
cftime==1.6.4.post1
chardet==5.2.0
charset-normalizer==3.4.1
colorama==0.4.6
coverage==7.8.0
coveralls==4.0.1
distlib==0.3.9
docopt==0.6.2
docutils==0.21.2
exceptiongroup==1.2.2
filelock==3.18.0
flake8==7.2.0
idna==3.10
imagesize==1.4.1
-e git+https://github.com/ClimateImpactLab/impactlab-tools.git@a56fa03cfe65515c324bfebda79e7906e20ac87d#egg=impactlab_tools
importlib_metadata==8.6.1
iniconfig==2.1.0
Jinja2==3.1.6
MarkupSafe==3.0.2
mccabe==0.7.0
netCDF4==1.7.2
numpy==2.0.2
packaging==24.2
pandas==2.2.3
platformdirs==4.3.7
pluggy==1.5.0
pycodestyle==2.13.0
pyflakes==3.3.2
Pygments==2.19.1
pyproject-api==1.9.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-runner==6.0.1
python-dateutil==2.9.0.post0
pytz==2025.2
requests==2.32.3
six==1.17.0
snowballstemmer==2.2.0
Sphinx==7.4.7
sphinx-rtd-theme==3.0.2
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
tomli==2.2.1
tox==4.25.0
typing_extensions==4.13.0
tzdata==2025.2
urllib3==2.3.0
virtualenv==20.30.0
xarray==2024.7.0
zipp==3.21.0
| name: impactlab-tools
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.16
- babel==2.17.0
- cachetools==5.5.2
- certifi==2025.1.31
- cftime==1.6.4.post1
- chardet==5.2.0
- charset-normalizer==3.4.1
- colorama==0.4.6
- coverage==7.8.0
- coveralls==4.0.1
- distlib==0.3.9
- docopt==0.6.2
- docutils==0.21.2
- exceptiongroup==1.2.2
- filelock==3.18.0
- flake8==7.2.0
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- jinja2==3.1.6
- markupsafe==3.0.2
- mccabe==0.7.0
- netcdf4==1.7.2
- numpy==2.0.2
- packaging==24.2
- pandas==2.2.3
- platformdirs==4.3.7
- pluggy==1.5.0
- pycodestyle==2.13.0
- pyflakes==3.3.2
- pygments==2.19.1
- pyproject-api==1.9.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-runner==6.0.1
- python-dateutil==2.9.0.post0
- pytz==2025.2
- requests==2.32.3
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==7.4.7
- sphinx-rtd-theme==3.0.2
- sphinxcontrib-applehelp==2.0.0
- sphinxcontrib-devhelp==2.0.0
- sphinxcontrib-htmlhelp==2.1.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==2.0.0
- sphinxcontrib-serializinghtml==2.0.0
- tomli==2.2.1
- tox==4.25.0
- typing-extensions==4.13.0
- tzdata==2025.2
- urllib3==2.3.0
- virtualenv==20.30.0
- xarray==2024.7.0
- zipp==3.21.0
prefix: /opt/conda/envs/impactlab-tools
| [
"tests/utils/test_configdict.py::test_gather_configtree_nested_lists"
] | [] | [
"tests/utils/test_configdict.py::test_configdict_climbs_tree",
"tests/utils/test_configdict.py::test_configdict_prefer_child",
"tests/utils/test_configdict.py::test_configdict_selfreference",
"tests/utils/test_configdict.py::test_configdict_caseinsensitive_keys",
"tests/utils/test_configdict.py::test_configdict_normalize",
"tests/utils/test_configdict.py::test_configdict_throws_keyerror",
"tests/utils/test_configdict.py::test_configdict_merge",
"tests/utils/test_configdict.py::test_configdict_merge_parentswap",
"tests/utils/test_configdict.py::test_configdict_key_access_stack",
"tests/utils/test_configdict.py::test_configdict_key_access_stack_nested",
"tests/utils/test_configdict.py::test_configdict_accessed_all_keys_local",
"tests/utils/test_configdict.py::test_configdict_accessed_all_keys_children",
"tests/utils/test_configdict.py::test_configdict_accessed_all_keys_parents"
] | [] | MIT License | 5,470 | 932 | [
"impactlab_tools/utils/configdict.py"
] |
|
encode__httpx-386 | 31730e709597baaa7b2364fee041dfa985169789 | 2019-09-25 20:16:54 | a05ba2e9148c3f74c80c68a727b7e52b3d751c8c | florimondmanca: @tomchristie @sethmlarson Ready for re-review! :-) | diff --git a/httpx/models.py b/httpx/models.py
index f70fdf4..136aa41 100644
--- a/httpx/models.py
+++ b/httpx/models.py
@@ -32,6 +32,7 @@ from .exceptions import (
from .multipart import multipart_encode
from .status_codes import StatusCode
from .utils import (
+ flatten_queryparams,
guess_json_utf,
is_known_encoding,
normalize_header_key,
@@ -51,7 +52,7 @@ URLTypes = typing.Union["URL", str]
QueryParamTypes = typing.Union[
"QueryParams",
- typing.Mapping[str, PrimitiveData],
+ typing.Mapping[str, typing.Union[PrimitiveData, typing.Sequence[PrimitiveData]]],
typing.List[typing.Tuple[str, PrimitiveData]],
str,
]
@@ -311,14 +312,15 @@ class QueryParams(typing.Mapping[str, str]):
value = args[0] if args else kwargs
+ items: typing.Sequence[typing.Tuple[str, PrimitiveData]]
if isinstance(value, str):
items = parse_qsl(value)
elif isinstance(value, QueryParams):
items = value.multi_items()
elif isinstance(value, list):
- items = value # type: ignore
+ items = value
else:
- items = value.items() # type: ignore
+ items = flatten_queryparams(value)
self._list = [(str(k), str_query_param(v)) for k, v in items]
self._dict = {str(k): str_query_param(v) for k, v in items}
diff --git a/httpx/utils.py b/httpx/utils.py
index c8fcb1e..8aea5e7 100644
--- a/httpx/utils.py
+++ b/httpx/utils.py
@@ -1,4 +1,5 @@
import codecs
+import collections
import logging
import netrc
import os
@@ -11,6 +12,9 @@ from time import perf_counter
from types import TracebackType
from urllib.request import getproxies
+if typing.TYPE_CHECKING: # pragma: no cover
+ from .models import PrimitiveData
+
def normalize_header_key(value: typing.AnyStr, encoding: str = None) -> bytes:
"""
@@ -30,7 +34,7 @@ def normalize_header_value(value: typing.AnyStr, encoding: str = None) -> bytes:
return value.encode(encoding or "ascii")
-def str_query_param(value: typing.Optional[typing.Union[str, int, float, bool]]) -> str:
+def str_query_param(value: "PrimitiveData") -> str:
"""
Coerce a primitive data type into a string value for query params.
@@ -256,6 +260,31 @@ def unquote(value: str) -> str:
return value[1:-1] if value[0] == value[-1] == '"' else value
+def flatten_queryparams(
+ queryparams: typing.Mapping[
+ str, typing.Union["PrimitiveData", typing.Sequence["PrimitiveData"]]
+ ]
+) -> typing.List[typing.Tuple[str, "PrimitiveData"]]:
+ """
+ Convert a mapping of query params into a flat list of two-tuples
+ representing each item.
+
+ Example:
+ >>> flatten_queryparams_values({"q": "httpx", "tag": ["python", "dev"]})
+ [("q", "httpx), ("tag", "python"), ("tag", "dev")]
+ """
+ items = []
+
+ for k, v in queryparams.items():
+ if isinstance(v, collections.abc.Sequence) and not isinstance(v, (str, bytes)):
+ for u in v:
+ items.append((k, u))
+ else:
+ items.append((k, typing.cast("PrimitiveData", v)))
+
+ return items
+
+
class ElapsedTimer:
def __init__(self) -> None:
self.start: float = perf_counter()
| Query params should support list-of-strings values.
Hi,
I recently tried to use `httpx` instead of `requests` in my simple app that calls few API endpoints and suddenly it did not work.
The problem here is that `httpx` and `requests` process query parameters differently.
Here's a simple script to see how both libraries process `params`:
```python
from httpx.models import QueryParams
from requests.models import RequestEncodingMixin
dict_params = {
"documentIds": ["a55b2f894b90dd1213201471_25", "b54b2f894b90dd1213201471_9"]
}
httpx_params = str(QueryParams(dict_params))
print("httpx", httpx_params)
requests_params = RequestEncodingMixin._encode_params(dict_params)
print("requests", requests_params)
```
Results:
```
httpx documentIds=%5B%27a55b2f894b90dd1213201471_25%27%2C+%27b54b2f894b90dd1213201471_9%27%5D
requests documentIds=a55b2f894b90dd1213201471_25&documentIds=b54b2f894b90dd1213201471_9
```
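What `requests` is doing above amounts to flattening sequence values into repeated `(key, value)` pairs before encoding. A minimal standalone sketch of that flattening (`flatten_params` is a hypothetical helper, not the actual requests/httpx code):

```python
from urllib.parse import urlencode

def flatten_params(params):
    """Expand list/tuple values into repeated (key, value) pairs."""
    items = []
    for key, value in params.items():
        if isinstance(value, (list, tuple)):
            items.extend((key, v) for v in value)
        else:
            items.append((key, value))
    return items

pairs = flatten_params({"documentIds": ["a_25", "b_9"], "limit": 10})
print(urlencode(pairs))  # documentIds=a_25&documentIds=b_9&limit=10
```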
If we try to parse the query string `httpx` produced, here's what we get:
```python
from urllib.parse import parse_qs
httpx_params_parsed = parse_qs('documentIds=%5B%27a55b2f894b90dd1213201471_25%27%2C+%27b54b2f894b90dd1213201471_9%27%5D')
print(httpx_params_parsed )
```
Result:
```
{'documentIds': ["['a55b2f894b90dd1213201471_25', 'b54b2f894b90dd1213201471_9']"]}
```
Which is not what you would expect, I guess. At least, it works differently than in `requests`.
If it is an intended behavior, can you please recommend me the proper way of passing a list into the query params?
Thanks! | encode/httpx | diff --git a/tests/models/test_queryparams.py b/tests/models/test_queryparams.py
index 8c4df49..1303170 100644
--- a/tests/models/test_queryparams.py
+++ b/tests/models/test_queryparams.py
@@ -1,8 +1,18 @@
+import pytest
+
from httpx import QueryParams
-def test_queryparams():
- q = QueryParams("a=123&a=456&b=789")
[email protected](
+ "source",
+ [
+ "a=123&a=456&b=789",
+ {"a": ["123", "456"], "b": 789},
+ {"a": ("123", "456"), "b": 789},
+ ],
+)
+def test_queryparams(source):
+ q = QueryParams(source)
assert "a" in q
assert "A" not in q
assert "c" not in q
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 2
} | 0.7 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest pytest-asyncio pytest-trio pytest-cov trio trustme uvicorn"
],
"pre_install": [],
"python": "3.7",
"reqs_path": [
"test-requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==24.2.0
brotlipy==0.7.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
chardet==3.0.4
click==8.1.8
coverage==7.2.7
cryptography==44.0.2
exceptiongroup==1.2.2
flake8==5.0.4
flake8-bugbear==23.3.12
flake8-comprehensions==3.13.0
flake8-pie==0.16.0
h11==0.8.1
h2==3.2.0
hpack==3.0.0
hstspreload==2025.1.1
-e git+https://github.com/encode/httpx.git@31730e709597baaa7b2364fee041dfa985169789#egg=httpx
hyperframe==5.2.0
idna==2.10
importlib-metadata==4.2.0
iniconfig==2.0.0
isort==5.11.5
mccabe==0.7.0
mypy==1.4.1
mypy-extensions==1.0.0
outcome==1.3.0.post0
packaging==24.0
pluggy==1.2.0
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
pytest==7.4.4
pytest-asyncio==0.21.2
pytest-cov==4.1.0
pytest-trio==0.8.0
rfc3986==1.5.0
sniffio==1.3.1
sortedcontainers==2.4.0
tomli==2.0.1
trio==0.22.2
trustme==1.0.0
typed-ast==1.5.5
typing_extensions==4.7.1
uvicorn==0.22.0
uvloop==0.12.2
zipp==3.15.0
| name: httpx
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==24.2.0
- brotlipy==0.7.0
- cffi==1.15.1
- chardet==3.0.4
- click==8.1.8
- coverage==7.2.7
- cryptography==44.0.2
- exceptiongroup==1.2.2
- flake8==5.0.4
- flake8-bugbear==23.3.12
- flake8-comprehensions==3.13.0
- flake8-pie==0.16.0
- h11==0.8.1
- h2==3.2.0
- hpack==3.0.0
- hstspreload==2025.1.1
- hyperframe==5.2.0
- idna==2.10
- importlib-metadata==4.2.0
- iniconfig==2.0.0
- isort==5.11.5
- mccabe==0.7.0
- mypy==1.4.1
- mypy-extensions==1.0.0
- outcome==1.3.0.post0
- packaging==24.0
- pluggy==1.2.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- pytest==7.4.4
- pytest-asyncio==0.21.2
- pytest-cov==4.1.0
- pytest-trio==0.8.0
- rfc3986==1.5.0
- sniffio==1.3.1
- sortedcontainers==2.4.0
- tomli==2.0.1
- trio==0.22.2
- trustme==1.0.0
- typed-ast==1.5.5
- typing-extensions==4.7.1
- uvicorn==0.22.0
- uvloop==0.12.2
- zipp==3.15.0
prefix: /opt/conda/envs/httpx
| [
"tests/models/test_queryparams.py::test_queryparams[source1]",
"tests/models/test_queryparams.py::test_queryparams[source2]"
] | [] | [
"tests/models/test_queryparams.py::test_queryparams[a=123&a=456&b=789]",
"tests/models/test_queryparams.py::test_queryparam_types",
"tests/models/test_queryparams.py::test_queryparam_setters"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,474 | 897 | [
"httpx/models.py",
"httpx/utils.py"
] |
lmfit__lmfit-py-589 | ffe550c9c1e3f2c6b82b7175dd101f5c6cd8f685 | 2019-09-26 18:45:37 | 449c6ed9f2d70c853507933c0e5f39c2ff75635e | codecov[bot]: # [Codecov](https://codecov.io/gh/lmfit/lmfit-py/pull/589?src=pr&el=h1) Report
> Merging [#589](https://codecov.io/gh/lmfit/lmfit-py/pull/589?src=pr&el=desc) into [master](https://codecov.io/gh/lmfit/lmfit-py/commit/d7b408baa64821f4356cff9c32a199bc53becd93?src=pr&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/lmfit/lmfit-py/pull/589?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #589 +/- ##
=========================================
+ Coverage 90.46% 90.5% +0.03%
=========================================
Files 11 11
Lines 3263 3274 +11
=========================================
+ Hits 2952 2963 +11
Misses 311 311
```
| [Impacted Files](https://codecov.io/gh/lmfit/lmfit-py/pull/589?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [lmfit/minimizer.py](https://codecov.io/gh/lmfit/lmfit-py/pull/589/diff?src=pr&el=tree#diff-bG1maXQvbWluaW1pemVyLnB5) | `91.59% <100%> (+0.11%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/lmfit/lmfit-py/pull/589?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/lmfit/lmfit-py/pull/589?src=pr&el=footer). Last update [d7b408b...36a022f](https://codecov.io/gh/lmfit/lmfit-py/pull/589?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
newville: @ezwelty Like @reneeotten I'm mostly OK with this.
I have two concerns: First, I think that @reneeotten and I are unlikely to be able to support this. Adding unsupported features seems like maybe not such a great idea. Are you willing to support this?
The example you gave is really close to trivial. That's OK for a test -- it shows the code "works". But it leads to a second concern: What happens if someone places bounds on a parameter or even fixes a parameter? Like, why is the sparse jacobian a 2-d array? Why is it not more naturally (for lmfit) a dictionary with parameter names as keys and 1d arrays of the sparsity of that derivative component?
reneeotten: @newville I am not sure I understand your concerns completely. To me it seems the "only" thing this PR does is making sure that whatever type of Jacobian one uses, it will return the Hessian matrix in the same way.
The ```least_squares``` method already supports bounds and so the resulting Hessian will be already correct (right now we also don't transform the ```result.jac``` from ```least_squares```). In the test-case it is a 2x2 matrix because there are two fitting parameters, if you'd fix one of them it should be only one value (but I agree it wouldn't hurt to add another test where parameters have bounds and some are fixed, just to make sure).
The resulting Hessian matrix will be used in the "lmfit-way" to calculate the uncertainties using:
```
result.covar = np.linalg.inv(hess)
self._calculate_uncertainties_correlations()
```
and I don't see anything wrong with that; but perhaps, I'm missing something here and, if so, can you elaborate a bit?
newville: @reneeotten I agree that bounds ought to be OK. Maybe that would be good to test.
If you have a Parameter with `vary=False` you now have fewer variables than Parameters. The order of the variables is predictable, but the Jacobian (and so its sparsity) has changed size. If the Parameter is fixed, the Jacobian is not just sparse, it is non-existent.
With this PR (and #588), the user can provide a `jac_sparsity` as input. That is OK. But that is a 2D array that is Nvariables X Nobservations.
Nothing else in `lmfit` accesses variables by index, and it can be confusing which variable is in "position 2" -- the user can see this after the fit from `var_names`. But they don't have to know this order when they do the fit.
If the user instead provided a dictionary of 1D derivatives and sparsities or made these as Parameter attributes then `minimize()` could construct the internal `jac_sparsity` 2D array that gets sent to `trf`.
Does that make sense?
ezwelty: @newville I see your point.
With this PR, the user can now pass `jac_sparsity` and all allowed flavors of `jac` to `scipy.optimize.least_squares` via optional keyword arguments. It is, however, also up to the user to make sure that these are consistent with the order and inclusion (`vary=True/False`) of parameters passed by `lmfit`. (Bounds do not change the dimensions of the Jacobian). This isn't only true of this PR – `jac` (returning an array) is implicitly supported previous to this PR, as well as passing `x_scale` or `diff_step`, and the same disclaimer holds. I would suggest that protecting the user from passing misshapen optional arguments to `least_squares` is beyond the scope of this PR, and this PR will be needed regardless. In the meantime, I think it is more helpful to empower users to make use of these optional features than to block them from doing so.
If folks agree, I can brush up the pull request as requested by @reneeotten.
newville: @ezwelty @reneeotten Any thoughts on the status of this PR? I'm OK with merging, if the requested changes and conflicts are resolved.
It does sort of seem like we should improve the interface for Jacobians, but I think that is my job and shouldn't hold up this PR.
ezwelty: @newville Sorry, I wasn't sure I was cleared to move forward. I will brush up the PR as requested by @reneeotten. | diff --git a/lmfit/minimizer.py b/lmfit/minimizer.py
index ce2cbed5..22c275be 100644
--- a/lmfit/minimizer.py
+++ b/lmfit/minimizer.py
@@ -29,6 +29,8 @@ from scipy.optimize import dual_annealing as scipy_dual_annealing
from scipy.optimize import leastsq as scipy_leastsq
from scipy.optimize import minimize as scipy_minimize
from scipy.optimize import shgo as scipy_shgo
+from scipy.sparse import issparse
+from scipy.sparse.linalg import LinearOperator
from scipy.stats import cauchy as cauchy_dist
from scipy.stats import norm as norm_dist
from scipy.version import version as scipy_version
@@ -1408,7 +1410,21 @@ class Minimizer(object):
# calculate the cov_x and estimate uncertainties/correlations
try:
- hess = np.matmul(ret.jac.T, ret.jac)
+ if issparse(ret.jac):
+ hess = (ret.jac.T * ret.jac).toarray()
+ elif isinstance(ret.jac, LinearOperator):
+ identity = np.eye(ret.jac.shape[1], dtype=ret.jac.dtype)
+ # TODO: Remove try-except when scipy < 1.4.0 support dropped
+ try:
+ # For scipy >= 1.4.0 (with Linear Operator transpose)
+ # https://github.com/scipy/scipy/pull/9064
+ hess = (ret.jac.T * ret.jac) * identity
+ except AttributeError:
+ # For scipy < 1.4.0 (without Linear Operator transpose)
+ jac = ret.jac * identity
+ hess = np.matmul(jac.T, jac)
+ else:
+ hess = np.matmul(ret.jac.T, ret.jac)
result.covar = np.linalg.inv(hess)
self._calculate_uncertainties_correlations()
except LinAlgError:
| Passing jac_sparsity to least_squares breaks minimize()
#### Description
Calls of the form:
```python
lmfit.minimize(..., method="least_squares", jac_sparsity=obj)
```
result in the Jacobian being returned as a sparse matrix, which breaks the calculation of the Hessian:
https://github.com/lmfit/lmfit-py/blob/d7b408baa64821f4356cff9c32a199bc53becd93/lmfit/minimizer.py#L1430
This should be replaced with something like:
```python
if scipy.sparse.issparse(ret.jac):
jac = ret.jac.todense()
else:
jac = ret.jac
np.matmul(jac.T, jac)
```
or, perhaps faster but python 3 only:
```python
hess = ret.jac.T @ ret.jac
if scipy.sparse.issparse(hess):
hess = hess.todense()
```
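The second suggestion can be checked standalone with toy numbers — here `ret.jac` is replaced by a hand-built sparse matrix, purely for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix, issparse

# Stand-in for ret.jac when jac_sparsity is passed: a sparse Jacobian.
jac = csr_matrix(np.array([[3.0, 0.0],
                           [0.0, 2.0],
                           [1.0, 1.0]]))

hess = jac.T @ jac           # sparse @ sparse stays sparse
if issparse(hess):
    hess = hess.toarray()    # densify before np.linalg.inv

# hess == [[10., 1.], [1., 5.]]
cov = np.linalg.inv(hess)
```

(`toarray()` is used here instead of `todense()` so the result is an `ndarray` rather than a `matrix`; either works with `np.linalg.inv`.)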
#### A Minimal, Complete, and Verifiable example
```python
import lmfit
def f(params):
return params['x'] - 1, params['y'] - 1
params = lmfit.Parameters()
params.add('x', value=0)
params.add('y', value=0)
out = lmfit.minimize(f, params, method='least_squares', jac_sparsity=[[1, 1], [1, 1]])
```
#### Error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/Admin/.pyenv/versions/3.7.3/Python.framework/Versions/3.7/lib/python3.7/site-packages/lmfit/minimizer.py", line 2448
, in minimize
return fitter.minimize(method=method)
File "/Users/Admin/.pyenv/versions/3.7.3/Python.framework/Versions/3.7/lib/python3.7/site-packages/lmfit/minimizer.py", line 2117
, in minimize
return function(**kwargs)
File "/Users/Admin/.pyenv/versions/3.7.3/Python.framework/Versions/3.7/lib/python3.7/site-packages/lmfit/minimizer.py", line 1430
, in least_squares
hess = np.matmul(ret.jac.T, ret.jac)
ValueError: matmul: Input operand 0 does not have enough dimensions (has 0, gufunc core with signature (n?,k),(k,m?)->(n?,m?) requi
res 1)
```
#### Version information
```
Python: 3.7.3 (default, May 26 2019, 09:41:28)
[Clang 10.0.1 (clang-1001.0.46.4)]
lmfit: 0.9.14, scipy: 1.3.0, numpy: 1.16.3, asteval: 0.9.14, uncertainties: 3.1, six: 1.12.0
``` | lmfit/lmfit-py | diff --git a/tests/test_least_squares.py b/tests/test_least_squares.py
index 778c2049..b9aa64e0 100644
--- a/tests/test_least_squares.py
+++ b/tests/test_least_squares.py
@@ -2,6 +2,8 @@
import numpy as np
from numpy.testing import assert_allclose
import pytest
+from scipy.sparse import bsr_matrix
+from scipy.sparse.linalg import aslinearoperator
import lmfit
from lmfit.models import VoigtModel
@@ -116,3 +118,52 @@ def test_least_squares_solver_options(peakdata, capsys):
assert 'Iteration' in captured.out
assert 'final cost' in captured.out
+
+
+def test_least_squares_jacobian_types():
+ """Test support for Jacobian of all types supported by least_squares."""
+ # Build function
+ # f(x, y) = (x - a)^2 + (y - b)^2
+ np.random.seed(42)
+ a = np.random.normal(0, 1, 50)
+ np.random.seed(43)
+ b = np.random.normal(0, 1, 50)
+
+ def f(params):
+ return (params['x'] - a)**2 + (params['y'] - b)**2
+
+ # Build analytic Jacobian functions with the different possible return types
+ # numpy.ndarray, scipy.sparse.spmatrix, scipy.sparse.linalg.LinearOperator
+ # J = [ 2x - 2a , 2y - 2b ]
+ def jac_array(params, *args, **kwargs):
+ return np.column_stack((2 * params[0] - 2 * a, 2 * params[1] - 2 * b))
+
+ def jac_sparse(params, *args, **kwargs):
+ return bsr_matrix(jac_array(params, *args, **kwargs))
+
+ def jac_operator(params, *args, **kwargs):
+ return aslinearoperator(jac_array(params, *args, **kwargs))
+ # Build parameters
+ params = lmfit.Parameters()
+ params.add('x', value=0)
+ params.add('y', value=0)
+ # Solve model for numerical Jacobian and each analytic Jacobian function
+ result = lmfit.minimize(f, params, method='least_squares')
+ result_array = lmfit.minimize(
+ f, params, method='least_squares',
+ jac=jac_array)
+ result_sparse = lmfit.minimize(
+ f, params, method='least_squares',
+ jac=jac_sparse)
+ result_operator = lmfit.minimize(
+ f, params, method='least_squares',
+ jac=jac_operator)
+ # Check that all have uncertainties
+ assert result.errorbars
+ assert result_array.errorbars
+ assert result_sparse.errorbars
+ assert result_operator.errorbars
+ # Check that all have ~equal covariance matrix
+ assert_allclose(result.covar, result_array.covar)
+ assert_allclose(result.covar, result_sparse.covar)
+ assert_allclose(result.covar, result_operator.covar)
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 0.9 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"coverage",
"codecov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asteval==0.9.31
certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
codecov==2.1.13
coverage==7.2.7
exceptiongroup==1.2.2
future==1.0.0
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
-e git+https://github.com/lmfit/lmfit-py.git@ffe550c9c1e3f2c6b82b7175dd101f5c6cd8f685#egg=lmfit
numpy==1.21.6
packaging==24.0
pluggy==1.2.0
pytest==7.4.4
requests==2.31.0
scipy==1.7.3
six==1.17.0
tomli==2.0.1
typing_extensions==4.7.1
uncertainties==3.1.7
urllib3==2.0.7
zipp==3.15.0
| name: lmfit-py
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- asteval==0.9.31
- charset-normalizer==3.4.1
- codecov==2.1.13
- coverage==7.2.7
- exceptiongroup==1.2.2
- future==1.0.0
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- numpy==1.21.6
- packaging==24.0
- pluggy==1.2.0
- pytest==7.4.4
- requests==2.31.0
- scipy==1.7.3
- six==1.17.0
- tomli==2.0.1
- typing-extensions==4.7.1
- uncertainties==3.1.7
- urllib3==2.0.7
- zipp==3.15.0
prefix: /opt/conda/envs/lmfit-py
| [
"tests/test_least_squares.py::test_least_squares_jacobian_types"
] | [] | [
"tests/test_least_squares.py::test_least_squares_with_bounds",
"tests/test_least_squares.py::test_least_squares_cov_x[False]",
"tests/test_least_squares.py::test_least_squares_cov_x[True]",
"tests/test_least_squares.py::test_least_squares_solver_options"
] | [] | BSD-3 | 5,480 | 453 | [
"lmfit/minimizer.py"
] |
griffithlab__civicpy-49 | 37a2f8b3cb75dc46b70ffa5884d69ffc4d29cf04 | 2019-09-26 20:35:45 | 4f395debb88181f11230097386dd9a104daf21c4 | diff --git a/civicpy/civic.py b/civicpy/civic.py
index 203f39f..f6cf475 100644
--- a/civicpy/civic.py
+++ b/civicpy/civic.py
@@ -850,8 +850,8 @@ def bulk_search_variants_by_coordinates(sorted_queries, search_mode='any'):
:param search_mode: ['any', 'query_encompassing', 'variant_encompassing', 'exact']
any: any overlap between a query and a variant is a match
- query_encompassing: variants must fit within the coordinates of the query
- variant_encompassing: variants must encompass the coordinates of the query
+ query_encompassing: CIViC variant records must fit within the coordinates of the query
+ record_encompassing: CIViC variant records must encompass the coordinates of the query
exact: variants must match coordinates precisely, as well as alternate
allele, if provided
search_mode is 'exact' by default
@@ -881,6 +881,10 @@ def bulk_search_variants_by_coordinates(sorted_queries, search_mode='any'):
ct = MODULE.COORDINATE_TABLE
matches = defaultdict(list)
Match = namedtuple('Match', ct.columns)
+
+ def append_match(matches_list, query, ct_row):
+ matches_list[query].append(Match(**ct_row.to_dict()))
+
while query_pointer < len(sorted_queries) and ct_pointer < len(ct):
if last_query_pointer != query_pointer:
q = sorted_queries[query_pointer]
@@ -908,16 +912,16 @@ def bulk_search_variants_by_coordinates(sorted_queries, search_mode='any'):
query_pointer += 1
continue
if search_mode == 'any':
- matches[q].append(c.to_dict())
+ append_match(matches, q, c)
elif search_mode == 'exact' and q_start == c_start and q_stop == c_stop:
q_alt = q.alt
c_alt = c.alt
if not (q_alt and c_alt and q_alt != c_alt):
- matches[q].append(Match(**c.to_dict()))
- elif search_mode == 'query_encompassing':
- raise NotImplementedError
- elif search_mode == 'variant_encompassing':
- raise NotImplementedError
+ append_match(matches, q, c)
+ elif search_mode == 'query_encompassing' and q_start <= c_start and q_stop >= c_stop:
+ append_match(matches, q, c)
+ elif search_mode == 'record_encompassing' and c_start <= q_start and c_stop >= q_stop:
+ append_match(matches, q, c)
if match_start is None:
match_start = ct_pointer
ct_pointer += 1
| implement intermediate search strategies
Query encompassing and record encompassing search strategies need to be implemented for bulk coordinate search. See Figure 2D from manuscript for details. | griffithlab/civicpy | diff --git a/civicpy/tests/test_civic.py b/civicpy/tests/test_civic.py
index dc6e337..bbc6bb7 100644
--- a/civicpy/tests/test_civic.py
+++ b/civicpy/tests/test_civic.py
@@ -133,11 +133,38 @@ class TestCoordinateSearch(object):
assertion_ids = [x.id for x in assertions]
assert set(assertion_ids) >= set(v600e_assertion_ids)
- def test_bulk_search_variants(self):
+ def test_bulk_any_search_variants(self):
sorted_queries = [
CoordinateQuery('7', 140453136, 140453136, 'T'),
CoordinateQuery('7', 140453136, 140453137, 'TT')
]
- search_results = civic.bulk_search_variants_by_coordinates(sorted_queries)
- assert len(search_results[sorted_queries[0]]) >= 12
- assert len(search_results[sorted_queries[1]]) >= len(search_results[sorted_queries[0]])
+ search_results = civic.bulk_search_variants_by_coordinates(sorted_queries, search_mode='any')
+ assert len(search_results[sorted_queries[0]]) == 19
+ assert len(search_results[sorted_queries[1]]) >= 19
+
+ def test_bulk_exact_search_variants(self):
+ sorted_queries = [
+ CoordinateQuery('7', 140453136, 140453136, 'T'),
+ CoordinateQuery('7', 140453136, 140453137, 'TT')
+ ]
+ search_results = civic.bulk_search_variants_by_coordinates(sorted_queries, search_mode='exact')
+ assert len(search_results[sorted_queries[0]]) == 1
+ assert len(search_results[sorted_queries[1]]) == 2
+
+ def test_bulk_qe_search_variants(self):
+ sorted_queries = [
+ CoordinateQuery('7', 140453136, 140453136),
+ CoordinateQuery('7', 140453136, 140453137)
+ ]
+ search_results = civic.bulk_search_variants_by_coordinates(sorted_queries, search_mode='query_encompassing')
+ assert len(search_results[sorted_queries[0]]) == 1
+ assert len(search_results[sorted_queries[1]]) == 4
+
+ def test_bulk_re_search_variants(self):
+ sorted_queries = [
+ CoordinateQuery('7', 140453136, 140453136),
+ CoordinateQuery('7', 140453136, 140453137)
+ ]
+ search_results = civic.bulk_search_variants_by_coordinates(sorted_queries, search_mode='record_encompassing')
+ assert len(search_results[sorted_queries[0]]) == 19
+ assert len(search_results[sorted_queries[1]]) == 16
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 3
},
"num_modified_files": 1
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
-e git+https://github.com/griffithlab/civicpy.git@37a2f8b3cb75dc46b70ffa5884d69ffc4d29cf04#egg=civicpy
Click==7.0
coverage==7.2.7
exceptiongroup==1.2.2
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
networkx==2.6.3
numpy==1.21.6
obonet==0.2.3
packaging==24.0
pandas==0.24.1
pluggy==1.2.0
pytest==7.4.4
pytest-cov==4.1.0
python-coveralls==2.9.3
python-dateutil==2.9.0.post0
pytz==2025.2
PyYAML==6.0.1
requests==2.31.0
six==1.17.0
tomli==2.0.1
typing_extensions==4.7.1
urllib3==2.0.7
zipp==3.15.0
| name: civicpy
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- charset-normalizer==3.4.1
- click==7.0
- coverage==7.2.7
- exceptiongroup==1.2.2
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- networkx==2.6.3
- numpy==1.21.6
- obonet==0.2.3
- packaging==24.0
- pandas==0.24.1
- pluggy==1.2.0
- pytest==7.4.4
- pytest-cov==4.1.0
- python-coveralls==2.9.3
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyyaml==6.0.1
- requests==2.31.0
- six==1.17.0
- tomli==2.0.1
- typing-extensions==4.7.1
- urllib3==2.0.7
- zipp==3.15.0
prefix: /opt/conda/envs/civicpy
| [
"civicpy/tests/test_civic.py::TestCoordinateSearch::test_bulk_qe_search_variants",
"civicpy/tests/test_civic.py::TestCoordinateSearch::test_bulk_re_search_variants"
] | [] | [
"civicpy/tests/test_civic.py::TestGetFunctions::test_get_assertions",
"civicpy/tests/test_civic.py::TestCivicRecord::test_module",
"civicpy/tests/test_civic.py::TestEvidence::test_get_source_ids",
"civicpy/tests/test_civic.py::TestEvidence::test_get_all",
"civicpy/tests/test_civic.py::TestEvidence::test_get_non_rejected",
"civicpy/tests/test_civic.py::TestEvidence::test_get_accepted_only",
"civicpy/tests/test_civic.py::TestVariants::test_get_all",
"civicpy/tests/test_civic.py::TestVariants::test_get_non_rejected",
"civicpy/tests/test_civic.py::TestVariants::test_get_accepted_only",
"civicpy/tests/test_civic.py::TestAssertions::test_get_all",
"civicpy/tests/test_civic.py::TestAssertions::test_get_non_rejected",
"civicpy/tests/test_civic.py::TestAssertions::test_get_accepted_only",
"civicpy/tests/test_civic.py::TestGenes::test_get_all",
"civicpy/tests/test_civic.py::TestGenes::test_get_non_rejected",
"civicpy/tests/test_civic.py::TestGenes::test_get_accepted_only",
"civicpy/tests/test_civic.py::TestCoordinateSearch::test_search_assertions",
"civicpy/tests/test_civic.py::TestCoordinateSearch::test_bulk_any_search_variants",
"civicpy/tests/test_civic.py::TestCoordinateSearch::test_bulk_exact_search_variants"
] | [] | MIT License | 5,483 | 624 | [
"civicpy/civic.py"
] |
|
bids-standard__pybids-511 | 1431df11a7748848f6c4170f19211321ab0c4b9f | 2019-09-27 13:53:13 | cfb6288e10f04f76a7bd84ae2e862ecb989a8b21 | diff --git a/bids/layout/layout.py b/bids/layout/layout.py
index d274f2f6..bd5a7e09 100644
--- a/bids/layout/layout.py
+++ b/bids/layout/layout.py
@@ -829,7 +829,7 @@ class BIDSLayout(object):
val_clause = sa.or_(*[Tag._value.op('REGEXP')(str(v))
for v in val])
else:
- val_clause = Tag._value.op('REGEXP')(val)
+ val_clause = Tag._value.op('REGEXP')(str(val))
else:
if isinstance(val, (list, tuple)):
val_clause = Tag._value.in_(val)
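The one-character fix above (`str(val)`) matters because SQLite's user-defined `REGEXP` function receives the bound parameter as-is, and `re` raises `TypeError` on an integer pattern — which SQLite surfaces as the opaque "user-defined function raised exception" in the traceback below. A minimal sketch of the mechanism (not pybids' actual internals):

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# X REGEXP Y in SQLite calls regexp(Y, X): first arg is the pattern, second the value.
# Coercing both to str keeps the UDF safe; a naive UDF without str() would raise
# TypeError when the raw int 1 is bound as the pattern.
conn.create_function("REGEXP", 2,
                     lambda pat, val: re.search(str(pat), str(val)) is not None)

conn.execute("CREATE TABLE tags (entity_name TEXT, value TEXT)")
conn.execute("INSERT INTO tags VALUES ('run', '1')")

# Coercing the parameter to str on the Python side (as the patch does) is the
# symmetric fix at the query-construction layer.
row = conn.execute("SELECT value FROM tags WHERE value REGEXP ?", (str(1),)).fetchone()
print(row)  # ('1',)
```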
| combination of regex_search and run makes sql barf
This looks like a fun one.
The code worked in 0.9.3, but is broken in 0.9.4.
If you change the line in the binder notebook from:
```
# Retrieve filenames of all BOLD runs for subject 01
layout.get(subject='01', extension='nii.gz', suffix='bold', return_type='filename')
```
to:
```
# Retrieve filenames of all BOLD runs for subject 01
layout.get(subject='01', extension='nii.gz', suffix='bold', regex_search=True, run=1, return_type='filename')
```
you will get this error:
```
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, *args)
1248 self.dialect.do_execute(
-> 1249 cursor, statement, parameters, context
1250 )
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/engine/default.py in do_execute(self, cursor, statement, parameters, context)
551 def do_execute(self, cursor, statement, parameters, context=None):
--> 552 cursor.execute(statement, parameters)
553
OperationalError: user-defined function raised exception
The above exception was the direct cause of the following exception:
OperationalError Traceback (most recent call last)
<ipython-input-6-7db3897f9b6e> in <module>
1 # Retrieve filenames of all BOLD runs for subject 01
----> 2 layout.get(subject='01', extension='nii.gz', suffix='bold', regex_search=True, run=1, return_type='filename')
/srv/conda/envs/notebook/lib/python3.7/site-packages/bids/layout/layout.py in get(self, return_type, target, scope, regex_search, absolute_paths, drop_invalid_filters, **filters)
728 query = query.options(joinedload(BIDSFile.tags)
729 .joinedload(Tag.entity))
--> 730 results.extend(query.all())
731
732 # Convert to relative paths if needed
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/orm/query.py in all(self)
3176
3177 """
-> 3178 return list(self)
3179
3180 @_generative(_no_clauseelement_condition)
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/orm/query.py in __iter__(self)
3332 if self._autoflush and not self._populate_existing:
3333 self.session._autoflush()
-> 3334 return self._execute_and_instances(context)
3335
3336 def __str__(self):
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/orm/query.py in _execute_and_instances(self, querycontext)
3357 )
3358
-> 3359 result = conn.execute(querycontext.statement, self._params)
3360 return loading.instances(querycontext.query, result, querycontext)
3361
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/engine/base.py in execute(self, object_, *multiparams, **params)
986 raise exc.ObjectNotExecutableError(object_)
987 else:
--> 988 return meth(self, multiparams, params)
989
990 def _execute_function(self, func, multiparams, params):
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/sql/elements.py in _execute_on_connection(self, connection, multiparams, params)
285 def _execute_on_connection(self, connection, multiparams, params):
286 if self.supports_execution:
--> 287 return connection._execute_clauseelement(self, multiparams, params)
288 else:
289 raise exc.ObjectNotExecutableError(self)
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _execute_clauseelement(self, elem, multiparams, params)
1105 distilled_params,
1106 compiled_sql,
-> 1107 distilled_params,
1108 )
1109 if self._has_events or self.engine._has_events:
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, *args)
1251 except BaseException as e:
1252 self._handle_dbapi_exception(
-> 1253 e, statement, parameters, cursor, context
1254 )
1255
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _handle_dbapi_exception(self, e, statement, parameters, cursor, context)
1471 util.raise_from_cause(newraise, exc_info)
1472 elif should_wrap:
-> 1473 util.raise_from_cause(sqlalchemy_exception, exc_info)
1474 else:
1475 util.reraise(*exc_info)
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/util/compat.py in raise_from_cause(exception, exc_info)
396 exc_type, exc_value, exc_tb = exc_info
397 cause = exc_value if exc_value is not exception else None
--> 398 reraise(type(exception), exception, tb=exc_tb, cause=cause)
399
400
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/util/compat.py in reraise(tp, value, tb, cause)
150 value.__cause__ = cause
151 if value.__traceback__ is not tb:
--> 152 raise value.with_traceback(tb)
153 raise value
154
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, *args)
1247 if not evt_handled:
1248 self.dialect.do_execute(
-> 1249 cursor, statement, parameters, context
1250 )
1251 except BaseException as e:
/srv/conda/envs/notebook/lib/python3.7/site-packages/sqlalchemy/engine/default.py in do_execute(self, cursor, statement, parameters, context)
550
551 def do_execute(self, cursor, statement, parameters, context=None):
--> 552 cursor.execute(statement, parameters)
553
554 def do_execute_no_params(self, cursor, statement, context=None):
OperationalError: (sqlite3.OperationalError) user-defined function raised exception
[SQL: SELECT files.path AS files_path, files.filename AS files_filename, files.dirname AS files_dirname, files.is_dir AS files_is_dir, files.class_ AS files_class_, entities_1.name AS entities_1_name, entities_1.mandatory AS entities_1_mandatory, entities_1.pattern AS entities_1_pattern, entities_1.directory AS entities_1_directory, entities_1.is_metadata AS entities_1_is_metadata, entities_1._dtype AS entities_1__dtype, tags_1.file_path AS tags_1_file_path, tags_1.entity_name AS tags_1_entity_name, tags_1._value AS tags_1__value, tags_1._dtype AS tags_1__dtype
FROM files JOIN tags ON files.path = tags.file_path LEFT OUTER JOIN tags AS tags_1 ON files.path = tags_1.file_path LEFT OUTER JOIN entities AS entities_1 ON entities_1.name = tags_1.entity_name
WHERE files.is_dir = 0 AND (EXISTS (SELECT 1
FROM tags
WHERE files.path = tags.file_path AND tags.entity_name = ? AND (tags._value REGEXP ?))) AND (EXISTS (SELECT 1
FROM tags
WHERE files.path = tags.file_path AND tags.entity_name = ? AND (tags._value REGEXP ?))) AND (EXISTS (SELECT 1
FROM tags
WHERE files.path = tags.file_path AND tags.entity_name = ? AND (tags._value REGEXP ?))) AND (EXISTS (SELECT 1
FROM tags
WHERE files.path = tags.file_path AND tags.entity_name = ? AND (tags._value REGEXP ?)))]
[parameters: ('subject', '01', 'extension', 'nii.gz', 'suffix', 'bold', 'run', 1)]
(Background on this error at: http://sqlalche.me/e/e3q8)
```
This is a relevant query for NiBetaSeries since we [want to only select certain files in the derivatives directory](https://github.com/HBClab/NiBetaSeries/blob/604622a7a76c37c2aeecfb791cf34950b60792dd/src/nibetaseries/workflows/utils.py#L19-L39), and we do this by setting the `desc` keyword to an arbitrary value. I see I should change this to `scope: derivatives` for my use case, but there may be others out there who want to filter based on `run` and `regex_search` with another entity.
ref HBClab/NiBetaSeries#204 | bids-standard/pybids | diff --git a/bids/layout/tests/test_layout.py b/bids/layout/tests/test_layout.py
index dd208baf..90176ba6 100644
--- a/bids/layout/tests/test_layout.py
+++ b/bids/layout/tests/test_layout.py
@@ -587,3 +587,33 @@ def test_get_with_wrong_dtypes(layout_7t_trt):
l.get(run=[1, '15']))
assert not l.get(run='not_numeric')
assert l.get(session=1) == l.get(session='1')
+
+
+def test_get_with_regex_search(layout_7t_trt):
+ """ Tests that regex-based searching works. """
+ l = layout_7t_trt
+
+ # subject matches both '10' and '01'
+ results = l.get(subject='1', session='1', task='rest', suffix='bold',
+ acquisition='fron.al', extension='nii.gz',
+ regex_search=True)
+ assert len(results) == 2
+
+ # subject matches '10'
+ results = l.get(subject='^1', session='1', task='rest', suffix='bold',
+ acquisition='fron.al', extension='nii.gz',
+ regex_search=True, return_type='filename')
+ assert len(results) == 1
+ assert results[0].endswith('sub-10_ses-1_task-rest_acq-prefrontal_bold.nii.gz')
+
+
+def test_get_with_regex_search_bad_dtype(layout_7t_trt):
+ """ Tests that passing in a non-string dtype for an entity doesn't crash
+ regexp-based searching (i.e., that implicit conversion is done
+ appropriately). """
+ l = layout_7t_trt
+ results = l.get(subject='1', run=1, task='rest', suffix='bold',
+ acquisition='fullbrain', extension='nii.gz',
+ regex_search=True)
+ # Two runs (1 per session) for each of subjects '10' and '01'
+ assert len(results) == 4
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 1
} | 0.9 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[analysis]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
bids-validator==1.14.0
certifi==2021.5.30
docopt==0.6.2
greenlet==2.0.2
importlib-metadata==4.8.3
iniconfig==1.1.1
nibabel==3.2.2
num2words==0.5.14
numpy==1.19.5
packaging==21.3
pandas==1.1.5
patsy==1.0.1
pluggy==1.0.0
py==1.11.0
-e git+https://github.com/bids-standard/pybids.git@1431df11a7748848f6c4170f19211321ab0c4b9f#egg=pybids
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.9.0.post0
pytz==2025.2
scipy==1.5.4
six==1.17.0
SQLAlchemy==1.4.54
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: pybids
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- bids-validator==1.14.0
- docopt==0.6.2
- greenlet==2.0.2
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- nibabel==3.2.2
- num2words==0.5.14
- numpy==1.19.5
- packaging==21.3
- pandas==1.1.5
- patsy==1.0.1
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- pytz==2025.2
- scipy==1.5.4
- six==1.17.0
- sqlalchemy==1.4.54
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/pybids
| [
"bids/layout/tests/test_layout.py::test_get_with_regex_search_bad_dtype"
] | [] | [
"bids/layout/tests/test_layout.py::test_layout_init",
"bids/layout/tests/test_layout.py::test_layout_repr",
"bids/layout/tests/test_layout.py::test_load_description",
"bids/layout/tests/test_layout.py::test_get_file",
"bids/layout/tests/test_layout.py::test_get_metadata",
"bids/layout/tests/test_layout.py::test_get_metadata2",
"bids/layout/tests/test_layout.py::test_get_metadata3",
"bids/layout/tests/test_layout.py::test_get_metadata4",
"bids/layout/tests/test_layout.py::test_get_metadata_meg",
"bids/layout/tests/test_layout.py::test_get_metadata5",
"bids/layout/tests/test_layout.py::test_get_metadata_via_bidsfile",
"bids/layout/tests/test_layout.py::test_get_with_bad_target",
"bids/layout/tests/test_layout.py::test_get_bvals_bvecs",
"bids/layout/tests/test_layout.py::test_get_subjects",
"bids/layout/tests/test_layout.py::test_get_fieldmap",
"bids/layout/tests/test_layout.py::test_get_fieldmap2",
"bids/layout/tests/test_layout.py::test_bids_json",
"bids/layout/tests/test_layout.py::test_get_return_type_dir",
"bids/layout/tests/test_layout.py::test_get_val_none",
"bids/layout/tests/test_layout.py::test_get_return_sorted",
"bids/layout/tests/test_layout.py::test_ignore_files",
"bids/layout/tests/test_layout.py::test_force_index",
"bids/layout/tests/test_layout.py::test_nested_include_exclude",
"bids/layout/tests/test_layout.py::test_nested_include_exclude_with_regex",
"bids/layout/tests/test_layout.py::test_layout_with_derivs",
"bids/layout/tests/test_layout.py::test_layout_with_multi_derivs",
"bids/layout/tests/test_layout.py::test_query_derivatives",
"bids/layout/tests/test_layout.py::test_restricted_words_in_path",
"bids/layout/tests/test_layout.py::test_derivative_getters",
"bids/layout/tests/test_layout.py::test_get_tr",
"bids/layout/tests/test_layout.py::test_to_df",
"bids/layout/tests/test_layout.py::test_parse_file_entities",
"bids/layout/tests/test_layout.py::test_parse_file_entities_from_layout",
"bids/layout/tests/test_layout.py::test_deriv_indexing",
"bids/layout/tests/test_layout.py::test_add_config_paths",
"bids/layout/tests/test_layout.py::test_layout_in_scope",
"bids/layout/tests/test_layout.py::test_get_layouts_in_scope",
"bids/layout/tests/test_layout.py::test_get_dataset_description",
"bids/layout/tests/test_layout.py::test_indexed_file_associations",
"bids/layout/tests/test_layout.py::test_layout_save",
"bids/layout/tests/test_layout.py::test_indexing_tag_conflict",
"bids/layout/tests/test_layout.py::test_get_with_wrong_dtypes",
"bids/layout/tests/test_layout.py::test_get_with_regex_search"
] | [] | MIT License | 5,488 | 161 | [
"bids/layout/layout.py"
] |
|
steemit__hivemind-255 | a50ffb074b422956f9ccf79fb49eb0a8969b494e | 2019-09-27 17:04:29 | a50ffb074b422956f9ccf79fb49eb0a8969b494e | diff --git a/hive/indexer/accounts.py b/hive/indexer/accounts.py
index 3c8de1b..485320c 100644
--- a/hive/indexer/accounts.py
+++ b/hive/indexer/accounts.py
@@ -186,6 +186,7 @@ class Accounts:
# pull out valid profile md and delete the key
profile = safe_profile_metadata(account)
del account['json_metadata']
+ del account['posting_json_metadata']
active_at = max(account['created'],
account['last_account_update'],
diff --git a/hive/utils/account.py b/hive/utils/account.py
index fe1302d..caf3830 100644
--- a/hive/utils/account.py
+++ b/hive/utils/account.py
@@ -6,12 +6,19 @@ from hive.utils.normalize import trunc
def safe_profile_metadata(account):
"""Given an account, return sanitized profile data."""
prof = {}
+
try:
- prof = json.loads(account['json_metadata'])['profile']
- if not isinstance(prof, dict):
- prof = {}
+ # read from posting_json_metadata, if version==2
+ prof = json.loads(account['posting_json_metadata'])['profile']
+ assert isinstance(prof, dict)
+ assert 'version' in prof and prof['version'] == 2
except Exception:
- pass
+ try:
+ # fallback to json_metadata
+ prof = json.loads(account['json_metadata'])['profile']
+ assert isinstance(prof, dict)
+ except Exception:
+ prof = {}
name = str(prof['name']) if 'name' in prof else None
about = str(prof['about']) if 'about' in prof else None
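The patch's fallback order — prefer `posting_json_metadata` when its profile declares `version == 2`, otherwise fall back to the legacy `json_metadata` — can be sketched standalone (assuming the same field names; `pick_profile` is a hypothetical helper, not hivemind's API):

```python
import json

def pick_profile(account):
    """Prefer posting_json_metadata when its profile declares version 2,
    else fall back to json_metadata, else return an empty profile."""
    try:
        prof = json.loads(account['posting_json_metadata'])['profile']
        assert isinstance(prof, dict) and prof.get('version') == 2
        return prof
    except Exception:
        try:
            prof = json.loads(account['json_metadata'])['profile']
            assert isinstance(prof, dict)
            return prof
        except Exception:
            return {}

acct = {
    'json_metadata': json.dumps({'profile': {'name': 'legacy'}}),
    'posting_json_metadata': json.dumps({'profile': {'name': 'new', 'version': 2}}),
}
print(pick_profile(acct)['name'])  # new
```

The `version == 2` gate is why the test patch below expects the plain (versionless) `posting_json_metadata` profile to be ignored in favor of `json_metadata`.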
| Support for account_update2 / posting_json_metadata
I broadcasted posting_json_metadata using the new account_update2 operation (https://steemd.com/@tftest1), but unlike when broadcasting json_metadata, this does not change the information that is indexed in hive_accounts | steemit/hivemind | diff --git a/tests/utils/test_utils_account.py b/tests/utils/test_utils_account.py
index b112e11..deb0db1 100644
--- a/tests/utils/test_utils_account.py
+++ b/tests/utils/test_utils_account.py
@@ -11,8 +11,9 @@ def test_valid_account():
website='http://www.davincilife.com/',
cover_image='https://steemitimages.com/0x0/https://pbs.twimg.com/profile_banners/816255358066946050/1483447009/1500x500',
profile_image='https://www.parhlo.com/wp-content/uploads/2016/01/tmp617041537745813506.jpg',
+ version=2,
)
- account = {'name': 'foo', 'json_metadata': json.dumps(dict(profile=raw_profile))}
+ account = {'name': 'foo', 'posting_json_metadata': json.dumps(dict(profile=raw_profile))}
safe_profile = safe_profile_metadata(account)
for key, safe_value in safe_profile.items():
@@ -26,7 +27,11 @@ def test_invalid_account():
cover_image='example.com/avatar.jpg',
profile_image='https://example.com/valid-url-but-longer-than-1024-chars' + 'x' * 1024,
)
- account = {'name': 'foo', 'json_metadata': json.dumps(dict(profile=raw_profile))}
+ ignore_prof = dict(
+ name='Ignore me -- missing version:2!',
+ )
+ account = {'name': 'foo', 'json_metadata': json.dumps(dict(profile=raw_profile)),
+ 'posting_json_metadata': json.dumps(dict(profile=ignore_prof))}
safe_profile = safe_profile_metadata(account)
assert safe_profile['name'] == 'NameIsTooBigByOne...'
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_hyperlinks",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 2
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "Pipfile",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-pylint",
"pytest-asyncio",
"pytest-console-scripts"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aiocache==0.12.3
aiohappyeyeballs==2.6.1
aiohttp==3.11.14
aiopg @ https://github.com/aio-libs/aiopg/tarball/862fff97e4ae465333451a4af2a838bfaa3dd0bc#sha256=d7aca3fab28810b54b2e74c089a9bd498ffde54b3f94c867a146da2464654393
aiosignal==1.3.2
apply_defaults==0.1.6
astroid==3.3.9
async-timeout==5.0.1
attrs==25.3.0
certifi==2025.1.31
ConfigArgParse==1.7
coverage==7.8.0
dateparser==1.2.1
dill==0.3.9
exceptiongroup==1.2.2
frozenlist==1.5.0
funcsigs==1.0.2
funcy==2.0
future==1.0.0
git-pylint-commit-hook==2.6.1
greenlet==3.1.1
-e git+https://github.com/steemit/hivemind.git@a50ffb074b422956f9ccf79fb49eb0a8969b494e#egg=hivemind
humanize==4.12.2
idna==3.10
importlib_metadata==8.6.1
iniconfig==2.1.0
isort==6.0.1
Jinja2==3.1.6
jsonrpcserver==4.0.1
jsonschema==2.6.0
MarkupSafe==3.0.2
maya==0.6.1
mccabe==0.7.0
multidict==6.2.0
packaging==24.2
pdoc==15.0.1
pendulum==3.0.0
pep8==1.7.1
pipfile==0.0.2
platformdirs==4.3.7
pluggy==1.5.0
propcache==0.3.1
psycopg2-binary==2.9.10
Pygments==2.19.1
pylint==3.3.6
pytest==8.3.5
pytest-asyncio==0.26.0
pytest-console-scripts==1.4.1
pytest-cov==6.0.0
pytest-pylint==0.21.0
python-dateutil==2.9.0.post0
pytz==2025.2
regex==2024.11.6
six==1.17.0
snaptime==0.2.4
SQLAlchemy==2.0.40
time-machine==2.16.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli==2.2.1
tomlkit==0.13.2
toolz==1.0.0
typing_extensions==4.13.0
tzdata==2025.2
tzlocal==5.3.1
ujson==5.10.0
urllib3==2.3.0
yapf==0.43.0
yarl==1.18.3
zipp==3.21.0
| name: hivemind
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- pipfile=0.0.2=py_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- aiocache==0.12.3
- aiohappyeyeballs==2.6.1
- aiohttp==3.11.14
- aiopg==0.15.0
- aiosignal==1.3.2
- apply-defaults==0.1.6
- astroid==3.3.9
- async-timeout==5.0.1
- attrs==25.3.0
- certifi==2025.1.31
- configargparse==1.7
- coverage==7.8.0
- dateparser==1.2.1
- dill==0.3.9
- exceptiongroup==1.2.2
- frozenlist==1.5.0
- funcsigs==1.0.2
- funcy==2.0
- future==1.0.0
- git-pylint-commit-hook==2.6.1
- greenlet==3.1.1
- humanize==4.12.2
- idna==3.10
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- isort==6.0.1
- jinja2==3.1.6
- jsonrpcserver==4.0.1
- jsonschema==2.6.0
- markupsafe==3.0.2
- maya==0.6.1
- mccabe==0.7.0
- multidict==6.2.0
- packaging==24.2
- pdoc==15.0.1
- pendulum==3.0.0
- pep8==1.7.1
- platformdirs==4.3.7
- pluggy==1.5.0
- propcache==0.3.1
- psycopg2-binary==2.9.10
- pygments==2.19.1
- pylint==3.3.6
- pytest==8.3.5
- pytest-asyncio==0.26.0
- pytest-console-scripts==1.4.1
- pytest-cov==6.0.0
- pytest-pylint==0.21.0
- python-dateutil==2.9.0.post0
- pytz==2025.2
- regex==2024.11.6
- six==1.17.0
- snaptime==0.2.4
- sqlalchemy==2.0.40
- time-machine==2.16.0
- tomli==2.2.1
- tomlkit==0.13.2
- toolz==1.0.0
- typing-extensions==4.13.0
- tzdata==2025.2
- tzlocal==5.3.1
- ujson==5.10.0
- urllib3==2.3.0
- yapf==0.43.0
- yarl==1.18.3
- zipp==3.21.0
prefix: /opt/conda/envs/hivemind
| [
"tests/utils/test_utils_account.py::test_valid_account"
] | [] | [
"tests/utils/test_utils_account.py::test_invalid_account"
] | [] | MIT License | 5,490 | 401 | [
"hive/indexer/accounts.py",
"hive/utils/account.py"
] |
|
python-pillow__Pillow-4094 | 26babb525a6a83393dcb88001ae20e603f7aa7fb | 2019-09-27 20:16:57 | 26babb525a6a83393dcb88001ae20e603f7aa7fb | diff --git a/src/PIL/SpiderImagePlugin.py b/src/PIL/SpiderImagePlugin.py
index 5663a5b53..f1cae4d9f 100644
--- a/src/PIL/SpiderImagePlugin.py
+++ b/src/PIL/SpiderImagePlugin.py
@@ -236,7 +236,7 @@ def loadImageSeries(filelist=None):
def makeSpiderHeader(im):
nsam, nrow = im.size
lenbyt = nsam * 4 # There are labrec records in the header
- labrec = 1024 / lenbyt
+ labrec = int(1024 / lenbyt)
if 1024 % lenbyt != 0:
labrec += 1
labbyt = labrec * lenbyt
| Spider format header bug for random image size
### What did you do?
I am trying to save a `float` image in `SPIDER` format
### What did you expect to happen?
I expected it to work for any image size.
### What actually happened?
The saved spider file has a corrupt header when the width of the image is not a power of 2.
### What are your OS, Python and Pillow versions?
* OS: Windows 10
* Python: 3.6
* Pillow: 6.1.1
```python
from PIL import Image
import numpy as np
filename= "im.spi"
width = 100 # change it to any natural number not a power of 2
im = np.random.random((64, width))-1
pil_image = Image.fromarray(im, 'F')
pil_image.save(filename, format='SPIDER')
im2 = Image.open(filename)
print(im2.size)
```
I was able to trace the bug to [this](https://github.com/python-pillow/Pillow/blob/master/src/PIL/SpiderImagePlugin.py#L239) line.
Changing the line from `labrec = 1024 / lenbyt` to `labrec = int(1024 / lenbyt)` fixes the bug for me.
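The arithmetic behind the reporter's fix: in Python 3, `/` yields a float, so `labrec` — and every header field derived from it — becomes a float unless truncated. A quick check of the header-size computation from `makeSpiderHeader` (extracted here as a standalone helper):

```python
def header_sizes(width):
    lenbyt = width * 4            # bytes per header record
    labrec = int(1024 / lenbyt)   # records in the header; the fix truncates to int
    if 1024 % lenbyt != 0:
        labrec += 1               # round up when 1024 is not a multiple of lenbyt
    labbyt = labrec * lenbyt      # total header bytes, a multiple of the record size
    return labrec, labbyt

# width=100 -> lenbyt=400; 1024/400 = 2.56, so labrec must round up to 3 (not stay 2.56)
print(header_sizes(100))  # (3, 1200)
```

Without the `int()`, `labrec` stays `2.56` for width 100 and the float propagates into `labbyt`, corrupting the written header — which is why widths whose record size divides 1024 evenly happened to work.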
| python-pillow/Pillow | diff --git a/Tests/test_file_spider.py b/Tests/test_file_spider.py
index 8a1eb6637..340208486 100644
--- a/Tests/test_file_spider.py
+++ b/Tests/test_file_spider.py
@@ -1,4 +1,5 @@
import tempfile
+from io import BytesIO
from PIL import Image, ImageSequence, SpiderImagePlugin
@@ -117,3 +118,14 @@ class TestImageSpider(PillowTestCase):
for i, frame in enumerate(ImageSequence.Iterator(im)):
if i > 1:
self.fail("Non-stack DOS file test failed")
+
+ # for issue #4093
+ def test_odd_size(self):
+ data = BytesIO()
+ width = 100
+ im = Image.new("F", (width, 64))
+ im.save(data, format="SPIDER")
+
+ data.seek(0)
+ im2 = Image.open(data)
+ self.assert_image_equal(im, im2)
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 6.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": null,
"pre_install": [
"apt-get update",
"apt-get install -y gcc libjpeg-dev zlib1g-dev libtiff5-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev libharfbuzz-dev libfribidi-dev libxcb1-dev"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.16
babel==2.17.0
black==25.1.0
blessed==1.20.0
build==1.2.2.post1
certifi==2025.1.31
charset-normalizer==3.4.1
check-manifest==0.50
click==8.1.8
coverage==7.8.0
coveralls==4.0.1
docopt==0.6.2
docutils==0.21.2
exceptiongroup==1.2.2
idna==3.10
imagesize==1.4.1
importlib_metadata==8.6.1
iniconfig==2.1.0
jarn.viewdoc==2.7
Jinja2==3.1.6
MarkupSafe==3.0.2
mypy-extensions==1.0.0
olefile==0.47
packaging==24.2
pathspec==0.12.1
-e git+https://github.com/python-pillow/Pillow.git@26babb525a6a83393dcb88001ae20e603f7aa7fb#egg=Pillow
platformdirs==4.3.7
pluggy==1.5.0
pycodestyle==2.13.0
pyflakes==3.3.1
Pygments==2.19.1
pyproject_hooks==1.2.0
pyroma==4.2
pytest==8.3.5
pytest-cov==6.0.0
requests==2.32.3
six==1.17.0
snowballstemmer==2.2.0
Sphinx==7.4.7
sphinx-rtd-theme==3.0.2
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
tomli==2.2.1
trove-classifiers==2025.3.19.19
typing_extensions==4.13.0
urllib3==2.3.0
wcwidth==0.2.13
zipp==3.21.0
| name: Pillow
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.16
- babel==2.17.0
- black==25.1.0
- blessed==1.20.0
- build==1.2.2.post1
- certifi==2025.1.31
- charset-normalizer==3.4.1
- check-manifest==0.50
- click==8.1.8
- coverage==7.8.0
- coveralls==4.0.1
- docopt==0.6.2
- docutils==0.21.2
- exceptiongroup==1.2.2
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- jarn-viewdoc==2.7
- jinja2==3.1.6
- markupsafe==3.0.2
- mypy-extensions==1.0.0
- olefile==0.47
- packaging==24.2
- pathspec==0.12.1
- platformdirs==4.3.7
- pluggy==1.5.0
- pycodestyle==2.13.0
- pyflakes==3.3.1
- pygments==2.19.1
- pyproject-hooks==1.2.0
- pyroma==4.2
- pytest==8.3.5
- pytest-cov==6.0.0
- requests==2.32.3
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==7.4.7
- sphinx-rtd-theme==3.0.2
- sphinxcontrib-applehelp==2.0.0
- sphinxcontrib-devhelp==2.0.0
- sphinxcontrib-htmlhelp==2.1.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==2.0.0
- sphinxcontrib-serializinghtml==2.0.0
- tomli==2.2.1
- trove-classifiers==2025.3.19.19
- typing-extensions==4.13.0
- urllib3==2.3.0
- wcwidth==0.2.13
- zipp==3.21.0
prefix: /opt/conda/envs/Pillow
| [
"Tests/test_file_spider.py::TestImageSpider::test_odd_size"
] | [] | [
"Tests/test_file_spider.py::TestImageSpider::test_invalid_file",
"Tests/test_file_spider.py::TestImageSpider::test_isInt_not_a_number",
"Tests/test_file_spider.py::TestImageSpider::test_isSpiderImage",
"Tests/test_file_spider.py::TestImageSpider::test_loadImageSeries",
"Tests/test_file_spider.py::TestImageSpider::test_loadImageSeries_no_input",
"Tests/test_file_spider.py::TestImageSpider::test_n_frames",
"Tests/test_file_spider.py::TestImageSpider::test_nonstack_dos",
"Tests/test_file_spider.py::TestImageSpider::test_nonstack_file",
"Tests/test_file_spider.py::TestImageSpider::test_sanity",
"Tests/test_file_spider.py::TestImageSpider::test_save",
"Tests/test_file_spider.py::TestImageSpider::test_tell",
"Tests/test_file_spider.py::TestImageSpider::test_tempfile",
"Tests/test_file_spider.py::TestImageSpider::test_unclosed_file"
] | [] | MIT-CMU License | 5,491 | 194 | [
"src/PIL/SpiderImagePlugin.py"
] |
|
pre-commit__pre-commit-hooks-415 | 277f875bc517bbde5f767dcb0919e762c33759a9 | 2019-09-28 18:44:20 | 8884b6365b2bd8a0fdab1dfd80938a28141e7f54 | diff --git a/pre_commit_hooks/requirements_txt_fixer.py b/pre_commit_hooks/requirements_txt_fixer.py
index 4575975..eff7935 100644
--- a/pre_commit_hooks/requirements_txt_fixer.py
+++ b/pre_commit_hooks/requirements_txt_fixer.py
@@ -40,11 +40,16 @@ class Requirement(object):
def fix_requirements(f): # type: (IO[bytes]) -> int
requirements = [] # type: List[Requirement]
- before = tuple(f)
+ before = list(f)
after = [] # type: List[bytes]
before_string = b''.join(before)
+ # adds new line in case one is missing
+ # AND a change to the requirements file is needed regardless:
+ if before and not before[-1].endswith(b'\n'):
+ before[-1] += b'\n'
+
# If the file is empty (i.e. only whitespace/newlines) exit early
if before_string.strip() == b'':
return PASS
| requirements-txt-fixer corrupts file if no new line at end
a file with
```
a
b
c
```
(no new line)
will get "fixed" to:
```
a
bc
```
I was able to debug and identify the issue. I'll be happy to also PR a fix, but I was wondering if there is really a use for it? I've not seen any input without a newline at the end in the test cases, and as a workaround I can add `end-of-file-fixer` before it.
| pre-commit/pre-commit-hooks | diff --git a/tests/requirements_txt_fixer_test.py b/tests/requirements_txt_fixer_test.py
index c7c6e47..2e2eab6 100644
--- a/tests/requirements_txt_fixer_test.py
+++ b/tests/requirements_txt_fixer_test.py
@@ -15,6 +15,9 @@ from pre_commit_hooks.requirements_txt_fixer import Requirement
(b'foo\n# comment at end\n', PASS, b'foo\n# comment at end\n'),
(b'foo\nbar\n', FAIL, b'bar\nfoo\n'),
(b'bar\nfoo\n', PASS, b'bar\nfoo\n'),
+ (b'a\nc\nb\n', FAIL, b'a\nb\nc\n'),
+ (b'a\nc\nb', FAIL, b'a\nb\nc\n'),
+ (b'a\nb\nc', FAIL, b'a\nb\nc\n'),
(
b'#comment1\nfoo\n#comment2\nbar\n',
FAIL,
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 1
} | 2.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements-dev.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
cfgv==3.3.1
coverage==6.2
distlib==0.3.9
filelock==3.4.1
flake8==5.0.4
identify==2.4.4
importlib-metadata==4.2.0
importlib-resources==5.2.3
iniconfig==1.1.1
mccabe==0.7.0
mock==5.2.0
nodeenv==1.6.0
packaging==21.3
platformdirs==2.4.0
pluggy==1.0.0
pre-commit==2.17.0
-e git+https://github.com/pre-commit/pre-commit-hooks.git@277f875bc517bbde5f767dcb0919e762c33759a9#egg=pre_commit_hooks
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
pyparsing==3.1.4
pytest==7.0.1
PyYAML==6.0.1
ruamel.yaml==0.18.3
ruamel.yaml.clib==0.2.8
six==1.17.0
toml==0.10.2
tomli==1.2.3
typing_extensions==4.1.1
virtualenv==20.16.2
zipp==3.6.0
| name: pre-commit-hooks
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- cfgv==3.3.1
- coverage==6.2
- distlib==0.3.9
- filelock==3.4.1
- flake8==5.0.4
- identify==2.4.4
- importlib-metadata==4.2.0
- importlib-resources==5.2.3
- iniconfig==1.1.1
- mccabe==0.7.0
- mock==5.2.0
- nodeenv==1.6.0
- packaging==21.3
- platformdirs==2.4.0
- pluggy==1.0.0
- pre-commit==2.17.0
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- pyyaml==6.0.1
- ruamel-yaml==0.18.3
- ruamel-yaml-clib==0.2.8
- six==1.17.0
- toml==0.10.2
- tomli==1.2.3
- typing-extensions==4.1.1
- virtualenv==20.16.2
- zipp==3.6.0
prefix: /opt/conda/envs/pre-commit-hooks
| [
"tests/requirements_txt_fixer_test.py::test_integration[a\\nc\\nb-1-a\\nb\\nc\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[a\\nb\\nc-1-a\\nb\\nc\\n]"
] | [] | [
"tests/requirements_txt_fixer_test.py::test_integration[-0-]",
"tests/requirements_txt_fixer_test.py::test_integration[\\n-0-\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[#",
"tests/requirements_txt_fixer_test.py::test_integration[foo\\n#",
"tests/requirements_txt_fixer_test.py::test_integration[foo\\nbar\\n-1-bar\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[bar\\nfoo\\n-0-bar\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[a\\nc\\nb\\n-1-a\\nb\\nc\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[#comment1\\nfoo\\n#comment2\\nbar\\n-1-#comment2\\nbar\\n#comment1\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[#comment1\\nbar\\n#comment2\\nfoo\\n-0-#comment1\\nbar\\n#comment2\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[#comment\\n\\nfoo\\nbar\\n-1-#comment\\n\\nbar\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[#comment\\n\\nbar\\nfoo\\n-0-#comment\\n\\nbar\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[\\nfoo\\nbar\\n-1-bar\\n\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[\\nbar\\nfoo\\n-0-\\nbar\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[pyramid==1\\npyramid-foo==2\\n-0-pyramid==1\\npyramid-foo==2\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[ocflib\\nDjango\\nPyMySQL\\n-1-Django\\nocflib\\nPyMySQL\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[-e",
"tests/requirements_txt_fixer_test.py::test_integration[bar\\npkg-resources==0.0.0\\nfoo\\n-1-bar\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_integration[foo\\npkg-resources==0.0.0\\nbar\\n-1-bar\\nfoo\\n]",
"tests/requirements_txt_fixer_test.py::test_requirement_object"
] | [] | MIT License | 5,495 | 243 | [
"pre_commit_hooks/requirements_txt_fixer.py"
] |
|
iterative__dvc-2550 | 25ff08fdc8046d9a7e609c5c32e08d4919c02f9f | 2019-09-29 15:39:08 | 22742618081c6b8bf8ac672459dc59654f2d15f0 | efiop: @Suor Please take a look. | diff --git a/dvc/analytics.py b/dvc/analytics.py
index e4b0dc425..95fda9b9a 100644
--- a/dvc/analytics.py
+++ b/dvc/analytics.py
@@ -10,6 +10,7 @@ import errno
import logging
from dvc import __version__
+from dvc.utils import env2bool
logger = logging.getLogger(__name__)
@@ -229,7 +230,7 @@ class Analytics(object):
def is_enabled(cmd=None):
from dvc.command.daemon import CmdDaemonBase
- if os.getenv("DVC_TEST"):
+ if env2bool("DVC_TEST"):
return False
if isinstance(cmd, CmdDaemonBase):
diff --git a/dvc/lock.py b/dvc/lock.py
index 3ab223adf..eebd8db2e 100644
--- a/dvc/lock.py
+++ b/dvc/lock.py
@@ -22,7 +22,38 @@ class LockError(DvcException):
if is_py3:
+ import socket
+
import flufl.lock
+ from funcy import monkey
+
+ # Workaround for slow and obsoleted gethostbyaddr used in
+ # `socket.getfqdn()`. See [1], [2] and [3] for more info.
+ #
+ # [1] https://bugs.python.org/issue5004
+ # [2] https://github.com/iterative/dvc/issues/2582
+ # [3] https://gitlab.com/warsaw/flufl.lock/merge_requests/12
+ @monkey(socket)
+ def getfqdn(name=""):
+ """Get fully qualified domain name from name.
+
+ An empty argument is interpreted as meaning the local host.
+ """
+ name = name.strip()
+ if not name or name == "0.0.0.0":
+ name = socket.gethostname()
+ try:
+ addrs = socket.getaddrinfo(
+ name, None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME
+ )
+ except socket.error:
+ pass
+ else:
+ for addr in addrs:
+ if addr[3]:
+ name = addr[3]
+ break
+ return name
class Lock(flufl.lock.Lock):
"""Class for dvc repo lock.
diff --git a/dvc/progress.py b/dvc/progress.py
index f64459601..588211eb8 100644
--- a/dvc/progress.py
+++ b/dvc/progress.py
@@ -1,9 +1,11 @@
"""Manages progress bars for dvc repo."""
from __future__ import print_function
import logging
+import sys
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor
from funcy import merge
+from dvc.utils import env2bool
logger = logging.getLogger(__name__)
@@ -54,6 +56,7 @@ class Tqdm(tqdm):
leave=False,
bar_format=None,
bytes=False, # pylint: disable=W0622
+ file=None,
**kwargs
):
"""
@@ -62,6 +65,9 @@ class Tqdm(tqdm):
desc : persists after `close()`
level : effective logging level for determining `disable`;
used only if `disable` is unspecified
+ disable : If (default: None), will be determined by logging level.
+ May be overridden to `True` due to non-TTY status.
+ Skip override by specifying env var `DVC_IGNORE_ISATTY`.
kwargs : anything accepted by `tqdm.tqdm()`
"""
kwargs = kwargs.copy()
@@ -71,9 +77,19 @@ class Tqdm(tqdm):
unit="B", unit_scale=True, unit_divisor=1024, miniters=1
)
kwargs = merge(bytes_defaults, kwargs)
+ if file is None:
+ file = sys.stderr
self.desc_persist = desc
+ # auto-disable based on `logger.level`
if disable is None:
disable = logger.getEffectiveLevel() > level
+ # auto-disable based on TTY
+ if (
+ not disable
+ and not env2bool("DVC_IGNORE_ISATTY")
+ and hasattr(file, "isatty")
+ ):
+ disable = file.isatty()
super(Tqdm, self).__init__(
iterable=iterable,
disable=disable,
diff --git a/dvc/remote/local.py b/dvc/remote/local.py
index 5c73a4729..384077180 100644
--- a/dvc/remote/local.py
+++ b/dvc/remote/local.py
@@ -1,15 +1,14 @@
from __future__ import unicode_literals
-
-from dvc.scheme import Schemes
-from dvc.utils.compat import str, fspath_py35, open
-
import os
import stat
import errno
-from shortuuid import uuid
import logging
from functools import partial
+from shortuuid import uuid
+
+from dvc.scheme import Schemes
+from dvc.utils.compat import str, fspath_py35, open
from dvc.system import System
from dvc.remote.base import (
RemoteBASE,
diff --git a/dvc/system.py b/dvc/system.py
index c7c7cf8e4..eaa3deaf2 100644
--- a/dvc/system.py
+++ b/dvc/system.py
@@ -1,11 +1,10 @@
from __future__ import unicode_literals
-
-from dvc.utils.compat import str, open, fspath
-
import os
import errno
import shutil
+from dvc.utils.compat import str, open, fspath
+
class System(object):
@staticmethod
diff --git a/dvc/updater.py b/dvc/updater.py
index fc1e8d58e..21251e4da 100644
--- a/dvc/updater.py
+++ b/dvc/updater.py
@@ -9,7 +9,7 @@ from packaging import version
from dvc import __version__
from dvc.lock import Lock, LockError
-from dvc.utils import is_binary, boxify
+from dvc.utils import is_binary, boxify, env2bool
logger = logging.getLogger(__name__)
@@ -45,7 +45,7 @@ class Updater(object): # pragma: no cover
logger.debug(msg.format(self.lock.lockfile, action))
def check(self):
- if os.getenv("CI") or os.getenv("DVC_TEST"):
+ if os.getenv("CI") or env2bool("DVC_TEST"):
return
self._with_lock(self._check, "checking")
diff --git a/dvc/utils/__init__.py b/dvc/utils/__init__.py
index 594d6cce6..538b7a97e 100644
--- a/dvc/utils/__init__.py
+++ b/dvc/utils/__init__.py
@@ -462,6 +462,16 @@ def relpath(path, start=os.curdir):
return os.path.relpath(path, start)
+def env2bool(var, undefined=False):
+ """
+ undefined: return value if env var is unset
+ """
+ var = os.getenv(var, None)
+ if var is None:
+ return undefined
+ return bool(re.search("1|y|yes|true", var, flags=re.I))
+
+
def resolve_output(inp, out):
from dvc.utils.compat import urlparse
diff --git a/dvc/version.py b/dvc/version.py
index 2b88ecb00..84b1f416e 100644
--- a/dvc/version.py
+++ b/dvc/version.py
@@ -7,7 +7,7 @@ import os
import subprocess
-_BASE_VERSION = "0.63.2"
+_BASE_VERSION = "0.63.3"
def _generate_version(base_version):
| suppress progress when `isatty`
from https://github.com/iterative/dvc/issues/2539#issuecomment-536311782, check `isatty` before attempting to print progressbars | iterative/dvc | diff --git a/tests/__main__.py b/tests/__main__.py
index 2068d7f7d..ba115ae68 100644
--- a/tests/__main__.py
+++ b/tests/__main__.py
@@ -7,7 +7,7 @@ REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
os.chdir(REPO_ROOT)
os.putenv(
- "PATH", "{}:{}".format(os.path.join(REPO_ROOT, "bin"), os.getenv("PATH"))
+ "PATH", ":".join([os.path.join(REPO_ROOT, "bin"), os.getenv("PATH")])
)
os.putenv("DVC_HOME", REPO_ROOT)
diff --git a/tests/conftest.py b/tests/conftest.py
index e9855056d..9ca00335f 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -13,6 +13,8 @@ from .basic_env import TestDirFixture, TestDvcGitFixture, TestGitFixture
# Prevent updater and analytics from running their processes
os.environ[cast_bytes_py2("DVC_TEST")] = cast_bytes_py2("true")
+# Ensure progress output even when not outputting to raw sys.stderr console
+os.environ[cast_bytes_py2("DVC_IGNORE_ISATTY")] = cast_bytes_py2("true")
@pytest.fixture(autouse=True)
diff --git a/tests/func/test_data_cloud.py b/tests/func/test_data_cloud.py
index 11c9eceed..6633a4f81 100644
--- a/tests/func/test_data_cloud.py
+++ b/tests/func/test_data_cloud.py
@@ -13,6 +13,7 @@ import pytest
from mock import patch
from dvc.utils.compat import str
+from dvc.utils import env2bool
from dvc.main import main
from dvc.config import Config
from dvc.data_cloud import DataCloud
@@ -57,11 +58,9 @@ os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = TEST_GCP_CREDS_FILE
def _should_test_aws():
- dvc_test_aws = os.getenv("DVC_TEST_AWS")
- if dvc_test_aws == "true":
- return True
- elif dvc_test_aws == "false":
- return False
+ do_test = env2bool("DVC_TEST_AWS", undefined=None)
+ if do_test is not None:
+ return do_test
if os.getenv("AWS_ACCESS_KEY_ID") and os.getenv("AWS_SECRET_ACCESS_KEY"):
return True
@@ -70,8 +69,9 @@ def _should_test_aws():
def _should_test_gcp():
- if os.getenv("DVC_TEST_GCP") == "true":
- return True
+ do_test = env2bool("DVC_TEST_GCP", undefined=None)
+ if do_test is not None:
+ return do_test
if not os.path.exists(TEST_GCP_CREDS_FILE):
return False
@@ -92,10 +92,9 @@ def _should_test_gcp():
def _should_test_azure():
- if os.getenv("DVC_TEST_AZURE") == "true":
- return True
- elif os.getenv("DVC_TEST_AZURE") == "false":
- return False
+ do_test = env2bool("DVC_TEST_AZURE", undefined=None)
+ if do_test is not None:
+ return do_test
return os.getenv("AZURE_STORAGE_CONTAINER_NAME") and os.getenv(
"AZURE_STORAGE_CONNECTION_STRING"
@@ -103,10 +102,9 @@ def _should_test_azure():
def _should_test_oss():
- if os.getenv("DVC_TEST_OSS") == "true":
- return True
- elif os.getenv("DVC_TEST_OSS") == "false":
- return False
+ do_test = env2bool("DVC_TEST_OSS", undefined=None)
+ if do_test is not None:
+ return do_test
return (
os.getenv("OSS_ENDPOINT")
@@ -116,8 +114,9 @@ def _should_test_oss():
def _should_test_ssh():
- if os.getenv("DVC_TEST_SSH") == "true":
- return True
+ do_test = env2bool("DVC_TEST_SSH", undefined=None)
+ if do_test is not None:
+ return do_test
# FIXME: enable on windows
if os.name == "nt":
diff --git a/tests/unit/test_progress.py b/tests/unit/test_progress.py
index 8149300c0..36ba35e44 100644
--- a/tests/unit/test_progress.py
+++ b/tests/unit/test_progress.py
@@ -1,17 +1,37 @@
import logging
from dvc.progress import Tqdm
+from dvc.utils import env2bool
+import sys
+import mock
-def test_quiet(caplog, capsys):
+def test_quiet_logging(caplog, capsys):
with caplog.at_level(logging.CRITICAL, logger="dvc"):
for _ in Tqdm(range(10)):
pass
out_err = capsys.readouterr()
assert out_err.out == ""
assert out_err.err == ""
+
+
+def test_quiet_notty(caplog, capsys):
with caplog.at_level(logging.INFO, logger="dvc"):
for _ in Tqdm(range(10)):
pass
out_err = capsys.readouterr()
assert out_err.out == ""
- assert "0/10" in out_err.err
+ if env2bool("DVC_IGNORE_ISATTY"):
+ assert "0/10" in out_err.err
+ else:
+ assert out_err.err == ""
+
+
+def test_default(caplog, capsys):
+ with caplog.at_level(logging.INFO, logger="dvc"):
+ # simulate interactive terminal
+ with mock.patch.object(sys.stderr, "isatty", return_value=True):
+ for _ in Tqdm(range(10)):
+ pass
+ out_err = capsys.readouterr()
+ assert out_err.out == ""
+ assert "0/10" in out_err.err
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 8
} | 0.63 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-timeout",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"flaky",
"mock",
"xmltodict",
"awscli",
"google-compute-engine",
"Pygments",
"collective.checkdocs",
"flake8",
"psutil",
"flake8-docstrings",
"pydocstyle",
"jaraco.windows",
"mock-ssh-server",
"moto",
"rangehttpserver"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | null | null | [
"tests/unit/test_progress.py::test_quiet_logging",
"tests/unit/test_progress.py::test_default",
"tests/unit/test_progress.py::test_quiet_notty"
] | [
"tests/func/test_add.py::TestAdd::test",
"tests/func/test_cli.py::TestAdd::test",
"tests/func/test_cache.py::TestSharedCacheDir::test",
"tests/func/test_cli.py::TestGC::test",
"tests/func/test_analytics.py::TestAnalyticsGit::test_send",
"tests/func/test_add.py::TestAdd::test_unicode",
"tests/func/test_add.py::TestAddUnupportedFile::test",
"tests/func/test_cache.py::TestCacheLinkType::test",
"tests/func/test_cli.py::TestGCMultipleDvcRepos::test",
"tests/func/test_add.py::TestAddDirectory::test",
"tests/func/test_analytics.py::TestAnalyticsGit::test_send_failed",
"tests/func/test_cache.py::TestCmdCacheDir::test",
"tests/func/test_cli.py::TestConfig::test",
"tests/func/test_analytics.py::TestAnalyticsDvc::test",
"tests/func/test_add.py::TestAddDirectoryRecursive::test",
"tests/func/test_cli.py::TestCheckout::test",
"tests/func/test_cache.py::TestCmdCacheDir::test_abs_path",
"tests/func/test_analytics.py::TestAnalyticsDvc::test_send",
"tests/func/test_add.py::TestAddCmdDirectoryRecursive::test",
"tests/func/test_cli.py::TestFindRoot::test",
"tests/func/test_cache.py::TestCmdCacheDir::test_relative_path",
"tests/func/test_analytics.py::TestAnalyticsDvc::test_send_disabled",
"tests/func/test_analytics.py::TestAnalyticsDvc::test_send_failed",
"tests/func/test_cache.py::TestShouldCacheBeReflinkOrCopyByDefault::test",
"tests/func/test_add.py::TestAddCmdDirectoryRecursive::test_warn_about_large_directories",
"tests/func/test_add.py::TestAddDirectoryWithForwardSlash::test",
"tests/func/test_add.py::TestAddDirWithExistingCache::test",
"tests/func/test_config.py::TestConfigCLI::test",
"tests/func/test_checkout.py::TestCheckout::test",
"tests/func/test_add.py::TestAddModifiedDir::test",
"tests/func/test_config.py::TestConfigCLI::test_local",
"tests/func/test_config.py::TestConfigCLI::test_non_existing",
"tests/func/test_checkout.py::TestCheckoutSingleStage::test",
"tests/func/test_config.py::TestConfigCLI::test_root",
"tests/func/test_add.py::TestAddExternalLocalFile::test",
"tests/func/test_add.py::TestAddLocalRemoteFile::test",
"tests/func/test_data_cloud.py::TestDataCloud::test",
"tests/func/test_checkout.py::TestCheckoutCorruptedCacheFile::test",
"tests/func/test_checkout.py::TestCheckoutCorruptedCacheDir::test",
"tests/func/test_add.py::TestCmdAdd::test",
"tests/func/test_data_cloud.py::TestDataCloudBase::test",
"tests/func/test_data_cloud.py::TestRemoteS3::test",
"tests/func/test_checkout.py::TestCmdCheckout::test",
"tests/func/test_data_cloud.py::TestRemoteGS::test",
"tests/func/test_add.py::TestDoubleAddUnchanged::test_dir",
"tests/func/test_add.py::TestDoubleAddUnchanged::test_file",
"tests/func/test_checkout.py::TestRemoveFilesWhenCheckout::test",
"tests/func/test_data_cloud.py::TestRemoteAZURE::test",
"tests/func/test_add.py::TestShouldUpdateStateEntryForFileAfterAdd::test",
"tests/func/test_checkout.py::TestCheckoutCleanWorkingDir::test",
"tests/func/test_data_cloud.py::TestRemoteOSS::test",
"tests/func/test_add.py::TestShouldUpdateStateEntryForDirectoryAfterAdd::test",
"tests/func/test_data_cloud.py::TestRemoteLOCAL::test",
"tests/func/test_checkout.py::TestCheckoutCleanWorkingDir::test_force",
"tests/func/test_add.py::TestAddCommit::test",
"tests/func/test_data_cloud.py::TestRemoteSSH::test",
"tests/func/test_add.py::TestShouldCollectDirCacheOnlyOnce::test",
"tests/func/test_add.py::TestShouldAddDataFromExternalSymlink::test",
"tests/func/test_data_cloud.py::TestRemoteSSHMocked::test",
"tests/func/test_checkout.py::TestCheckoutSelectiveRemove::test",
"tests/func/test_add.py::TestShouldAddDataFromInternalSymlink::test",
"tests/func/test_data_cloud.py::TestRemoteHDFS::test",
"tests/func/test_add.py::TestShouldPlaceStageInDataDirIfRepositoryBelowSymlink::test",
"tests/func/test_checkout.py::TestGitIgnoreBasic::test",
"tests/func/test_checkout.py::TestGitIgnoreWhenCheckout::test",
"tests/func/test_add.py::TestShouldThrowProperExceptionOnCorruptedStageFile::test",
"tests/func/test_data_cloud.py::TestDataCloudCLIBase::test",
"tests/func/test_add.py::TestAddFilename::test",
"tests/func/test_data_cloud.py::TestRemoteLOCALCLI::test",
"tests/func/test_checkout.py::TestCheckoutMissingMd5InStageFile::test",
"tests/func/test_checkout.py::TestCheckoutEmptyDir::test",
"tests/func/test_data_cloud.py::TestRemoteSSHCLI::test",
"tests/func/test_checkout.py::TestCheckoutNotCachedFile::test",
"tests/func/test_add.py::TestShouldNotTrackGitInternalFiles::test",
"tests/func/test_data_cloud.py::TestRemoteHDFSCLI::test",
"tests/func/test_add.py::TestAddUnprotected::test",
"tests/func/test_data_cloud.py::TestRemoteS3CLI::test",
"tests/func/test_data_cloud.py::TestRemoteGSCLI::test",
"tests/func/test_data_cloud.py::TestRemoteAZURECLI::test",
"tests/func/test_checkout.py::TestCheckoutWithDeps::test",
"tests/func/test_data_cloud.py::TestRemoteOSSCLI::test",
"tests/func/test_data_cloud.py::TestDataCloudErrorCLI::test_error",
"tests/func/test_checkout.py::TestCheckoutDirectory::test",
"tests/func/test_checkout.py::TestCheckoutHook::test",
"tests/func/test_checkout.py::TestCheckoutSuggestGit::test",
"tests/func/test_data_cloud.py::TestWarnOnOutdatedStage::test",
"tests/func/test_checkout.py::TestCheckoutTargetRecursiveShouldNotRemoveOtherUsedFiles::test",
"tests/func/test_data_cloud.py::TestRecursiveSyncOperations::test",
"tests/func/test_checkout.py::TestCheckoutRecursiveNotDirectory::test",
"tests/func/test_data_cloud.py::TestCheckSumRecalculation::test",
"tests/func/test_data_cloud.py::TestShouldWarnOnNoChecksumInLocalAndRemoteCache::test",
"tests/func/test_checkout.py::TestCheckoutMovedCacheDirWithSymlinks::test",
"tests/func/test_analytics.py::TestAnalytics::test",
"tests/func/test_analytics.py::TestAnalytics::test_send",
"tests/func/test_analytics.py::TestAnalytics::test_send_failed",
"tests/func/test_analytics.py::TestAnalyticsGit::test",
"tests/func/test_cache.py::TestCache::test_all",
"tests/func/test_cache.py::TestCache::test_get",
"tests/func/test_cache.py::TestCacheLoadBadDirCache::test",
"tests/func/test_cli.py::TestArgParse::test",
"tests/func/test_import_url.py::TestImportFilename::test",
"tests/func/test_cache.py::TestExternalCacheDir::test",
"tests/func/test_cli.py::TestRun::test",
"tests/func/test_cli.py::TestPull::test",
"tests/func/test_cache.py::TestExternalCacheDir::test_remote_references",
"tests/func/test_cli.py::TestPush::test",
"tests/func/test_cli.py::TestStatus::test",
"tests/func/test_cli.py::TestRepro::test",
"tests/func/test_diff.py::TestDiff::test",
"tests/func/test_cli.py::TestRemove::test",
"tests/func/test_init.py::TestInit::test_api",
"tests/func/test_diff.py::TestDiffRepo::test",
"tests/func/test_init.py::TestInit::test_cli",
"tests/func/test_diff.py::TestDiffCmdLine::test",
"tests/func/test_pipeline.py::TestPipelineListSingle::test_dot_outs",
"tests/func/test_init.py::TestDoubleInit::test",
"tests/func/test_diff.py::TestDiffDir::test",
"tests/func/test_pipeline.py::TestPipelineListSingle::test_outs",
"tests/func/test_diff.py::TestDiffDirRepo::test",
"tests/func/test_init.py::TestDoubleInit::test_api",
"tests/func/test_pipeline.py::TestDvcRepoPipeline::test_no_stages",
"tests/func/test_remote.py::TestRemote::test",
"tests/func/test_init.py::TestDoubleInit::test_cli",
"tests/func/test_diff.py::TestDiffDirRepoDeletedFile::test",
"tests/func/test_remote.py::TestRemote::test_overwrite",
"tests/func/test_init.py::TestInitNoSCMFail::test_cli",
"tests/func/test_diff.py::TestDiffFileNotFound::test",
"tests/func/test_init.py::TestInitNoSCM::test_api",
"tests/func/test_remote.py::TestRemote::test_referencing_other_remotes",
"tests/func/test_diff.py::TestDiffModifiedFile::test",
"tests/func/test_remote.py::TestRemote::test_relative_path",
"tests/func/test_remote.py::TestRemoteRemoveDefault::test",
"tests/func/test_diff.py::TestDiffDirWithFile::test",
"tests/func/test_init.py::TestInitNoSCM::test_cli",
"tests/func/test_remote.py::TestRemoteRemove::test",
"tests/func/test_remote.py::TestRemoteDefault::test",
"tests/func/test_init.py::test_init_quiet_should_not_display_welcome_screen",
"tests/func/test_diff.py::TestDiffCmdMessage::test",
"tests/func/test_remote.py::TestRemoteShouldHandleUppercaseRemoteName::test",
"tests/func/test_gc.py::TestGC::test_api",
"tests/func/test_pipeline.py::TestReproChangedDeepData::test",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test",
"tests/func/test_remove.py::TestRemove::test",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_ascii_outs",
"tests/func/test_remove.py::TestRemoveNonExistentFile::test",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_commands",
"tests/func/test_gc.py::TestGC::test_cli",
"tests/func/test_remove.py::TestRemoveBrokenSymlink::test",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_dot",
"tests/func/test_remove.py::TestRemoveDirectory::test",
"tests/func/test_gc.py::TestGCBranchesTags::test",
"tests/func/test_remove.py::TestCmdRemove::test",
"tests/func/test_lock.py::TestLock::test_cli",
"tests/func/test_lock.py::TestLock::test_with",
"tests/func/test_remove.py::TestRemovePurge::test",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_dot_commands",
"tests/func/test_remove.py::TestRemovePurge::test_force",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_dot_outs",
"tests/func/test_gc.py::TestGCMultipleDvcRepos::test",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_non_existing",
"tests/func/test_repo.py::TestCollect::test",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_not_dvc_file",
"tests/func/test_metrics.py::TestMetrics::test_formatted_output",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_outs",
"tests/func/test_repo.py::TestIgnore::test_should_not_gather_stage_files_from_ignored_dir",
"tests/func/test_metrics.py::TestMetrics::test_show",
"tests/func/test_pipeline.py::TestPipelineShowSingle::test_tree",
"tests/func/test_metrics.py::TestMetrics::test_show_all_should_be_current_dir_agnostic",
"tests/func/test_repro.py::TestReproFail::test",
"tests/func/test_metrics.py::TestMetrics::test_type_case_normalized",
"tests/func/test_repro.py::TestReproCyclicGraph::test",
"tests/func/test_repro.py::TestReproWorkingDirectoryAsOutput::test",
"tests/func/test_metrics.py::TestMetrics::test_unknown_type_ignored",
"tests/func/test_pipeline.py::TestPipelineShow::test",
"tests/func/test_repro.py::TestReproWorkingDirectoryAsOutput::test_nested",
"tests/func/test_pipeline.py::TestPipelineShow::test_ascii",
"tests/func/test_repro.py::TestReproWorkingDirectoryAsOutput::test_similar_paths",
"tests/func/test_repro.py::TestReproDepUnderDir::test",
"tests/func/test_pipeline.py::TestPipelineShow::test_ascii_commands",
"tests/func/test_repro.py::TestReproDepDirWithOutputsUnderIt::test",
"tests/func/test_metrics.py::TestMetrics::test_xpath_all",
"tests/func/test_pipeline.py::TestPipelineShow::test_ascii_outs",
"tests/func/test_metrics.py::TestMetrics::test_xpath_all_columns",
"tests/func/test_repro.py::TestReproNoDeps::test",
"tests/func/test_pipeline.py::TestPipelineShow::test_commands",
"tests/func/test_repro.py::TestReproForce::test",
"tests/func/test_pipeline.py::TestPipelineShow::test_dot",
"tests/func/test_metrics.py::TestMetrics::test_xpath_all_rows",
"tests/func/test_repro.py::TestReproChangedCode::test",
"tests/func/test_pipeline.py::TestPipelineShow::test_dot_commands",
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_child_dir",
"tests/func/test_metrics.py::TestMetrics::test_xpath_all_with_header",
"tests/func/test_repro.py::TestReproChangedData::test",
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_child_dir_unicode",
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_parent_dir",
"tests/func/test_metrics.py::TestMetrics::test_xpath_is_empty",
"tests/func/test_repro.py::TestReproDry::test",
"tests/func/test_metrics.py::TestMetrics::test_xpath_is_none",
"tests/func/test_pipeline.py::TestPipelineShowOuts::test_outs",
"tests/func/test_metrics.py::TestMetricsRecursive::test",
"tests/func/test_repro.py::TestReproUpToDate::test",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test",
"tests/func/test_metrics.py::TestMetricsReproCLI::test",
"tests/func/test_repro.py::TestReproDryNoExec::test",
"tests/func/test_metrics.py::TestMetricsReproCLI::test_binary",
"tests/func/test_repro.py::TestReproChangedDeepData::test",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test_ascii",
"tests/func/test_metrics.py::TestMetricsReproCLI::test_dir",
"tests/func/test_repro.py::TestReproIgnoreBuildCache::test",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test_ascii_commands",
"tests/func/test_metrics.py::TestMetricsCLI::test",
"tests/func/test_metrics.py::TestMetricsCLI::test_binary",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test_ascii_outs",
"tests/func/test_repro.py::TestReproPipeline::test",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test_commands",
"tests/func/test_repro.py::TestReproPipeline::test_cli",
"tests/func/test_metrics.py::TestMetricsCLI::test_dir",
"tests/func/test_repro.py::TestReproPipelines::test",
"tests/func/test_metrics.py::TestMetricsCLI::test_non_existing",
"tests/func/test_import_url.py::TestCmdImport::test",
"tests/func/test_metrics.py::TestMetricsCLI::test_wrong_type_add",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test_dot",
"tests/func/test_import_url.py::TestCmdImport::test_unsupported",
"tests/func/test_repro.py::TestReproPipelines::test_cli",
"tests/func/test_import_url.py::TestDefaultOutput::test",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test_dot_commands",
"tests/func/test_import_url.py::TestShouldRemoveOutsBeforeImport::test",
"tests/func/test_repro.py::TestReproLocked::test",
"tests/func/test_repro.py::TestReproMetricsAddUnchanged::test",
"tests/func/test_metrics.py::TestMetricsCLI::test_wrong_type_modify",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test_dot_outs",
"tests/func/test_repro.py::TestReproLocked::test_non_existing",
"tests/func/test_repro.py::TestReproLockedCallback::test",
"tests/func/test_pipeline.py::TestPipelineShowDeep::test_outs",
"tests/func/test_metrics.py::TestMetricsCLI::test_wrong_type_show",
"tests/func/test_repro.py::TestReproPhony::test",
"tests/func/test_repro.py::TestNonExistingOutput::test",
"tests/func/test_pipeline.py::TestPipelineListEmpty::test",
"tests/func/test_repro.py::TestReproLockedUnchanged::test",
"tests/func/test_metrics.py::TestNoMetrics::test",
"tests/func/test_repro.py::TestReproDataSource::test",
"tests/func/test_repro.py::TestReproChangedDir::test",
"tests/func/test_metrics.py::TestNoMetrics::test_cli",
"tests/func/test_repro.py::TestReproAlreadyCached::test_force_with_dependencies",
"tests/func/test_pipeline.py::TestPipelineListSingle::test",
"tests/func/test_metrics.py::TestCachedMetrics::test_add",
"tests/func/test_repro.py::TestShouldDisplayMetricsOnReproWithMetricsOption::test",
"tests/func/test_repro.py::TestReproChangedDirData::test",
"tests/func/test_metrics.py::TestCachedMetrics::test_run",
"tests/func/test_pipeline.py::TestPipelineListSingle::test_ascii",
"tests/func/test_repro.py::TestReproMissingMd5InStageFile::test",
"tests/func/test_metrics.py::TestMetricsType::test_show",
"tests/func/test_repro.py::TestCmdRepro::test",
"tests/func/test_pipeline.py::TestPipelineListSingle::test_ascii_commands",
"tests/func/test_repro.py::TestCmdReproChdirCwdBackwardCompatible::test",
"tests/func/test_repro.py::TestCmdReproChdir::test",
"tests/func/test_repro.py::TestReproExternalBase::test",
"tests/func/test_repro.py::TestReproExternalS3::test",
"tests/func/test_pipeline.py::TestPipelineListSingle::test_ascii_outs",
"tests/func/test_move.py::TestMove::test",
"tests/func/test_repro.py::TestReproExternalGS::test",
"tests/func/test_pipeline.py::TestPipelineListSingle::test_commands",
"tests/func/test_move.py::TestMoveNonExistentFile::test",
"tests/func/test_repro.py::TestReproExternalHDFS::test",
"tests/func/test_repro.py::TestReproDownstream::test",
"tests/func/test_move.py::TestMoveDirectory::test",
"tests/func/test_pipeline.py::TestPipelineListSingle::test_dot",
"tests/func/test_repro.py::TestReproExternalSSH::test",
"tests/func/test_move.py::TestCmdMove::test",
"tests/func/test_run.py::TestRun::test",
"tests/func/test_run.py::TestRunEmpty::test",
"tests/func/test_move.py::TestMoveNotDataSource::test",
"tests/func/test_run.py::TestRunMissingDep::test",
"tests/func/test_pipeline.py::TestPipelineListSingle::test_dot_commands",
"tests/func/test_repro.py::TestReproExternalLOCAL::test",
"tests/func/test_move.py::TestMoveFileWithExtension::test",
"tests/func/test_run.py::TestRunBadStageFilename::test",
"tests/func/test_run.py::TestCmdRunOverwrite::test",
"tests/func/test_repro.py::TestReproExternalHTTP::test",
"tests/func/test_run.py::TestCmdRunCliMetrics::test_cached",
"tests/func/test_move.py::TestMoveFileToDirectory::test",
"tests/func/test_repro.py::TestReproShell::test",
"tests/func/test_move.py::TestMoveFileToDirectoryWithoutSpecifiedTargetName::test",
"tests/func/test_run.py::TestCmdRunCliMetrics::test_not_cached",
"tests/func/test_run.py::TestCmdRunWorkingDirectory::test_cwd_is_ignored",
"tests/func/test_run.py::TestRunNoExec::test",
"tests/func/test_repro.py::TestReproAllPipelines::test",
"tests/func/test_run.py::TestCmdRunWorkingDirectory::test_default_wdir_is_not_written",
"tests/func/test_move.py::TestMoveDirectoryShouldNotOverwriteExisting::test",
"tests/func/test_run.py::TestRunCircularDependency::test",
"tests/func/test_run.py::TestCmdRunWorkingDirectory::test_fname_changes_path_and_wdir",
"tests/func/test_repro.py::TestReproNoCommit::test",
"tests/func/test_run.py::TestRunCircularDependency::test_graph",
"tests/func/test_move.py::TestMoveFileBetweenDirectories::test",
"tests/func/test_move.py::TestMoveFileInsideDirectory::test",
"tests/func/test_repro.py::TestReproAlreadyCached::test",
"tests/func/test_run.py::TestRunCircularDependency::test_non_normalized_paths",
"tests/func/test_run.py::TestRunCircularDependency::test_outs_no_cache",
"tests/func/test_repro.py::TestReproAlreadyCached::test_force_import",
"tests/func/test_run.py::TestRunDuplicatedArguments::test",
"tests/func/test_run.py::TestRunDuplicatedArguments::test_non_normalized_paths",
"tests/func/test_run.py::TestRunDuplicatedArguments::test_outs_no_cache",
"tests/func/test_tag.py::TestTagAll::test",
"tests/func/test_run.py::TestRunStageInsideOutput::test_cwd",
"tests/func/test_tag.py::TestTagAddNoChecksumInfo::test",
"tests/func/test_tag.py::TestTagRemoveNoTag::test",
"tests/func/test_run.py::TestRunStageInsideOutput::test_file_name",
"tests/func/test_stage.py::TestSchemaCmd::test_cmd_none",
"tests/func/test_run.py::TestRunBadCwd::test",
"tests/func/test_stage.py::TestSchemaCmd::test_cmd_object",
"tests/func/test_stage.py::TestSchemaCmd::test_cmd_str",
"tests/func/test_run.py::TestRunBadCwd::test_same_prefix",
"tests/func/test_run.py::TestRunCommit::test",
"tests/func/test_stage.py::TestSchemaCmd::test_no_cmd",
"tests/func/test_run.py::TestRunBadWdir::test",
"tests/func/test_run.py::TestRunPersistOuts::test",
"tests/func/test_run.py::TestRunBadWdir::test_not_dir",
"tests/func/test_stage.py::TestSchemaDepsOuts::test_empty_list",
"tests/func/test_run.py::TestRunPersistOutsNoCache::test",
"tests/func/test_stage.py::TestSchemaDepsOuts::test_list",
"tests/func/test_run.py::TestRunBadWdir::test_not_found",
"tests/func/test_run.py::TestRunBadWdir::test_same_prefix",
"tests/func/test_stage.py::TestSchemaDepsOuts::test_none",
"tests/func/test_run.py::TestShouldRaiseOnOverlappingOutputPaths::test",
"tests/func/test_run.py::TestRunBadName::test",
"tests/func/test_stage.py::TestSchemaDepsOuts::test_object",
"tests/func/test_run.py::TestRunBadName::test_not_found",
"tests/func/test_run.py::TestNewRunShouldRemoveOutsOnNoPersist::test",
"tests/func/test_run.py::TestRunBadName::test_same_prefix",
"tests/func/test_stage.py::TestReload::test",
"tests/func/test_run.py::TestNewRunShouldNotRemoveOutsOnPersist::test",
"tests/func/test_stage.py::TestDefaultWorkingDirectory::test_ignored_in_checksum",
"tests/func/test_run.py::TestShouldNotCheckoutUponCorruptedLocalHardlinkCache::test",
"tests/func/test_run.py::TestRunRemoveOuts::test",
"tests/func/test_run.py::TestPersistentOutput::test_ignore_build_cache",
"tests/func/test_stage.py::TestExternalRemoteResolution::test_remote_dependency",
"tests/func/test_run.py::TestRunUnprotectOutsCopy::test",
"tests/func/test_stage.py::TestExternalRemoteResolution::test_remote_output",
"tests/func/test_run.py::TestRunUnprotectOutsSymlink::test",
"tests/func/test_run.py::TestRunUnprotectOutsHardlink::test",
"tests/func/test_utils.py::TestUtils::test_boxify",
"tests/func/test_utils.py::TestUtils::test_copyfile",
"tests/func/test_utils.py::TestUtils::test_dict_md5",
"tests/func/test_utils.py::TestUtils::test_file_md5_crlf",
"tests/func/test_unprotect.py::TestUnprotect::test",
"tests/func/test_status.py::TestStatus::test_implied_cloud",
"tests/func/test_status.py::TestStatus::test_quiet",
"tests/func/test_version.py::test_info_outside_of_repo",
"tests/func/test_tag.py::TestTag::test",
"tests/func/test_version.py::test_fs_info_outside_of_repo",
"tests/unit/test_repo.py::TestIsDvcInternal::test_return_false_on_non_dvc_file",
"tests/unit/test_repo.py::TestIsDvcInternal::test_return_true_on_dvc_internal_file",
"tests/unit/dependency/test_gs.py::TestDependencyGS::test_save_missing",
"tests/unit/dependency/test_hdfs.py::TestDependencyLOCAL::test_save_missing",
"tests/unit/dependency/test_hdfs.py::TestDependencyHDFS::test_save_missing",
"tests/unit/dependency/test_http.py::TestDependencyLOCAL::test_save_missing",
"tests/unit/dependency/test_http.py::TestDependencyHTTP::test_save_missing",
"tests/unit/dependency/test_local.py::TestDependencyLOCAL::test_save_missing",
"tests/unit/output/test_hdfs.py::TestOutputLOCAL::test_save_missing",
"tests/unit/output/test_hdfs.py::TestOutputHDFS::test_save_missing",
"tests/unit/output/test_s3.py::TestOutputS3::test_save_missing",
"tests/unit/dependency/test_s3.py::TestDependencyLOCAL::test_save_missing",
"tests/unit/dependency/test_s3.py::TestDependencyS3::test_save_missing",
"tests/unit/output/test_local.py::TestOutputLOCAL::test_save_missing",
"tests/unit/dependency/test_ssh.py::TestDependencyLOCAL::test_save_missing",
"tests/unit/output/test_ssh.py::TestOutputLOCAL::test_save_missing",
"tests/unit/output/test_local.py::TestGetFilesNumber::test_return_0_on_no_cache",
"tests/unit/output/test_local.py::TestGetFilesNumber::test_return_1_on_single_file_cache",
"tests/unit/output/test_ssh.py::TestOutputSSH::test_save_missing",
"tests/unit/output/test_local.py::TestGetFilesNumber::test_return_mutiple_for_dir",
"tests/unit/dependency/test_ssh.py::TestDependencySSH::test_save_missing",
"tests/unit/output/test_s3.py::TestOutputLOCAL::test_save_missing",
"tests/unit/command/test_update.py::test_update",
"tests/unit/output/test_gs.py::TestOutputLOCAL::test_save_missing",
"tests/unit/dependency/test_gs.py::TestDependencyLOCAL::test_save_missing",
"tests/unit/output/test_gs.py::TestOutputGS::test_save_missing",
"tests/unit/scm/test_git.py::TestGit::test_belongs_to_scm_false",
"tests/unit/scm/test_git.py::TestGit::test_belongs_to_scm_true_on_gitignore",
"tests/unit/scm/test_git.py::TestGit::test_belongs_to_scm_true_on_git_internal",
"tests/unit/utils/test_http.py::test_open_url"
] | [] | [] | Apache License 2.0 | 5,496 | 1,878 | [
"dvc/analytics.py",
"dvc/lock.py",
"dvc/progress.py",
"dvc/remote/local.py",
"dvc/system.py",
"dvc/updater.py",
"dvc/utils/__init__.py",
"dvc/version.py"
] |
missionpinball__mpf-1424 | c43bf1badfedd961d324af3199ea586438102ca3 | 2019-10-03 11:03:03 | 2c1bb3aa1e25674916bc4e0d17ccb6c3c87bd01b | diff --git a/mpf/config_players/show_player.py b/mpf/config_players/show_player.py
index 1b721778f..b7627fdc1 100644
--- a/mpf/config_players/show_player.py
+++ b/mpf/config_players/show_player.py
@@ -67,7 +67,7 @@ class ShowPlayer(DeviceConfigPlayer):
for key in RESERVED_KEYS:
if key in device_settings["show_tokens"]:
self.raise_config_error("Key {} is not allowed in show_tokens of your show_player because it is also "
- "an option in show_player. Did indent that option too far?".format(key), 1)
+ "an option in show_player. Did you indent that option too far?".format(key), 1)
return device_settings
def handle_subscription_change(self, value, settings, priority, context):
diff --git a/mpf/core/config_processor.py b/mpf/core/config_processor.py
index a6eb8387f..b86daec9a 100755
--- a/mpf/core/config_processor.py
+++ b/mpf/core/config_processor.py
@@ -8,6 +8,7 @@ import pickle # nosec
import tempfile
from typing import List, Tuple, Any, Optional
+from difflib import SequenceMatcher
from mpf.core.file_manager import FileManager
from mpf.core.utility_functions import Util
@@ -121,6 +122,42 @@ class ConfigProcessor:
return config
+ @staticmethod
+ def _find_similar_key(key, config_spec, config_type):
+ """Find the most similar key in config_spec."""
+ best_similarity = 0
+ best_key = ""
+ for existing_key, config in config_spec.items():
+ if '__valid_in__' not in config or config_type not in config['__valid_in__']:
+ continue
+ similarity = SequenceMatcher(None, existing_key, key).ratio()
+ if similarity > best_similarity:
+ best_key = existing_key
+ best_similarity = similarity
+
+ return best_key
+
+ def _check_sections(self, config, config_type, filename, ignore_unknown_sections):
+ config_spec = self.config_validator.get_config_spec()
+ if not isinstance(config, dict):
+ raise ConfigFileError("Config should be a dict: {}".format(config), 1, self.log.name, filename)
+ for k in config.keys():
+ try:
+ if config_type not in config_spec[k]['__valid_in__']:
+ raise ConfigFileError('Found a "{}:" section in config file {}, '
+ 'but that section is not valid in {} config '
+ 'files (only in {}).'.format(k, filename, config_type,
+ config_spec[k]['__valid_in__']),
+ 2, self.log.name, filename)
+ except KeyError:
+ if not ignore_unknown_sections:
+ suggestion = self._find_similar_key(k, config_spec, config_type)
+
+ raise ConfigFileError('Found a "{}:" section in config file {}, '
+ 'but that section name is unknown. Did you mean "{}:" '
+ 'instead?.'.format(k, filename, suggestion), 3, self.log.name,
+ filename)
+
def _load_config_file_and_return_loaded_files(
self, filename, config_type: str,
ignore_unknown_sections=False) -> Tuple[dict, List[str]]: # pragma: no cover
@@ -138,20 +175,7 @@ class ConfigProcessor:
self.log.info('Loading config: %s', filename)
if config_type in ("machine", "mode"):
- if not isinstance(config, dict):
- raise ConfigFileError("Config should be a dict: {}".format(config), self.log.name, "ConfigProcessor")
- for k in config.keys():
- try:
- if config_type not in self.config_validator.get_config_spec()[k][
- '__valid_in__']:
- raise ValueError('Found a "{}:" section in config file {}, '
- 'but that section is not valid in {} config '
- 'files.'.format(k, filename, config_type))
- except KeyError:
- if not ignore_unknown_sections:
- raise ValueError('Found a "{}:" section in config file {}, '
- 'but that section is not valid in {} config '
- 'files.'.format(k, filename, config_type))
+ self._check_sections(config, config_type, filename, ignore_unknown_sections)
try:
if 'config' in config:
@@ -167,43 +191,6 @@ class ConfigProcessor:
except TypeError:
return dict(), []
- def load_config_file(self, filename, config_type: str,
- ignore_unknown_sections=False) -> dict: # pragma: no cover
- """Load a config file."""
- # config_type is str 'machine' or 'mode', which specifies whether this
- # file being loaded is a machine config or a mode config file
- expected_version_str = ConfigProcessor.get_expected_version(config_type)
- config = FileManager.load(filename, expected_version_str, True)
-
- if not config:
- return dict()
-
- for k in config.keys():
- try:
- if config_type not in self.config_validator.get_config_spec()[k][
- '__valid_in__']:
- raise ValueError('Found a "{}:" section in config file {}, '
- 'but that section is not valid in {} config '
- 'files.'.format(k, filename, config_type))
- except KeyError:
- if not ignore_unknown_sections:
- raise ValueError('Found a "{}:" section in config file {}, '
- 'but that section is not valid in {} config '
- 'files.'.format(k, filename, config_type))
-
- try:
- if 'config' in config:
- path = os.path.split(filename)[0]
-
- for file in Util.string_to_list(config['config']):
- full_file = os.path.join(path, file)
- config = Util.dict_merge(config,
- self.load_config_file(
- full_file, config_type))
- return config
- except TypeError:
- return dict()
-
@staticmethod
def get_expected_version(config_type: str) -> str:
"""Return the expected config or show version tag, e.g. #config_version=5."""
| Suggest config section on typos
With a broken config such as:
```
hardware_sound_system:
default:
label: LISY
```
Users get errors such as: `Found a "hardware_sound_system:" section in config file xxx\config\config, but that section is not valid in machine config files.`
Ideally we would recommend `Did you mean "hardware_sound_systems"?`. The config_validator which emits those warnings has all config_spec loaded, and we could run a simple string similarity comparison against all of them (see: https://stackoverflow.com/questions/17388213/find-the-similarity-metric-between-two-strings) to figure out the best match (or the best matches). We should probably also define a similarity threshold to avoid suggesting matches that are too dissimilar. | missionpinball/mpf
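The similarity-based suggestion described in the issue can be sketched with Python's standard-library `difflib`: `get_close_matches` wraps the same `SequenceMatcher.ratio()` comparison and already provides the proposed cutoff threshold. The candidate section names below are illustrative only, not MPF's actual config_spec keys:

```python
from difflib import get_close_matches

# Illustrative stand-ins for the section names loaded from config_spec
VALID_SECTIONS = ["hardware_sound_systems", "lights", "switches", "coils"]

def suggest_section(unknown_key, candidates=VALID_SECTIONS, cutoff=0.6):
    """Return the closest known section name, or None if nothing is similar enough."""
    # get_close_matches ranks candidates by SequenceMatcher ratio and drops
    # anything scoring below `cutoff`, which acts as the similarity threshold
    matches = get_close_matches(unknown_key, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(suggest_section("hardware_sound_system"))  # -> hardware_sound_systems
print(suggest_section("zzz"))                    # -> None (below the threshold)
```

A cutoff around 0.6 (the `difflib` default) keeps near-misses like a dropped plural while filtering out section names that merely share a few characters.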
new file mode 100644
index 000000000..d9273ca08
--- /dev/null
+++ b/mpf/tests/machine_files/config_processor/typo.yaml
@@ -0,0 +1,7 @@
+#config_version=5
+
+light:
+ light1:
+ number: 1
+ light2:
+ number: 2
diff --git a/mpf/tests/machine_files/config_processor/working.yaml b/mpf/tests/machine_files/config_processor/working.yaml
new file mode 100644
index 000000000..a31bad088
--- /dev/null
+++ b/mpf/tests/machine_files/config_processor/working.yaml
@@ -0,0 +1,10 @@
+#config_version=5
+
+config:
+ - working_subconfig.yaml
+
+lights:
+ light1:
+ number: 1
+ light2:
+ number: 2
diff --git a/mpf/tests/machine_files/config_processor/working_subconfig.yaml b/mpf/tests/machine_files/config_processor/working_subconfig.yaml
new file mode 100644
index 000000000..1607bca51
--- /dev/null
+++ b/mpf/tests/machine_files/config_processor/working_subconfig.yaml
@@ -0,0 +1,5 @@
+#config_version=5
+
+switches:
+ switch1:
+ number: 1
diff --git a/mpf/tests/test_ConfigProcessor.py b/mpf/tests/test_ConfigProcessor.py
new file mode 100644
index 000000000..73a41293f
--- /dev/null
+++ b/mpf/tests/test_ConfigProcessor.py
@@ -0,0 +1,57 @@
+import os
+import unittest
+
+from mpf._version import log_url
+
+from mpf.core.config_processor import ConfigProcessor
+
+from mpf.core.config_validator import ConfigValidator
+
+from mpf.core.placeholder_manager import PlaceholderManager
+from mpf.exceptions.config_file_error import ConfigFileError
+
+
+class FakeMachine:
+
+ def __init__(self):
+ self.machine_config = {
+ "logging": {"console": {"placeholder_manager": "basic"}, "file": {"placeholder_manager": "basic"}}
+ }
+ self.options = {"production": False}
+ self.placeholder_manager = PlaceholderManager(self)
+
+
+class TestConfigProcessor(unittest.TestCase):
+
+ def setUp(self):
+ self.machine = FakeMachine()
+ self.config_validator = ConfigValidator(self, False, False)
+ self.config_processor = ConfigProcessor(self.config_validator)
+ self.maxDiff = None
+
+ def test_load_with_subconfig(self):
+ """Test successful load with subconfig."""
+ config_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
+ "machine_files/config_processor/working.yaml")
+ config = self.config_processor.load_config_files_with_cache([config_file], "machine")
+
+ self.assertEqual(
+ {'config': ['working_subconfig.yaml'], 'lights': {'light1': {'number': 1}, 'light2': {'number': 2}},
+ 'switches': {'switch1': {'number': 1}}},
+ config
+ )
+
+ def test_typo(self):
+ """Test suggestion on typo."""
+ config_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
+ "machine_files/config_processor/typo.yaml")
+ with self.assertRaises(ConfigFileError) as e:
+ config = self.config_processor.load_config_files_with_cache([config_file], "machine")
+
+ self.assertEqual('Config File Error in ConfigProcessor: Found a "light:" section in config '
+ 'file {config_file}, '
+ 'but that section name is unknown. Did you mean "lights:" instead?. Context: '
+ '{config_file} '
+ 'Error Code: CFE-ConfigProcessor-3 ({})'.format(log_url.format("CFE-ConfigProcessor-3"),
+ config_file=config_file),
+ str(e.exception))
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 2
} | 0.33 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | asciimatics==1.14.0
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
future==1.0.0
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
-e git+https://github.com/missionpinball/mpf.git@c43bf1badfedd961d324af3199ea586438102ca3#egg=mpf
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
Pillow==8.4.0
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyfiglet==0.8.post1
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pyserial==3.5
pyserial-asyncio==0.6
pytest==6.2.4
ruamel.yaml==0.15.37
sortedcontainers==2.4.0
terminaltables==3.1.10
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing==3.7.4.3
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
wcwidth==0.2.13
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: mpf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- asciimatics==1.14.0
- future==1.0.0
- pillow==8.4.0
- psutil==7.0.0
- pyfiglet==0.8.post1
- pyserial==3.5
- pyserial-asyncio==0.6
- ruamel-yaml==0.15.37
- sortedcontainers==2.4.0
- terminaltables==3.1.10
- typing==3.7.4.3
- wcwidth==0.2.13
prefix: /opt/conda/envs/mpf
| [
"mpf/tests/test_ConfigProcessor.py::TestConfigProcessor::test_typo"
] | [] | [
"mpf/tests/test_ConfigProcessor.py::TestConfigProcessor::test_load_with_subconfig"
] | [] | MIT License | 5,516 | 1,405 | [
"mpf/config_players/show_player.py",
"mpf/core/config_processor.py"
] |
|
pybamm-team__PyBaMM-645 | 8afc8f94f75b6f0b51928f43819f9894d2b91020 | 2019-10-03 12:16:08 | 4bf599043c9bd070333e224180abc455c8127940 | diff --git a/pybamm/models/full_battery_models/base_battery_model.py b/pybamm/models/full_battery_models/base_battery_model.py
index 1db1276c5..c301255a7 100644
--- a/pybamm/models/full_battery_models/base_battery_model.py
+++ b/pybamm/models/full_battery_models/base_battery_model.py
@@ -128,9 +128,9 @@ class BaseBatteryModel(pybamm.BaseModel):
def default_var_pts(self):
var = pybamm.standard_spatial_vars
return {
- var.x_n: 40,
- var.x_s: 25,
- var.x_p: 35,
+ var.x_n: 20,
+ var.x_s: 20,
+ var.x_p: 20,
var.r_n: 10,
var.r_p: 10,
var.y: 10,
diff --git a/pybamm/models/full_battery_models/lead_acid/base_lead_acid_model.py b/pybamm/models/full_battery_models/lead_acid/base_lead_acid_model.py
index 9f223ae9a..cf753ddb4 100644
--- a/pybamm/models/full_battery_models/lead_acid/base_lead_acid_model.py
+++ b/pybamm/models/full_battery_models/lead_acid/base_lead_acid_model.py
@@ -60,6 +60,11 @@ class BaseModel(pybamm.BaseBatteryModel):
elif self.options["dimensionality"] == 2:
return pybamm.Geometry("2+1D macro")
+ @property
+ def default_var_pts(self):
+ var = pybamm.standard_spatial_vars
+ return {var.x_n: 30, var.x_s: 30, var.x_p: 30, var.y: 10, var.z: 10}
+
def set_standard_output_variables(self):
super().set_standard_output_variables()
# Current
diff --git a/pybamm/solvers/base_solver.py b/pybamm/solvers/base_solver.py
index 63dd82ad6..ad1ad2b4a 100644
--- a/pybamm/solvers/base_solver.py
+++ b/pybamm/solvers/base_solver.py
@@ -10,13 +10,16 @@ class BaseSolver(object):
Parameters
----------
- tolerance : float, optional
- The tolerance for the solver (default is 1e-8).
+ rtol : float, optional
+ The relative tolerance for the solver (default is 1e-6).
+ atol : float, optional
+ The absolute tolerance for the solver (default is 1e-6).
"""
- def __init__(self, method=None, tol=1e-8):
+ def __init__(self, method=None, rtol=1e-6, atol=1e-6):
self._method = method
- self._tol = tol
+ self._rtol = rtol
+ self._atol = atol
@property
def method(self):
@@ -27,12 +30,20 @@ class BaseSolver(object):
self._method = value
@property
- def tol(self):
- return self._tol
+ def rtol(self):
+ return self._rtol
- @tol.setter
- def tol(self, value):
- self._tol = value
+ @rtol.setter
+ def rtol(self, value):
+ self._rtol = value
+
+ @property
+ def atol(self):
+ return self._atol
+
+ @atol.setter
+ def atol(self, value):
+ self._atol = value
def solve(self, model, t_eval):
"""
@@ -73,7 +84,7 @@ class BaseSolver(object):
solution.total_time = timer.time() - start_time
solution.set_up_time = set_up_time
- pybamm.logger.warning("Finish solving {} ({})".format(model.name, termination))
+ pybamm.logger.info("Finish solving {} ({})".format(model.name, termination))
pybamm.logger.info(
"Set-up time: {}, Solve time: {}, Total time: {}".format(
timer.format(solution.set_up_time),
diff --git a/pybamm/solvers/dae_solver.py b/pybamm/solvers/dae_solver.py
index 035a31f05..003403254 100644
--- a/pybamm/solvers/dae_solver.py
+++ b/pybamm/solvers/dae_solver.py
@@ -12,11 +12,13 @@ class DaeSolver(pybamm.BaseSolver):
Parameters
----------
- tolerance : float, optional
- The tolerance for the solver (default is 1e-8).
+ rtol : float, optional
+ The relative tolerance for the solver (default is 1e-6).
+ atol : float, optional
+ The absolute tolerance for the solver (default is 1e-6).
root_method : str, optional
The method to use to find initial conditions (default is "lm")
- tolerance : float, optional
+ root_tol : float, optional
The tolerance for the initial-condition solver (default is 1e-8).
max_steps: int, optional
The maximum number of steps the solver will take before terminating
@@ -24,9 +26,15 @@ class DaeSolver(pybamm.BaseSolver):
"""
def __init__(
- self, method=None, tol=1e-8, root_method="lm", root_tol=1e-6, max_steps=1000
+ self,
+ method=None,
+ rtol=1e-6,
+ atol=1e-6,
+ root_method="lm",
+ root_tol=1e-6,
+ max_steps=1000,
):
- super().__init__(method, tol)
+ super().__init__(method, rtol, atol)
self.root_method = root_method
self.root_tol = root_tol
self.max_steps = max_steps
diff --git a/pybamm/solvers/ode_solver.py b/pybamm/solvers/ode_solver.py
index 7d16dcb99..db279b257 100644
--- a/pybamm/solvers/ode_solver.py
+++ b/pybamm/solvers/ode_solver.py
@@ -10,12 +10,14 @@ class OdeSolver(pybamm.BaseSolver):
Parameters
----------
- tolerance : float, optional
- The tolerance for the solver (default is 1e-8).
+ rtol : float, optional
+ The relative tolerance for the solver (default is 1e-6).
+ atol : float, optional
+ The absolute tolerance for the solver (default is 1e-6).
"""
- def __init__(self, method=None, tol=1e-8):
- super().__init__(method, tol)
+ def __init__(self, method=None, rtol=1e-6, atol=1e-6):
+ super().__init__(method, rtol, atol)
def compute_solution(self, model, t_eval):
"""Calculate the solution of the model at specified times.
diff --git a/pybamm/solvers/scikits_dae_solver.py b/pybamm/solvers/scikits_dae_solver.py
index 0ed22a103..ee99d4602 100644
--- a/pybamm/solvers/scikits_dae_solver.py
+++ b/pybamm/solvers/scikits_dae_solver.py
@@ -22,12 +22,13 @@ class ScikitsDaeSolver(pybamm.DaeSolver):
----------
method : str, optional
The method to use in solve_ivp (default is "BDF")
- tolerance : float, optional
- The tolerance for the solver (default is 1e-8). Set as the both reltol and
- abstol in solve_ivp.
+ rtol : float, optional
+ The relative tolerance for the solver (default is 1e-6).
+ atol : float, optional
+ The absolute tolerance for the solver (default is 1e-6).
root_method : str, optional
The method to use to find initial conditions (default is "lm")
- tolerance : float, optional
+ root_tol : float, optional
The tolerance for the initial-condition solver (default is 1e-8).
max_steps: int, optional
The maximum number of steps the solver will take before terminating
@@ -35,12 +36,18 @@ class ScikitsDaeSolver(pybamm.DaeSolver):
"""
def __init__(
- self, method="ida", tol=1e-8, root_method="lm", root_tol=1e-6, max_steps=1000
+ self,
+ method="ida",
+ rtol=1e-6,
+ atol=1e-6,
+ root_method="lm",
+ root_tol=1e-6,
+ max_steps=1000,
):
if scikits_odes_spec is None:
raise ImportError("scikits.odes is not installed")
- super().__init__(method, tol, root_method, root_tol, max_steps)
+ super().__init__(method, rtol, atol, root_method, root_tol, max_steps)
def integrate(
self, residuals, y0, t_eval, events=None, mass_matrix=None, jacobian=None
@@ -76,8 +83,8 @@ class ScikitsDaeSolver(pybamm.DaeSolver):
extra_options = {
"old_api": False,
- "rtol": self.tol,
- "atol": self.tol,
+ "rtol": self.rtol,
+ "atol": self.atol,
"max_steps": self.max_steps,
}
@@ -120,7 +127,7 @@ class ScikitsDaeSolver(pybamm.DaeSolver):
np.transpose(sol.values.y),
sol.roots.t,
np.transpose(sol.roots.y),
- termination
+ termination,
)
else:
raise pybamm.SolverError(sol.message)
diff --git a/pybamm/solvers/scikits_ode_solver.py b/pybamm/solvers/scikits_ode_solver.py
index 17cfff62a..542b15d19 100644
--- a/pybamm/solvers/scikits_ode_solver.py
+++ b/pybamm/solvers/scikits_ode_solver.py
@@ -26,18 +26,19 @@ class ScikitsOdeSolver(pybamm.OdeSolver):
----------
method : str, optional
The method to use in solve_ivp (default is "BDF")
- tolerance : float, optional
- The tolerance for the solver (default is 1e-8). Set as the both reltol and
- abstol in solve_ivp.
+ rtol : float, optional
+ The relative tolerance for the solver (default is 1e-6).
+ atol : float, optional
+ The absolute tolerance for the solver (default is 1e-6).
linsolver : str, optional
Can be 'dense' (= default), 'lapackdense', 'spgmr', 'spbcgs', 'sptfqmr'
"""
- def __init__(self, method="cvode", tol=1e-8, linsolver="dense"):
+ def __init__(self, method="cvode", rtol=1e-6, atol=1e-6, linsolver="dense"):
if scikits_odes_spec is None:
raise ImportError("scikits.odes is not installed")
- super().__init__(method, tol)
+ super().__init__(method, rtol, atol)
self.linsolver = linsolver
def integrate(
@@ -98,8 +99,8 @@ class ScikitsOdeSolver(pybamm.OdeSolver):
extra_options = {
"old_api": False,
- "rtol": self.tol,
- "atol": self.tol,
+ "rtol": self.rtol,
+ "atol": self.atol,
"linsolver": self.linsolver,
}
@@ -134,7 +135,7 @@ class ScikitsOdeSolver(pybamm.OdeSolver):
np.transpose(sol.values.y),
sol.roots.t,
np.transpose(sol.roots.y),
- termination
+ termination,
)
else:
raise pybamm.SolverError(sol.message)
diff --git a/pybamm/solvers/scipy_solver.py b/pybamm/solvers/scipy_solver.py
index 60e3db9ac..fd7afbf49 100644
--- a/pybamm/solvers/scipy_solver.py
+++ b/pybamm/solvers/scipy_solver.py
@@ -14,13 +14,14 @@ class ScipySolver(pybamm.OdeSolver):
----------
method : str, optional
The method to use in solve_ivp (default is "BDF")
- tolerance : float, optional
- The tolerance for the solver (default is 1e-8). Set as the both reltol and
- abstol in solve_ivp.
+ rtol : float, optional
+ The relative tolerance for the solver (default is 1e-6).
+ atol : float, optional
+ The absolute tolerance for the solver (default is 1e-6).
"""
- def __init__(self, method="BDF", tol=1e-8):
- super().__init__(method, tol)
+ def __init__(self, method="BDF", rtol=1e-6, atol=1e-6):
+ super().__init__(method, rtol, atol)
def integrate(
self, derivs, y0, t_eval, events=None, mass_matrix=None, jacobian=None
@@ -52,7 +53,7 @@ class ScipySolver(pybamm.OdeSolver):
various diagnostic messages.
"""
- extra_options = {"rtol": self.tol, "atol": self.tol}
+ extra_options = {"rtol": self.rtol, "atol": self.atol}
# check for user-supplied Jacobian
implicit_methods = ["Radau", "BDF", "LSODA"]
@@ -90,12 +91,6 @@ class ScipySolver(pybamm.OdeSolver):
termination = "final time"
t_event = None
y_event = np.array(None)
- return pybamm.Solution(
- sol.t,
- sol.y,
- t_event,
- y_event,
- termination
- )
+ return pybamm.Solution(sol.t, sol.y, t_event, y_event, termination)
else:
raise pybamm.SolverError(sol.message)
diff --git a/results/change_settings/change_solver_tolerances.py b/results/change_settings/change_solver_tolerances.py
new file mode 100644
index 000000000..81261e796
--- /dev/null
+++ b/results/change_settings/change_solver_tolerances.py
@@ -0,0 +1,52 @@
+#
+# Compare solution of li-ion battery models when changing solver tolerances
+#
+import numpy as np
+import pybamm
+
+pybamm.set_logging_level("INFO")
+
+# load model
+model = pybamm.lithium_ion.DFN()
+
+
+# process and discretise
+param = model.default_parameter_values
+param.process_model(model)
+geometry = model.default_geometry
+param.process_geometry(geometry)
+mesh = pybamm.Mesh(geometry, model.default_submesh_types, model.default_var_pts)
+disc = pybamm.Discretisation(mesh, model.default_spatial_methods)
+disc.process_model(model)
+
+# tolerances (rtol, atol)
+tols = [[1e-8, 1e-8], [1e-6, 1e-6], [1e-3, 1e-6], [1e-3, 1e-3]]
+
+# solve model
+solutions = [None] * len(tols)
+voltages = [None] * len(tols)
+voltage_rmse = [None] * len(tols)
+labels = [None] * len(tols)
+t_eval = np.linspace(0, 0.17, 100)
+for i, tol in enumerate(tols):
+ solver = pybamm.ScikitsDaeSolver(rtol=tol[0], atol=tol[1])
+ solutions[i] = solver.solve(model, t_eval)
+ voltages[i] = pybamm.ProcessedVariable(
+ model.variables["Terminal voltage [V]"],
+ solutions[i].t,
+ solutions[i].y,
+ mesh=mesh,
+ )(solutions[i].t)
+ voltage_rmse[i] = pybamm.rmse(voltages[0], voltages[i])
+ labels[i] = "rtol = {}, atol = {}".format(tol[0], tol[1])
+
+# print RMSE voltage errors vs tighest tolerance
+for i, tol in enumerate(tols):
+ print(
+ "rtol = {}, atol = {}, solve time = {} s, Voltage RMSE = {}".format(
+ tol[0], tol[1], solutions[i].solve_time, voltage_rmse[i]
+ )
+ )
+# plot
+plot = pybamm.QuickPlot([model] * len(solutions), mesh, solutions, labels=labels)
+plot.dynamic_plot()
diff --git a/results/change_settings/compare_var_pts.py b/results/change_settings/compare_var_pts.py
new file mode 100644
index 000000000..2153a96f6
--- /dev/null
+++ b/results/change_settings/compare_var_pts.py
@@ -0,0 +1,68 @@
+#
+# Compare solution of li-ion battery models when varying the number of grid points
+#
+import numpy as np
+import pybamm
+import matplotlib.pyplot as plt
+
+pybamm.set_logging_level("INFO")
+
+# choose number of points per domain (all domains will have same npts)
+Npts = [30, 20, 10, 5]
+
+# create models
+models = [None] * len(Npts)
+for i, npts in enumerate(Npts):
+ models[i] = pybamm.lithium_ion.DFN(name="Npts = {}".format(npts))
+
+# load parameter values and process models and geometry
+param = models[0].default_parameter_values
+for model in models:
+ param.process_model(model)
+
+# set mesh
+meshes = [None] * len(models)
+
+# create geometry and discretise models
+var = pybamm.standard_spatial_vars
+for i, model in enumerate(models):
+ geometry = model.default_geometry
+ param.process_geometry(geometry)
+ var_pts = {
+ var.x_n: Npts[i],
+ var.x_s: Npts[i],
+ var.x_p: Npts[i],
+ var.r_n: Npts[i],
+ var.r_p: Npts[i],
+ }
+ meshes[i] = pybamm.Mesh(geometry, models[-1].default_submesh_types, var_pts)
+ disc = pybamm.Discretisation(meshes[i], model.default_spatial_methods)
+ disc.process_model(model)
+
+# solve model and plot voltage
+solutions = [None] * len(models)
+voltages = [None] * len(models)
+voltage_rmse = [None] * len(models)
+t_eval = np.linspace(0, 0.17, 100)
+for i, model in enumerate(models):
+ solutions[i] = model.default_solver.solve(model, t_eval)
+ voltages[i] = pybamm.ProcessedVariable(
+ model.variables["Terminal voltage [V]"],
+ solutions[i].t,
+ solutions[i].y,
+ mesh=meshes[i],
+ )(solutions[i].t)
+ voltage_rmse[i] = pybamm.rmse(voltages[0], voltages[i])
+ plt.plot(solutions[i].t, voltages[i], label=model.name)
+
+for i, npts in enumerate(Npts):
+ print(
+ "npts = {}, solve time = {} s, Voltage RMSE = {}".format(
+ npts, solutions[i].solve_time, voltage_rmse[i]
+ )
+ )
+
+plt.xlabel(r"$t$")
+plt.ylabel("Voltage [V]")
+plt.legend()
+plt.show()
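The scripts above exercise separate relative and absolute tolerances, which mirror the standard local-error acceptance test used by ODE/DAE steppers: a step is accepted when `|err_i| <= atol + rtol * |y_i|` for every component. A minimal stdlib-only sketch of that test (illustrative only — the function name is hypothetical and this is not PyBaMM code):

```python
def error_within_tol(err, y, rtol=1e-6, atol=1e-6):
    """Per-component acceptance test used by most ODE steppers:
    accept a step when |err_i| <= atol + rtol * |y_i| for all i.
    rtol controls accuracy relative to the solution magnitude;
    atol sets a floor for components that pass near zero."""
    return all(abs(e) <= atol + rtol * abs(v) for e, v in zip(err, y))

# When y is large, rtol dominates the bound; when y is near zero, atol does.
print(error_within_tol([1e-4], [100.0], rtol=1e-3, atol=1e-8))  # 1e-4 <= 0.1
print(error_within_tol([1e-4], [0.0], rtol=1e-3, atol=1e-8))    # 1e-4 > 1e-8
```

This is why a single shared `tol` (the old behaviour) is restrictive: components near zero force the same bound as large components, which is exactly what splitting `rtol`/`atol` relaxes.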
| set relative and absolute tolerances separately for solver
At the moment we use the same tol for both. Would be good to have additional control over these. Also, the defaults are pretty tight so we might relax them a bit to get a good balance of speed and accuracy. | pybamm-team/PyBaMM | diff --git a/tests/integration/test_models/standard_model_tests.py b/tests/integration/test_models/standard_model_tests.py
index 09fcd23a2..2b5afa124 100644
--- a/tests/integration/test_models/standard_model_tests.py
+++ b/tests/integration/test_models/standard_model_tests.py
@@ -71,6 +71,10 @@ class StandardModelTest(object):
# Overwrite solver if given
if solver is not None:
self.solver = solver
+ # Use tighter default tolerances for testing
+ self.solver.rtol = 1e-8
+ self.solver.atol = 1e-8
+
if t_eval is None:
t_eval = np.linspace(0, 1, 100)
diff --git a/tests/integration/test_models/test_full_battery_models/test_lead_acid/test_asymptotics_convergence.py b/tests/integration/test_models/test_full_battery_models/test_lead_acid/test_asymptotics_convergence.py
index 4d2405fac..f7158abd8 100644
--- a/tests/integration/test_models/test_full_battery_models/test_lead_acid/test_asymptotics_convergence.py
+++ b/tests/integration/test_models/test_full_battery_models/test_lead_acid/test_asymptotics_convergence.py
@@ -47,13 +47,19 @@ class TestAsymptoticConvergence(unittest.TestCase):
param.update_model(leading_order_model, loqs_disc)
param.update_model(composite_model, comp_disc)
param.update_model(full_model, full_disc)
- # Solve, make sure times are the same
+ # Solve, make sure times are the same and use tight tolerances
t_eval = np.linspace(0, 0.6)
solver_loqs = leading_order_model.default_solver
+ solver_loqs.rtol = 1e-8
+ solver_loqs.atol = 1e-8
solution_loqs = solver_loqs.solve(leading_order_model, t_eval)
solver_comp = composite_model.default_solver
+ solver_comp.rtol = 1e-8
+ solver_comp.atol = 1e-8
solution_comp = solver_comp.solve(composite_model, t_eval)
solver_full = full_model.default_solver
+ solver_full.rtol = 1e-8
+ solver_full.atol = 1e-8
solution_full = solver_full.solve(full_model, t_eval)
# Post-process variables
diff --git a/tests/unit/test_solvers/test_base_solver.py b/tests/unit/test_solvers/test_base_solver.py
index 000b01c6b..9d0075e3e 100644
--- a/tests/unit/test_solvers/test_base_solver.py
+++ b/tests/unit/test_solvers/test_base_solver.py
@@ -8,11 +8,14 @@ import unittest
class TestBaseSolver(unittest.TestCase):
def test_base_solver_init(self):
- solver = pybamm.BaseSolver(tol=1e-4)
- self.assertEqual(solver.tol, 1e-4)
-
- solver.tol = 1e-5
- self.assertEqual(solver.tol, 1e-5)
+ solver = pybamm.BaseSolver(rtol=1e-2, atol=1e-4)
+ self.assertEqual(solver.rtol, 1e-2)
+ self.assertEqual(solver.atol, 1e-4)
+
+ solver.rtol = 1e-5
+ self.assertEqual(solver.rtol, 1e-5)
+ solver.rtol = 1e-7
+ self.assertEqual(solver.rtol, 1e-7)
with self.assertRaises(NotImplementedError):
solver.compute_solution(None, None)
diff --git a/tests/unit/test_solvers/test_scikits_solvers.py b/tests/unit/test_solvers/test_scikits_solvers.py
index 597ecf12e..36b3cb21f 100644
--- a/tests/unit/test_solvers/test_scikits_solvers.py
+++ b/tests/unit/test_solvers/test_scikits_solvers.py
@@ -13,7 +13,7 @@ from tests import get_mesh_for_testing, get_discretisation_for_testing
class TestScikitsSolvers(unittest.TestCase):
def test_ode_integrate(self):
# Constant
- solver = pybamm.ScikitsOdeSolver(tol=1e-8)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-8, atol=1e-8)
def constant_growth(t, y):
return 0.5 * np.ones_like(y)
@@ -25,7 +25,7 @@ class TestScikitsSolvers(unittest.TestCase):
np.testing.assert_allclose(0.5 * solution.t, solution.y[0])
# Exponential decay
- solver = pybamm.ScikitsOdeSolver(tol=1e-8)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-8, atol=1e-8)
def exponential_decay(t, y):
return -0.1 * y
@@ -55,7 +55,7 @@ class TestScikitsSolvers(unittest.TestCase):
def test_ode_integrate_with_event(self):
# Constant
- solver = pybamm.ScikitsOdeSolver(tol=1e-8)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-8, atol=1e-8)
def constant_decay(t, y):
return -2 * np.ones_like(y)
@@ -75,7 +75,7 @@ class TestScikitsSolvers(unittest.TestCase):
self.assertEqual(solution.termination, "event")
# Expnonential growth
- solver = pybamm.ScikitsOdeSolver(tol=1e-8)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-8, atol=1e-8)
def exponential_growth(t, y):
return y
@@ -101,7 +101,7 @@ class TestScikitsSolvers(unittest.TestCase):
def test_ode_integrate_with_jacobian(self):
# Linear
- solver = pybamm.ScikitsOdeSolver(tol=1e-8)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-8, atol=1e-8)
def linear_ode(t, y):
return np.array([0.5, 2 - y[0]])
@@ -134,7 +134,7 @@ class TestScikitsSolvers(unittest.TestCase):
)
np.testing.assert_allclose(0.5 * solution.t, solution.y[0])
- solver = pybamm.ScikitsOdeSolver(tol=1e-8, linsolver="spgmr")
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-8, atol=1e-8, linsolver="spgmr")
solution = solver.integrate(linear_ode, y0, t_eval, jacobian=jacobian)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -152,7 +152,7 @@ class TestScikitsSolvers(unittest.TestCase):
np.testing.assert_allclose(0.5 * solution.t, solution.y[0])
# Nonlinear exponential grwoth
- solver = pybamm.ScikitsOdeSolver(tol=1e-8)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-8, atol=1e-8)
def exponential_growth(t, y):
return np.array([y[0], (1.0 - y[0]) * y[1]])
@@ -182,7 +182,7 @@ class TestScikitsSolvers(unittest.TestCase):
np.exp(1 + solution.t - np.exp(solution.t)), solution.y[1], rtol=1e-4
)
- solver = pybamm.ScikitsOdeSolver(tol=1e-8, linsolver="spgmr")
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-8, atol=1e-8, linsolver="spgmr")
solution = solver.integrate(exponential_growth, y0, t_eval, jacobian=jacobian)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -202,7 +202,7 @@ class TestScikitsSolvers(unittest.TestCase):
def test_dae_integrate(self):
# Constant
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def constant_growth_dae(t, y, ydot):
return [0.5 * np.ones_like(y[0]) - ydot[0], 2 * y[0] - y[1]]
@@ -215,7 +215,7 @@ class TestScikitsSolvers(unittest.TestCase):
np.testing.assert_allclose(1.0 * solution.t, solution.y[1])
# Exponential decay
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def exponential_decay_dae(t, y, ydot):
return [-0.1 * y[0] - ydot[0], 2 * y[0] - y[1]]
@@ -228,7 +228,7 @@ class TestScikitsSolvers(unittest.TestCase):
self.assertEqual(solution.termination, "final time")
def test_dae_integrate_failure(self):
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def constant_growth_dae(t, y, ydot):
return [0.5 * np.ones_like(y[0]) - ydot[0], 2 * y[0] - y[1]]
@@ -240,7 +240,7 @@ class TestScikitsSolvers(unittest.TestCase):
def test_dae_integrate_bad_ics(self):
# Constant
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def constant_growth_dae(t, y, ydot):
return [0.5 * np.ones_like(y[0]) - ydot[0], 2 * y[0] - y[1]]
@@ -265,7 +265,7 @@ class TestScikitsSolvers(unittest.TestCase):
np.testing.assert_allclose(1.0 * solution.t, solution.y[1])
# Exponential decay
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def exponential_decay_dae(t, y, ydot):
return [-0.1 * y[0] - ydot[0], 2 * y[0] - y[1]]
@@ -278,7 +278,7 @@ class TestScikitsSolvers(unittest.TestCase):
def test_dae_integrate_with_event(self):
# Constant
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def constant_growth_dae(t, y, ydot):
return [0.5 * np.ones_like(y[0]) - ydot[0], 2 * y[0] - y[1]]
@@ -305,7 +305,7 @@ class TestScikitsSolvers(unittest.TestCase):
self.assertEqual(solution.termination, "event")
# Exponential decay
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def exponential_decay_dae(t, y, ydot):
return np.array([-0.1 * y[0] - ydot[0], 2 * y[0] - y[1]])
@@ -334,7 +334,7 @@ class TestScikitsSolvers(unittest.TestCase):
def test_dae_integrate_with_jacobian(self):
# Constant
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def constant_growth_dae(t, y, ydot):
return np.array([0.5 * np.ones_like(y[0]) - ydot[0], 2.0 * y[0] - y[1]])
@@ -354,7 +354,7 @@ class TestScikitsSolvers(unittest.TestCase):
np.testing.assert_allclose(1.0 * solution.t, solution.y[1])
# Nonlinear (tests when Jacobian a function of y)
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def nonlinear_dae(t, y, ydot):
return np.array([0.5 * np.ones_like(y[0]) - ydot[0], 2 * y[0] ** 2 - y[1]])
@@ -375,7 +375,7 @@ class TestScikitsSolvers(unittest.TestCase):
def test_dae_integrate_with_non_unity_mass(self):
# Constant
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
def constant_growth_dae(t, y, ydot):
return np.array([0.5 * np.ones_like(y[0]) - 4 * ydot[0], 2.0 * y[0] - y[1]])
@@ -405,7 +405,7 @@ class TestScikitsSolvers(unittest.TestCase):
disc.process_model(model)
# Solve
- solver = pybamm.ScikitsOdeSolver(tol=1e-9)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-9, atol=1e-9)
t_eval = np.linspace(0, 1, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -431,7 +431,7 @@ class TestScikitsSolvers(unittest.TestCase):
disc.process_model(model)
# Solve
- solver = pybamm.ScikitsOdeSolver(tol=1e-9)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-9, atol=1e-9)
t_eval = np.linspace(0, 10, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_allclose(solution.y[0], np.exp(0.1 * solution.t))
@@ -474,7 +474,7 @@ class TestScikitsSolvers(unittest.TestCase):
model.jacobian = jacobian
# Solve
- solver = pybamm.ScikitsOdeSolver(tol=1e-9)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-9, atol=1e-9)
t_eval = np.linspace(0, 1, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -503,7 +503,7 @@ class TestScikitsSolvers(unittest.TestCase):
disc.process_model(model)
# Solve
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
t_eval = np.linspace(0, 1, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -528,7 +528,7 @@ class TestScikitsSolvers(unittest.TestCase):
disc.process_model(model)
# Solve
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
t_eval = np.linspace(0, 1, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -552,7 +552,7 @@ class TestScikitsSolvers(unittest.TestCase):
disc.process_model(model)
# Solve
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
t_eval = np.linspace(0, 5, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_less(solution.y[0], 1.5)
@@ -591,7 +591,7 @@ class TestScikitsSolvers(unittest.TestCase):
model.jacobian = jacobian
# Solve
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
t_eval = np.linspace(0, 1, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -608,7 +608,7 @@ class TestScikitsSolvers(unittest.TestCase):
disc.process_model(model)
# Solve
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
t_eval = np.linspace(0, 1, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -624,7 +624,7 @@ class TestScikitsSolvers(unittest.TestCase):
disc = get_discretisation_for_testing()
disc.process_model(model)
- solver = pybamm.ScikitsOdeSolver(tol=1e-9)
+ solver = pybamm.ScikitsOdeSolver(rtol=1e-9, atol=1e-9)
# Step once
dt = 0.1
@@ -656,7 +656,7 @@ class TestScikitsSolvers(unittest.TestCase):
disc = get_discretisation_for_testing()
disc.process_model(model)
- solver = pybamm.ScikitsDaeSolver(tol=1e-8)
+ solver = pybamm.ScikitsDaeSolver(rtol=1e-8, atol=1e-8)
# Step once
dt = 0.1
diff --git a/tests/unit/test_solvers/test_scipy_solver.py b/tests/unit/test_solvers/test_scipy_solver.py
index cadc2d12b..cef29c65e 100644
--- a/tests/unit/test_solvers/test_scipy_solver.py
+++ b/tests/unit/test_solvers/test_scipy_solver.py
@@ -11,7 +11,7 @@ import warnings
class TestScipySolver(unittest.TestCase):
def test_integrate(self):
# Constant
- solver = pybamm.ScipySolver(tol=1e-8, method="RK45")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="RK45")
def constant_growth(t, y):
return 0.5 * np.ones_like(y)
@@ -23,7 +23,7 @@ class TestScipySolver(unittest.TestCase):
np.testing.assert_allclose(0.5 * solution.t, solution.y[0])
# Exponential decay
- solver = pybamm.ScipySolver(tol=1e-8, method="BDF")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="BDF")
def exponential_decay(t, y):
return -0.1 * y
@@ -43,7 +43,7 @@ class TestScipySolver(unittest.TestCase):
y0 = np.array([1])
t_eval = np.linspace(0, 3, 100)
- solver = pybamm.ScipySolver(tol=1e-8, method="RK45")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="RK45")
# Expect solver to fail when y goes negative
with self.assertRaises(pybamm.SolverError):
solver.integrate(sqrt_decay, y0, t_eval)
@@ -53,7 +53,7 @@ class TestScipySolver(unittest.TestCase):
def test_integrate_with_event(self):
# Constant
- solver = pybamm.ScipySolver(tol=1e-8, method="RK45")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="RK45")
def constant_growth(t, y):
return 0.5 * np.ones_like(y)
@@ -71,7 +71,7 @@ class TestScipySolver(unittest.TestCase):
self.assertEqual(solution.termination, "event")
# Exponential decay
- solver = pybamm.ScipySolver(tol=1e-8, method="BDF")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="BDF")
def exponential_growth(t, y):
return y
@@ -97,7 +97,7 @@ class TestScipySolver(unittest.TestCase):
def test_ode_integrate_with_jacobian(self):
# Linear
- solver = pybamm.ScipySolver(tol=1e-8, method="BDF")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="BDF")
def linear_ode(t, y):
return np.array([0.5 * np.ones_like(y[0]), 2.0 - y[0]])
@@ -115,7 +115,7 @@ class TestScipySolver(unittest.TestCase):
)
# Nonlinear exponential grwoth
- solver = pybamm.ScipySolver(tol=1e-8, method="BDF")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="BDF")
def exponential_growth(t, y):
return np.array([y[0], (1.0 - y[0]) * y[1]])
@@ -147,7 +147,7 @@ class TestScipySolver(unittest.TestCase):
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model)
# Solve
- solver = pybamm.ScipySolver(tol=1e-8, method="RK45")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="RK45")
t_eval = np.linspace(0, 1, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -174,7 +174,7 @@ class TestScipySolver(unittest.TestCase):
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model)
# Solve
- solver = pybamm.ScipySolver(tol=1e-8, method="RK45")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="RK45")
t_eval = np.linspace(0, 10, 100)
solution = solver.solve(model, t_eval)
self.assertLess(len(solution.t), len(t_eval))
@@ -219,7 +219,7 @@ class TestScipySolver(unittest.TestCase):
model.jacobian = jacobian
# Solve
- solver = pybamm.ScipySolver(tol=1e-9)
+ solver = pybamm.ScipySolver(rtol=1e-9, atol=1e-9)
t_eval = np.linspace(0, 1, 100)
solution = solver.solve(model, t_eval)
np.testing.assert_array_equal(solution.t, t_eval)
@@ -249,7 +249,7 @@ class TestScipySolver(unittest.TestCase):
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model)
- solver = pybamm.ScipySolver(tol=1e-8, method="RK45")
+ solver = pybamm.ScipySolver(rtol=1e-8, atol=1e-8, method="RK45")
# Step once
dt = 0.1
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_added_files",
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 8
} | 0.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev,docs]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc gfortran libopenblas-dev"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.16
anyio==4.9.0
anytree==2.12.1
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==3.0.0
async-lru==2.0.5
attrs==25.3.0
autograd==1.7.0
babel==2.17.0
beautifulsoup4==4.13.3
black==25.1.0
bleach==6.2.0
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
click==8.1.8
comm==0.2.2
contourpy==1.3.0
cycler==0.12.1
debugpy==1.8.13
decorator==5.2.1
defusedxml==0.7.1
docutils==0.21.2
exceptiongroup==1.2.2
executing==2.2.0
fastjsonschema==2.21.1
flake8==7.2.0
fonttools==4.56.0
fqdn==1.5.1
guzzle_sphinx_theme==0.7.11
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
imagesize==1.4.1
importlib_metadata==8.6.1
importlib_resources==6.5.2
iniconfig==2.1.0
ipykernel==6.29.5
ipython==8.18.1
ipywidgets==8.1.5
isoduration==20.11.0
jedi==0.19.2
Jinja2==3.1.6
json5==0.10.0
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
jupyter==1.1.1
jupyter-console==6.6.3
jupyter-events==0.12.0
jupyter-lsp==2.2.5
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyter_server==2.15.0
jupyter_server_terminals==0.5.3
jupyterlab==4.3.6
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
jupyterlab_widgets==3.0.13
kiwisolver==1.4.7
MarkupSafe==3.0.2
matplotlib==3.9.4
matplotlib-inline==0.1.7
mccabe==0.7.0
mistune==3.1.3
mypy-extensions==1.0.0
nbclient==0.10.2
nbconvert==7.16.6
nbformat==5.10.4
nest-asyncio==1.6.0
notebook==7.3.3
notebook_shim==0.2.4
numpy==2.0.2
overrides==7.7.0
packaging==24.2
pandas==2.2.3
pandocfilters==1.5.1
parso==0.8.4
pathspec==0.12.1
pexpect==4.9.0
pillow==11.1.0
platformdirs==4.3.7
pluggy==1.5.0
prometheus_client==0.21.1
prompt_toolkit==3.0.50
psutil==7.0.0
ptyprocess==0.7.0
pure_eval==0.2.3
-e git+https://github.com/pybamm-team/PyBaMM.git@8afc8f94f75b6f0b51928f43819f9894d2b91020#egg=pybamm
pycodestyle==2.13.0
pycparser==2.22
pyflakes==3.3.1
Pygments==2.19.1
pyparsing==3.2.3
pytest==8.3.5
python-dateutil==2.9.0.post0
python-json-logger==3.3.0
pytz==2025.2
PyYAML==6.0.2
pyzmq==26.3.0
referencing==0.36.2
requests==2.32.3
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rpds-py==0.24.0
scikit-fem==10.0.2
scipy==1.13.1
Send2Trash==1.8.3
six==1.17.0
sniffio==1.3.1
snowballstemmer==2.2.0
soupsieve==2.6
Sphinx==7.4.7
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
stack-data==0.6.3
terminado==0.18.1
tinycss2==1.4.0
tomli==2.2.1
tornado==6.4.2
traitlets==5.14.3
types-python-dateutil==2.9.0.20241206
typing_extensions==4.13.0
tzdata==2025.2
uri-template==1.3.0
urllib3==2.3.0
wcwidth==0.2.13
webcolors==24.11.1
webencodings==0.5.1
websocket-client==1.8.0
widgetsnbextension==4.0.13
zipp==3.21.0
| name: PyBaMM
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.16
- anyio==4.9.0
- anytree==2.12.1
- argon2-cffi==23.1.0
- argon2-cffi-bindings==21.2.0
- arrow==1.3.0
- asttokens==3.0.0
- async-lru==2.0.5
- attrs==25.3.0
- autograd==1.7.0
- babel==2.17.0
- beautifulsoup4==4.13.3
- black==25.1.0
- bleach==6.2.0
- certifi==2025.1.31
- cffi==1.17.1
- charset-normalizer==3.4.1
- click==8.1.8
- comm==0.2.2
- contourpy==1.3.0
- cycler==0.12.1
- debugpy==1.8.13
- decorator==5.2.1
- defusedxml==0.7.1
- docutils==0.21.2
- exceptiongroup==1.2.2
- executing==2.2.0
- fastjsonschema==2.21.1
- flake8==7.2.0
- fonttools==4.56.0
- fqdn==1.5.1
- guzzle-sphinx-theme==0.7.11
- h11==0.14.0
- httpcore==1.0.7
- httpx==0.28.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==8.6.1
- importlib-resources==6.5.2
- iniconfig==2.1.0
- ipykernel==6.29.5
- ipython==8.18.1
- ipywidgets==8.1.5
- isoduration==20.11.0
- jedi==0.19.2
- jinja2==3.1.6
- json5==0.10.0
- jsonpointer==3.0.0
- jsonschema==4.23.0
- jsonschema-specifications==2024.10.1
- jupyter==1.1.1
- jupyter-client==8.6.3
- jupyter-console==6.6.3
- jupyter-core==5.7.2
- jupyter-events==0.12.0
- jupyter-lsp==2.2.5
- jupyter-server==2.15.0
- jupyter-server-terminals==0.5.3
- jupyterlab==4.3.6
- jupyterlab-pygments==0.3.0
- jupyterlab-server==2.27.3
- jupyterlab-widgets==3.0.13
- kiwisolver==1.4.7
- markupsafe==3.0.2
- matplotlib==3.9.4
- matplotlib-inline==0.1.7
- mccabe==0.7.0
- mistune==3.1.3
- mypy-extensions==1.0.0
- nbclient==0.10.2
- nbconvert==7.16.6
- nbformat==5.10.4
- nest-asyncio==1.6.0
- notebook==7.3.3
- notebook-shim==0.2.4
- numpy==2.0.2
- overrides==7.7.0
- packaging==24.2
- pandas==2.2.3
- pandocfilters==1.5.1
- parso==0.8.4
- pathspec==0.12.1
- pexpect==4.9.0
- pillow==11.1.0
- platformdirs==4.3.7
- pluggy==1.5.0
- prometheus-client==0.21.1
- prompt-toolkit==3.0.50
- psutil==7.0.0
- ptyprocess==0.7.0
- pure-eval==0.2.3
- pycodestyle==2.13.0
- pycparser==2.22
- pyflakes==3.3.1
- pygments==2.19.1
- pyparsing==3.2.3
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- python-json-logger==3.3.0
- pytz==2025.2
- pyyaml==6.0.2
- pyzmq==26.3.0
- referencing==0.36.2
- requests==2.32.3
- rfc3339-validator==0.1.4
- rfc3986-validator==0.1.1
- rpds-py==0.24.0
- scikit-fem==10.0.2
- scipy==1.13.1
- send2trash==1.8.3
- six==1.17.0
- sniffio==1.3.1
- snowballstemmer==2.2.0
- soupsieve==2.6
- sphinx==7.4.7
- sphinxcontrib-applehelp==2.0.0
- sphinxcontrib-devhelp==2.0.0
- sphinxcontrib-htmlhelp==2.1.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==2.0.0
- sphinxcontrib-serializinghtml==2.0.0
- stack-data==0.6.3
- terminado==0.18.1
- tinycss2==1.4.0
- tomli==2.2.1
- tornado==6.4.2
- traitlets==5.14.3
- types-python-dateutil==2.9.0.20241206
- typing-extensions==4.13.0
- tzdata==2025.2
- uri-template==1.3.0
- urllib3==2.3.0
- wcwidth==0.2.13
- webcolors==24.11.1
- webencodings==0.5.1
- websocket-client==1.8.0
- widgetsnbextension==4.0.13
- zipp==3.21.0
prefix: /opt/conda/envs/PyBaMM
| [
"tests/unit/test_solvers/test_base_solver.py::TestBaseSolver::test_base_solver_init",
"tests/unit/test_solvers/test_scipy_solver.py::TestScipySolver::test_integrate",
"tests/unit/test_solvers/test_scipy_solver.py::TestScipySolver::test_integrate_failure",
"tests/unit/test_solvers/test_scipy_solver.py::TestScipySolver::test_integrate_with_event",
"tests/unit/test_solvers/test_scipy_solver.py::TestScipySolver::test_model_solver",
"tests/unit/test_solvers/test_scipy_solver.py::TestScipySolver::test_model_solver_ode_with_jacobian",
"tests/unit/test_solvers/test_scipy_solver.py::TestScipySolver::test_model_solver_with_event",
"tests/unit/test_solvers/test_scipy_solver.py::TestScipySolver::test_model_step",
"tests/unit/test_solvers/test_scipy_solver.py::TestScipySolver::test_ode_integrate_with_jacobian"
] | [] | [
"tests/unit/test_solvers/test_base_solver.py::TestBaseSolver::test_step_or_solve_empty_model"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,517 | 4,787 | [
"pybamm/models/full_battery_models/base_battery_model.py",
"pybamm/models/full_battery_models/lead_acid/base_lead_acid_model.py",
"pybamm/solvers/base_solver.py",
"pybamm/solvers/dae_solver.py",
"pybamm/solvers/ode_solver.py",
"pybamm/solvers/scikits_dae_solver.py",
"pybamm/solvers/scikits_ode_solver.py",
"pybamm/solvers/scipy_solver.py"
] |
|
nipy__heudiconv-379 | d31d19d6904d59ca407f5899e405f6de4ba7d00f | 2019-10-03 14:21:10 | cbacd82f0576a71ccb9dd7a59e995a8db4080253 | diff --git a/heudiconv/utils.py b/heudiconv/utils.py
index abea820..2453e9d 100644
--- a/heudiconv/utils.py
+++ b/heudiconv/utils.py
@@ -19,6 +19,11 @@ from nipype.utils.filemanip import which
import logging
lgr = logging.getLogger(__name__)
+if sys.version_info[0] > 2:
+ from json.decoder import JSONDecodeError
+else:
+ JSONDecodeError = ValueError
+
seqinfo_fields = [
'total_files_till_now', # 0
@@ -172,10 +177,15 @@ def load_json(filename):
-------
data : dict
"""
- with open(filename, 'r') as fp:
- data = json.load(fp)
- return data
+ try:
+ with open(filename, 'r') as fp:
+ data = json.load(fp)
+ except JSONDecodeError:
+ lgr.error("{fname} is not a valid json file".format(fname=filename))
+ raise
+ return data
+
def assure_no_file_exists(path):
"""Check if file or symlink (git-annex?) exists, and if so -- remove"""
| enhance explicitness about what json files heudiconv fails to read
<!-- DO NOT DELETE THIS!
This template is used to facilitate issue resolution.
All text in <!-> tags will not be displayed.
-->
### Summary
I was helping a user troubleshoot a problem with heudiconv: it was tripping up on an invalid json file, but the user assumed heudiconv only touched files in the subject directory inside the bids directory, not realizing it also reads json files elsewhere in the bids directory (which it does).
Printing which json file was invalid would have gotten this user on track much faster.
To help, the `load_json` function could be changed to something like:
```
try:
with open(filename, 'r') as fp:
data = json.load(fp)
except json.JSONDecodeError:
print("{fname} is not a valid json file".format(fname=filename)
raise
```
Thoughts on this?
If positive I can open a pull request.
<!-- If you are having conversion troubles, please share as much
relevant information as possible. This includes, but is not limited to:
- log of conversion
- heuristic
-->
### Platform details:
Choose one:
- [ ] Local environment
<!-- If selected, please provide OS and python version -->
- [x] Container
<!-- If selected, please provide container name and tag"-->
- Heudiconv version: 0.5.4
<!-- To check: run heudiconv with just the --version flag -->
| nipy/heudiconv | diff --git a/heudiconv/tests/test_utils.py b/heudiconv/tests/test_utils.py
index 47faec4..ad84673 100644
--- a/heudiconv/tests/test_utils.py
+++ b/heudiconv/tests/test_utils.py
@@ -1,10 +1,16 @@
+import json
import os
import os.path as op
+
from heudiconv.utils import (
get_known_heuristics_with_descriptions,
get_heuristic_description,
load_heuristic,
- json_dumps_pretty)
+ json_dumps_pretty,
+ load_json,
+ create_tree,
+ save_json,
+ JSONDecodeError)
import pytest
from .utils import HEURISTICS_PATH
@@ -58,4 +64,24 @@ def test_json_dumps_pretty():
'Mar 3 2017 10:46:13 by eja'
# just the date which reveals the issue
# tstr = 'Mar 3 2017 10:46:13 by eja'
- assert pretty({'WipMemBlock': tstr}) == '{\n "WipMemBlock": "%s"\n}' % tstr
\ No newline at end of file
+ assert pretty({'WipMemBlock': tstr}) == '{\n "WipMemBlock": "%s"\n}' % tstr
+
+
+def test_load_json(tmp_path, caplog):
+ # test invalid json
+ ifname = 'invalid.json'
+ invalid_json_file = str(tmp_path / ifname)
+ create_tree(str(tmp_path), {ifname: u"I'm Jason Bourne"})
+
+ with pytest.raises(JSONDecodeError):
+ load_json(str(invalid_json_file))
+
+ assert ifname in caplog.text
+
+ # test valid json
+ vcontent = {"secret": "spy"}
+ vfname = "valid.json"
+ valid_json_file = str(tmp_path / vfname)
+ save_json(valid_json_file, vcontent)
+
+ assert load_json(valid_json_file) == vcontent
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 1
} | 0.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"tinydb",
"requests"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | annexremote==1.6.6
appdirs==1.4.4
attrs==22.2.0
boto==2.49.0
certifi==2021.5.30
cffi==1.15.1
chardet==5.0.0
charset-normalizer==2.0.12
ci-info==0.2.0
click==8.0.4
cryptography==40.0.2
datalad==0.15.6
dcmstack==0.9.0
decorator==4.4.2
Deprecated==1.2.18
etelemetry==0.2.2
fasteners==0.19
filelock==3.4.1
-e git+https://github.com/nipy/heudiconv.git@d31d19d6904d59ca407f5899e405f6de4ba7d00f#egg=heudiconv
humanize==3.14.0
idna==3.10
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
inotify==0.2.10
iso8601==1.1.0
isodate==0.6.1
jeepney==0.7.1
keyring==23.4.1
keyrings.alt==4.1.0
lxml==5.3.1
mock==5.2.0
msgpack==1.0.5
networkx==2.5.1
nibabel==3.2.2
nipype==1.7.1
nose==1.3.7
numpy==1.19.5
packaging==21.3
pathlib==1.0.1
patool==1.12
pluggy==1.0.0
prov==2.0.1
py==1.11.0
pycparser==2.21
pydicom==2.3.1
pydot==1.4.2
PyGithub==1.56
PyJWT==2.4.0
PyNaCl==1.5.0
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.9.0.post0
python-gitlab==2.10.1
rdflib==5.0.0
requests==2.27.1
requests-toolbelt==1.0.0
scipy==1.5.4
SecretStorage==3.3.3
simplejson==3.20.1
six==1.17.0
swebench-matterhorn @ file:///swebench_matterhorn
tinydb==4.7.0
tomli==1.2.3
tqdm==4.64.1
traits==6.4.1
typing_extensions==4.1.1
urllib3==1.26.20
Whoosh==2.7.4
wrapt==1.16.0
zipp==3.6.0
| name: heudiconv
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- annexremote==1.6.6
- appdirs==1.4.4
- attrs==22.2.0
- boto==2.49.0
- cffi==1.15.1
- chardet==5.0.0
- charset-normalizer==2.0.12
- ci-info==0.2.0
- click==8.0.4
- cryptography==40.0.2
- datalad==0.15.6
- dcmstack==0.9.0
- decorator==4.4.2
- deprecated==1.2.18
- etelemetry==0.2.2
- fasteners==0.19
- filelock==3.4.1
- humanize==3.14.0
- idna==3.10
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- inotify==0.2.10
- iso8601==1.1.0
- isodate==0.6.1
- jeepney==0.7.1
- keyring==23.4.1
- keyrings-alt==4.1.0
- lxml==5.3.1
- mock==5.2.0
- msgpack==1.0.5
- networkx==2.5.1
- nibabel==3.2.2
- nipype==1.7.1
- nose==1.3.7
- numpy==1.19.5
- packaging==21.3
- pathlib==1.0.1
- patool==1.12
- pluggy==1.0.0
- prov==2.0.1
- py==1.11.0
- pycparser==2.21
- pydicom==2.3.1
- pydot==1.4.2
- pygithub==1.56
- pyjwt==2.4.0
- pynacl==1.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- python-gitlab==2.10.1
- rdflib==5.0.0
- requests==2.27.1
- requests-toolbelt==1.0.0
- scipy==1.5.4
- secretstorage==3.3.3
- simplejson==3.20.1
- six==1.17.0
- swebench-matterhorn==0.0.0
- tinydb==4.7.0
- tomli==1.2.3
- tqdm==4.64.1
- traits==6.4.1
- typing-extensions==4.1.1
- urllib3==1.26.20
- whoosh==2.7.4
- wrapt==1.16.0
- zipp==3.6.0
prefix: /opt/conda/envs/heudiconv
| [
"heudiconv/tests/test_utils.py::test_get_known_heuristics_with_descriptions",
"heudiconv/tests/test_utils.py::test_get_heuristic_description",
"heudiconv/tests/test_utils.py::test_load_heuristic",
"heudiconv/tests/test_utils.py::test_json_dumps_pretty",
"heudiconv/tests/test_utils.py::test_load_json"
] | [] | [] | [] | Apache License 2.0 | 5,518 | 292 | [
"heudiconv/utils.py"
] |
|
ucfopen__canvasapi-306 | 3ef34a89d63aa7461a079bc271944ad1f7ef5099 | 2019-10-03 21:24:36 | 62aa3eb8045f4a5f5edeb16fafad3fdf44bff624 | diff --git a/canvasapi/paginated_list.py b/canvasapi/paginated_list.py
index 21cceaa..90ceade 100644
--- a/canvasapi/paginated_list.py
+++ b/canvasapi/paginated_list.py
@@ -11,6 +11,8 @@ class PaginatedList(object):
def __getitem__(self, index):
assert isinstance(index, (int, slice))
if isinstance(index, int):
+ if index < 0:
+ raise IndexError("Cannot negative index a PaginatedList")
self._get_up_to_index(index)
return self._elements[index]
else:
@@ -105,6 +107,9 @@ class PaginatedList(object):
self._stop = the_slice.stop
self._step = the_slice.step or 1
+ if self._start < 0 or self._stop < 0:
+ raise IndexError("Cannot negative index a PaginatedList slice")
+
def __iter__(self):
index = self._start
while not self._finished(index):
| Negative index on PaginatedList causes unexpected results
# Describe the bug
It is expected that you cannot negative-index the PaginatedList, since it cannot know its length until it is fully loaded. However, it is still possible to use a negative index if at least one page of results has been loaded. When this happens, the list returns the last value it has loaded (which is not necessarily the last value it can load).
# To Reproduce
Steps to reproduce the behavior:
```python
users = canvas.get_course(1).get_users()
users[1]
users[-1]
```
# Expected behavior
I am not allowed to use negative indexing (wishlist, I *can* use negative indexing :D)
# Environment information (please complete the following information)
- Python 3.7.0
- CanvasAPI 0.14.0
# Additional context
I'll try my hand at fixing this.
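A minimal sketch of the guard I have in mind — a simplified stand-in class, not the real `PaginatedList` (which fetches pages lazily over HTTP):

```python
class PaginatedList:
    """Simplified stand-in: _elements holds only the pages fetched so far."""

    def __init__(self, loaded_elements):
        self._elements = list(loaded_elements)

    def __getitem__(self, index):
        if not isinstance(index, int):
            raise TypeError("this sketch only supports int indices")
        if index < 0:
            # The list cannot know its true length until every page has
            # been fetched, so users[-1] would silently return the last
            # *loaded* element rather than the last element overall.
            raise IndexError("Cannot negative index a PaginatedList")
        return self._elements[index]


users = PaginatedList(["alice", "bob"])  # pretend only one page is loaded
assert users[1] == "bob"
```

Raising `IndexError` (rather than returning the wrong element) matches what built-in sequences do for out-of-range access, so callers can handle it with the usual idioms.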
| ucfopen/canvasapi | diff --git a/tests/test_paginated_list.py b/tests/test_paginated_list.py
index 570faa8..99fa473 100644
--- a/tests/test_paginated_list.py
+++ b/tests/test_paginated_list.py
@@ -170,3 +170,36 @@ class TestPaginatedList(unittest.TestCase):
)
self.assertIsInstance(pag_list[0], EnrollmentTerm)
+
+ def test_negative_index(self, m):
+ # Regression test for https://github.com/ucfopen/canvasapi/issues/305
+ # Ensure that we can't use negative indexing, even after loading a page
+
+ register_uris({"paginated_list": ["4_2_pages_p1", "4_2_pages_p2"]}, m)
+ pag_list = PaginatedList(User, self.requester, "GET", "four_objects_two_pages")
+ pag_list[0]
+
+ with self.assertRaises(IndexError):
+ pag_list[-1]
+
+ def test_negative_index_for_slice_start(self, m):
+ # Regression test for https://github.com/ucfopen/canvasapi/issues/305
+ # Ensure that we can't slice using a negative index as the start item
+
+ register_uris({"paginated_list": ["4_2_pages_p1", "4_2_pages_p2"]}, m)
+ pag_list = PaginatedList(User, self.requester, "GET", "four_objects_two_pages")
+ pag_list[0]
+
+ with self.assertRaises(IndexError):
+ pag_list[-1:1]
+
+ def test_negative_index_for_slice_end(self, m):
+ # Regression test for https://github.com/ucfopen/canvasapi/issues/305
+ # Ensure that we can't slice using a negative index as the end item
+
+ register_uris({"paginated_list": ["4_2_pages_p1", "4_2_pages_p2"]}, m)
+ pag_list = PaginatedList(User, self.requester, "GET", "four_objects_two_pages")
+ pag_list[0]
+
+ with self.assertRaises(IndexError):
+ pag_list[:-1]
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 1
} | 0.14 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"dev_requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs==22.2.0
Babel==2.11.0
black==22.8.0
-e git+https://github.com/ucfopen/canvasapi.git@3ef34a89d63aa7461a079bc271944ad1f7ef5099#egg=canvasapi
certifi==2021.5.30
cfgv==3.3.1
charset-normalizer==2.0.12
click==8.0.4
coverage==6.2
dataclasses==0.8
distlib==0.3.9
docutils==0.15.2
filelock==3.4.1
flake8==5.0.4
identify==2.4.4
idna==3.10
imagesize==1.4.1
importlib-metadata==4.2.0
importlib-resources==5.2.3
iniconfig==1.1.1
Jinja2==3.0.3
MarkupSafe==2.0.1
mccabe==0.7.0
mypy-extensions==1.0.0
nodeenv==1.6.0
packaging==21.3
pathspec==0.9.0
platformdirs==2.4.0
pluggy==1.0.0
pre-commit==2.17.0
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytz==2025.2
PyYAML==6.0.1
requests==2.27.1
requests-mock==1.12.1
six==1.17.0
snowballstemmer==2.2.0
Sphinx==4.3.2
sphinx-rtd-theme==1.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
toml==0.10.2
tomli==1.2.3
typed-ast==1.5.5
typing_extensions==4.1.1
urllib3==1.26.20
virtualenv==20.16.2
zipp==3.6.0
| name: canvasapi
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- attrs==22.2.0
- babel==2.11.0
- black==22.8.0
- cfgv==3.3.1
- charset-normalizer==2.0.12
- click==8.0.4
- coverage==6.2
- dataclasses==0.8
- distlib==0.3.9
- docutils==0.15.2
- filelock==3.4.1
- flake8==5.0.4
- identify==2.4.4
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.2.0
- importlib-resources==5.2.3
- iniconfig==1.1.1
- jinja2==3.0.3
- markupsafe==2.0.1
- mccabe==0.7.0
- mypy-extensions==1.0.0
- nodeenv==1.6.0
- packaging==21.3
- pathspec==0.9.0
- platformdirs==2.4.0
- pluggy==1.0.0
- pre-commit==2.17.0
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytz==2025.2
- pyyaml==6.0.1
- requests==2.27.1
- requests-mock==1.12.1
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==4.3.2
- sphinx-rtd-theme==1.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- toml==0.10.2
- tomli==1.2.3
- typed-ast==1.5.5
- typing-extensions==4.1.1
- urllib3==1.26.20
- virtualenv==20.16.2
- zipp==3.6.0
prefix: /opt/conda/envs/canvasapi
| [
"tests/test_paginated_list.py::TestPaginatedList::test_negative_index",
"tests/test_paginated_list.py::TestPaginatedList::test_negative_index_for_slice_end",
"tests/test_paginated_list.py::TestPaginatedList::test_negative_index_for_slice_start"
] | [] | [
"tests/test_paginated_list.py::TestPaginatedList::test_getitem_first",
"tests/test_paginated_list.py::TestPaginatedList::test_getitem_second_page",
"tests/test_paginated_list.py::TestPaginatedList::test_iterator",
"tests/test_paginated_list.py::TestPaginatedList::test_paginated_list_empty",
"tests/test_paginated_list.py::TestPaginatedList::test_paginated_list_four_two_pages",
"tests/test_paginated_list.py::TestPaginatedList::test_paginated_list_single",
"tests/test_paginated_list.py::TestPaginatedList::test_paginated_list_six_three_pages",
"tests/test_paginated_list.py::TestPaginatedList::test_paginated_list_two_one_page",
"tests/test_paginated_list.py::TestPaginatedList::test_repr",
"tests/test_paginated_list.py::TestPaginatedList::test_root_element",
"tests/test_paginated_list.py::TestPaginatedList::test_root_element_incorrect",
"tests/test_paginated_list.py::TestPaginatedList::test_slice_beginning",
"tests/test_paginated_list.py::TestPaginatedList::test_slice_end",
"tests/test_paginated_list.py::TestPaginatedList::test_slice_middle",
"tests/test_paginated_list.py::TestPaginatedList::test_slice_out_of_bounds",
"tests/test_paginated_list.py::TestPaginatedList::test_slice_oversize"
] | [] | MIT License | 5,525 | 242 | [
"canvasapi/paginated_list.py"
] |
|
tornadoweb__tornado-2745 | ff985fe509513325462441e017f1ec31dda737c1 | 2019-10-04 01:33:49 | 31088cde10167c96e62fa88883b34582d4b5bfc4 | diff --git a/tornado/routing.py b/tornado/routing.py
index a35ac348..e137a728 100644
--- a/tornado/routing.py
+++ b/tornado/routing.py
@@ -627,7 +627,13 @@ class PathMatches(Matcher):
if ")" in fragment:
paren_loc = fragment.index(")")
if paren_loc >= 0:
- pieces.append("%s" + fragment[paren_loc + 1 :])
+ try:
+ unescaped_fragment = re_unescape(fragment[paren_loc + 1 :])
+ except ValueError:
+ # If we can't unescape part of it, we can't
+ # reverse this url.
+ return (None, None)
+ pieces.append("%s" + unescaped_fragment)
else:
try:
unescaped_fragment = re_unescape(fragment)
| reverse_url() unescapes regex inconsistently
If you have a rule with an escaped regex character coming after a group in parentheses, like:
```python
URLSpec('/project/([0-9]+)/export/codebook\\.csv', ExportCodebookCsv)
```
it will correctly match `/project/1/export/codebook.csv`, but will reverse to `/project/1/export/codebook\.csv`.
Of course, if you remove the backslash:
```python
URLSpec('/project/([0-9]+)/export/codebook.csv', ExportCodebookCsv)
```
it will reverse to the wanted `/project/1/export/codebook.csv` but then it will also match `/project/1/export/codebookXcsv` (the dot matches any character).
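The asymmetry is easy to reproduce with plain `re`, using a one-line substitution as a stand-in for what `re_unescape()` does:

```python
import re

pattern = r"/project/([0-9]+)/export/codebook\.csv"

# The escaped pattern matches the dot literally, as intended.
assert re.fullmatch(pattern, "/project/1/export/codebook.csv")
assert re.fullmatch(pattern, "/project/1/export/codebookXcsv") is None

# Reversing by pasting the raw fragment after the group keeps the backslash.
trailing = pattern.split(")", 1)[1]  # the fragment after the capture group
assert "/project/%s" % 1 + trailing == "/project/1/export/codebook\\.csv"

# Unescaping that fragment too yields the wanted URL.
unescaped = re.sub(r"\\(.)", r"\1", trailing)
assert "/project/%s" % 1 + unescaped == "/project/1/export/codebook.csv"
```

So escaping the dot is the correct pattern to write; the reversal side just needs to unescape every fragment, not only the first one.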
This is because `re_unescape()` is only called for the first fragment, not the following ones; note the absence of any unescaping at line 630:
https://github.com/tornadoweb/tornado/blob/ff985fe509513325462441e017f1ec31dda737c1/tornado/routing.py#L626-L638
See also #1619 | tornadoweb/tornado | diff --git a/tornado/test/web_test.py b/tornado/test/web_test.py
index 5908710a..0832f093 100644
--- a/tornado/test/web_test.py
+++ b/tornado/test/web_test.py
@@ -3107,6 +3107,10 @@ class URLSpecReverseTest(unittest.TestCase):
self.assertEqual(
"/api/v1/foo/bar", url(r"^/api/v1/foo/(\w+)$", None).reverse("bar")
)
+ self.assertEqual(
+ "/api.v1/foo/5/icon.png",
+ url(r"/api\.v1/foo/([0-9]+)/icon\.png", None).reverse(5),
+ )
class RedirectHandlerTest(WebTestCase):
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 6.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"flake8",
"black",
"mypy"
],
"pre_install": [
"apt-get update",
"apt-get install -y build-essential python3-dev"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///croot/attrs_1668696182826/work
black==23.3.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
click==8.1.8
flake8==5.0.4
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
mccabe==0.7.0
mypy==1.4.1
mypy-extensions==1.0.0
packaging @ file:///croot/packaging_1671697413597/work
pathspec==0.11.2
platformdirs==4.0.0
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pyflakes==2.5.0
pytest==7.1.2
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
-e git+https://github.com/tornadoweb/tornado.git@ff985fe509513325462441e017f1ec31dda737c1#egg=tornado
typed-ast==1.5.5
typing_extensions==4.7.1
zipp @ file:///croot/zipp_1672387121353/work
| name: tornado
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- black==23.3.0
- click==8.1.8
- flake8==5.0.4
- importlib-metadata==4.2.0
- mccabe==0.7.0
- mypy==1.4.1
- mypy-extensions==1.0.0
- pathspec==0.11.2
- platformdirs==4.0.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- typed-ast==1.5.5
- typing-extensions==4.7.1
prefix: /opt/conda/envs/tornado
| [
"tornado/test/web_test.py::URLSpecReverseTest::test_reverse_arguments"
] | [] | [
"tornado/test/web_test.py::SecureCookieV1Test::test_arbitrary_bytes",
"tornado/test/web_test.py::SecureCookieV1Test::test_cookie_tampering_future_timestamp",
"tornado/test/web_test.py::SecureCookieV1Test::test_round_trip",
"tornado/test/web_test.py::SecureCookieV2Test::test_key_version_increment_version",
"tornado/test/web_test.py::SecureCookieV2Test::test_key_version_invalidate_version",
"tornado/test/web_test.py::SecureCookieV2Test::test_key_version_roundtrip",
"tornado/test/web_test.py::SecureCookieV2Test::test_key_version_roundtrip_differing_version",
"tornado/test/web_test.py::SecureCookieV2Test::test_round_trip",
"tornado/test/web_test.py::FinalReturnTest::test_finish_method_return_future",
"tornado/test/web_test.py::FinalReturnTest::test_render_method_return_future",
"tornado/test/web_test.py::CookieTest::test_cookie_special_char",
"tornado/test/web_test.py::CookieTest::test_get_cookie",
"tornado/test/web_test.py::CookieTest::test_set_cookie",
"tornado/test/web_test.py::CookieTest::test_set_cookie_domain",
"tornado/test/web_test.py::CookieTest::test_set_cookie_expires_days",
"tornado/test/web_test.py::CookieTest::test_set_cookie_false_flags",
"tornado/test/web_test.py::CookieTest::test_set_cookie_max_age",
"tornado/test/web_test.py::CookieTest::test_set_cookie_overwrite",
"tornado/test/web_test.py::AuthRedirectTest::test_absolute_auth_redirect",
"tornado/test/web_test.py::AuthRedirectTest::test_relative_auth_redirect",
"tornado/test/web_test.py::ConnectionCloseTest::test_connection_close",
"tornado/test/web_test.py::RequestEncodingTest::test_error",
"tornado/test/web_test.py::RequestEncodingTest::test_group_encoding",
"tornado/test/web_test.py::RequestEncodingTest::test_group_question_mark",
"tornado/test/web_test.py::RequestEncodingTest::test_slashes",
"tornado/test/web_test.py::WSGISafeWebTest::test_decode_argument",
"tornado/test/web_test.py::WSGISafeWebTest::test_decode_argument_invalid_unicode",
"tornado/test/web_test.py::WSGISafeWebTest::test_decode_argument_plus",
"tornado/test/web_test.py::WSGISafeWebTest::test_get_argument",
"tornado/test/web_test.py::WSGISafeWebTest::test_get_body_arguments",
"tornado/test/web_test.py::WSGISafeWebTest::test_get_query_arguments",
"tornado/test/web_test.py::WSGISafeWebTest::test_header_injection",
"tornado/test/web_test.py::WSGISafeWebTest::test_multi_header",
"tornado/test/web_test.py::WSGISafeWebTest::test_no_gzip",
"tornado/test/web_test.py::WSGISafeWebTest::test_optional_path",
"tornado/test/web_test.py::WSGISafeWebTest::test_redirect",
"tornado/test/web_test.py::WSGISafeWebTest::test_reverse_url",
"tornado/test/web_test.py::WSGISafeWebTest::test_types",
"tornado/test/web_test.py::WSGISafeWebTest::test_uimodule_resources",
"tornado/test/web_test.py::WSGISafeWebTest::test_uimodule_unescaped",
"tornado/test/web_test.py::WSGISafeWebTest::test_web_redirect",
"tornado/test/web_test.py::WSGISafeWebTest::test_web_redirect_double_slash",
"tornado/test/web_test.py::NonWSGIWebTests::test_empty_flush",
"tornado/test/web_test.py::ErrorResponseTest::test_default",
"tornado/test/web_test.py::ErrorResponseTest::test_failed_write_error",
"tornado/test/web_test.py::ErrorResponseTest::test_write_error",
"tornado/test/web_test.py::StaticFileTest::test_absolute_static_url",
"tornado/test/web_test.py::StaticFileTest::test_absolute_version_exclusion",
"tornado/test/web_test.py::StaticFileTest::test_include_host_override",
"tornado/test/web_test.py::StaticFileTest::test_path_traversal_protection",
"tornado/test/web_test.py::StaticFileTest::test_relative_version_exclusion",
"tornado/test/web_test.py::StaticFileTest::test_root_static_path",
"tornado/test/web_test.py::StaticFileTest::test_static_304_etag_modified_bug",
"tornado/test/web_test.py::StaticFileTest::test_static_304_if_modified_since",
"tornado/test/web_test.py::StaticFileTest::test_static_304_if_none_match",
"tornado/test/web_test.py::StaticFileTest::test_static_404",
"tornado/test/web_test.py::StaticFileTest::test_static_compressed_files",
"tornado/test/web_test.py::StaticFileTest::test_static_etag",
"tornado/test/web_test.py::StaticFileTest::test_static_files",
"tornado/test/web_test.py::StaticFileTest::test_static_head",
"tornado/test/web_test.py::StaticFileTest::test_static_head_range",
"tornado/test/web_test.py::StaticFileTest::test_static_if_modified_since_pre_epoch",
"tornado/test/web_test.py::StaticFileTest::test_static_if_modified_since_time_zone",
"tornado/test/web_test.py::StaticFileTest::test_static_invalid_range",
"tornado/test/web_test.py::StaticFileTest::test_static_range_if_none_match",
"tornado/test/web_test.py::StaticFileTest::test_static_unsatisfiable_range_end_less_than_start",
"tornado/test/web_test.py::StaticFileTest::test_static_unsatisfiable_range_invalid_start",
"tornado/test/web_test.py::StaticFileTest::test_static_unsatisfiable_range_zero_suffix",
"tornado/test/web_test.py::StaticFileTest::test_static_url",
"tornado/test/web_test.py::StaticFileTest::test_static_with_range",
"tornado/test/web_test.py::StaticFileTest::test_static_with_range_end_edge",
"tornado/test/web_test.py::StaticFileTest::test_static_with_range_full_file",
"tornado/test/web_test.py::StaticFileTest::test_static_with_range_full_past_end",
"tornado/test/web_test.py::StaticFileTest::test_static_with_range_neg_end",
"tornado/test/web_test.py::StaticFileTest::test_static_with_range_neg_past_start",
"tornado/test/web_test.py::StaticFileTest::test_static_with_range_partial_past_end",
"tornado/test/web_test.py::StaticDefaultFilenameTest::test_static_default_filename",
"tornado/test/web_test.py::StaticDefaultFilenameTest::test_static_default_redirect",
"tornado/test/web_test.py::StaticFileWithPathTest::test_serve",
"tornado/test/web_test.py::CustomStaticFileTest::test_serve",
"tornado/test/web_test.py::CustomStaticFileTest::test_static_url",
"tornado/test/web_test.py::HostMatchingTest::test_host_matching",
"tornado/test/web_test.py::DefaultHostMatchingTest::test_default_host_matching",
"tornado/test/web_test.py::NamedURLSpecGroupsTest::test_named_urlspec_groups",
"tornado/test/web_test.py::ClearHeaderTest::test_clear_header",
"tornado/test/web_test.py::Header204Test::test_204_headers",
"tornado/test/web_test.py::Header304Test::test_304_headers",
"tornado/test/web_test.py::StatusReasonTest::test_status",
"tornado/test/web_test.py::DateHeaderTest::test_date_header",
"tornado/test/web_test.py::RaiseWithReasonTest::test_httperror_str",
"tornado/test/web_test.py::RaiseWithReasonTest::test_httperror_str_from_httputil",
"tornado/test/web_test.py::RaiseWithReasonTest::test_raise_with_reason",
"tornado/test/web_test.py::ErrorHandlerXSRFTest::test_404_xsrf",
"tornado/test/web_test.py::ErrorHandlerXSRFTest::test_error_xsrf",
"tornado/test/web_test.py::GzipTestCase::test_gzip",
"tornado/test/web_test.py::GzipTestCase::test_gzip_not_requested",
"tornado/test/web_test.py::GzipTestCase::test_gzip_static",
"tornado/test/web_test.py::GzipTestCase::test_vary_already_present",
"tornado/test/web_test.py::GzipTestCase::test_vary_already_present_multiple",
"tornado/test/web_test.py::PathArgsInPrepareTest::test_kw",
"tornado/test/web_test.py::PathArgsInPrepareTest::test_pos",
"tornado/test/web_test.py::ClearAllCookiesTest::test_clear_all_cookies",
"tornado/test/web_test.py::ExceptionHandlerTest::test_http_error",
"tornado/test/web_test.py::ExceptionHandlerTest::test_known_error",
"tornado/test/web_test.py::ExceptionHandlerTest::test_unknown_error",
"tornado/test/web_test.py::BuggyLoggingTest::test_buggy_log_exception",
"tornado/test/web_test.py::UIMethodUIModuleTest::test_ui_method",
"tornado/test/web_test.py::GetArgumentErrorTest::test_catch_error",
"tornado/test/web_test.py::SetLazyPropertiesTest::test_set_properties",
"tornado/test/web_test.py::GetCurrentUserTest::test_get_current_user_from_ui_module_is_lazy",
"tornado/test/web_test.py::GetCurrentUserTest::test_get_current_user_from_ui_module_works",
"tornado/test/web_test.py::GetCurrentUserTest::test_get_current_user_works",
"tornado/test/web_test.py::UnimplementedHTTPMethodsTest::test_unimplemented_standard_methods",
"tornado/test/web_test.py::UnimplementedNonStandardMethodsTest::test_unimplemented_other",
"tornado/test/web_test.py::UnimplementedNonStandardMethodsTest::test_unimplemented_patch",
"tornado/test/web_test.py::AllHTTPMethodsTest::test_standard_methods",
"tornado/test/web_test.py::PatchMethodTest::test_other",
"tornado/test/web_test.py::PatchMethodTest::test_patch",
"tornado/test/web_test.py::FinishInPrepareTest::test_finish_in_prepare",
"tornado/test/web_test.py::Default404Test::test_404",
"tornado/test/web_test.py::Custom404Test::test_404",
"tornado/test/web_test.py::DefaultHandlerArgumentsTest::test_403",
"tornado/test/web_test.py::HandlerByNameTest::test_handler_by_name",
"tornado/test/web_test.py::StreamingRequestBodyTest::test_close_during_upload",
"tornado/test/web_test.py::StreamingRequestBodyTest::test_early_return",
"tornado/test/web_test.py::StreamingRequestBodyTest::test_early_return_with_data",
"tornado/test/web_test.py::StreamingRequestBodyTest::test_streaming_body",
"tornado/test/web_test.py::DecoratedStreamingRequestFlowControlTest::test_flow_control_chunked_body",
"tornado/test/web_test.py::DecoratedStreamingRequestFlowControlTest::test_flow_control_compressed_body",
"tornado/test/web_test.py::DecoratedStreamingRequestFlowControlTest::test_flow_control_fixed_body",
"tornado/test/web_test.py::NativeStreamingRequestFlowControlTest::test_flow_control_chunked_body",
"tornado/test/web_test.py::NativeStreamingRequestFlowControlTest::test_flow_control_compressed_body",
"tornado/test/web_test.py::NativeStreamingRequestFlowControlTest::test_flow_control_fixed_body",
"tornado/test/web_test.py::IncorrectContentLengthTest::test_content_length_too_high",
"tornado/test/web_test.py::IncorrectContentLengthTest::test_content_length_too_low",
"tornado/test/web_test.py::ClientCloseTest::test_client_close",
"tornado/test/web_test.py::SignedValueTest::test_expired",
"tornado/test/web_test.py::SignedValueTest::test_key_version_retrieval",
"tornado/test/web_test.py::SignedValueTest::test_key_versioning_invalid_key",
"tornado/test/web_test.py::SignedValueTest::test_key_versioning_read_write_default_key",
"tornado/test/web_test.py::SignedValueTest::test_key_versioning_read_write_non_default_key",
"tornado/test/web_test.py::SignedValueTest::test_known_values",
"tornado/test/web_test.py::SignedValueTest::test_name_swap",
"tornado/test/web_test.py::SignedValueTest::test_non_ascii",
"tornado/test/web_test.py::SignedValueTest::test_payload_tampering",
"tornado/test/web_test.py::SignedValueTest::test_signature_tampering",
"tornado/test/web_test.py::XSRFTest::test_cross_user",
"tornado/test/web_test.py::XSRFTest::test_distinct_tokens",
"tornado/test/web_test.py::XSRFTest::test_refresh_token",
"tornado/test/web_test.py::XSRFTest::test_versioning",
"tornado/test/web_test.py::XSRFTest::test_xsrf_fail_argument_invalid_format",
"tornado/test/web_test.py::XSRFTest::test_xsrf_fail_body_no_cookie",
"tornado/test/web_test.py::XSRFTest::test_xsrf_fail_cookie_invalid_format",
"tornado/test/web_test.py::XSRFTest::test_xsrf_fail_cookie_no_body",
"tornado/test/web_test.py::XSRFTest::test_xsrf_fail_no_token",
"tornado/test/web_test.py::XSRFTest::test_xsrf_success_header",
"tornado/test/web_test.py::XSRFTest::test_xsrf_success_non_hex_token",
"tornado/test/web_test.py::XSRFTest::test_xsrf_success_post_body",
"tornado/test/web_test.py::XSRFTest::test_xsrf_success_query_string",
"tornado/test/web_test.py::XSRFTest::test_xsrf_success_short_token",
"tornado/test/web_test.py::XSRFCookieKwargsTest::test_xsrf_httponly",
"tornado/test/web_test.py::FinishExceptionTest::test_finish_exception",
"tornado/test/web_test.py::DecoratorTest::test_addslash",
"tornado/test/web_test.py::DecoratorTest::test_removeslash",
"tornado/test/web_test.py::CacheTest::test_multiple_strong_etag_match",
"tornado/test/web_test.py::CacheTest::test_multiple_strong_etag_not_match",
"tornado/test/web_test.py::CacheTest::test_multiple_weak_etag_match",
"tornado/test/web_test.py::CacheTest::test_multiple_weak_etag_not_match",
"tornado/test/web_test.py::CacheTest::test_strong_etag_match",
"tornado/test/web_test.py::CacheTest::test_strong_etag_not_match",
"tornado/test/web_test.py::CacheTest::test_weak_etag_match",
"tornado/test/web_test.py::CacheTest::test_weak_etag_not_match",
"tornado/test/web_test.py::CacheTest::test_wildcard_etag",
"tornado/test/web_test.py::RequestSummaryTest::test_missing_remote_ip",
"tornado/test/web_test.py::HTTPErrorTest::test_copy",
"tornado/test/web_test.py::ApplicationTest::test_listen",
"tornado/test/web_test.py::URLSpecReverseTest::test_non_reversible",
"tornado/test/web_test.py::URLSpecReverseTest::test_reverse",
"tornado/test/web_test.py::RedirectHandlerTest::test_basic_redirect",
"tornado/test/web_test.py::RedirectHandlerTest::test_redirect_pattern",
"tornado/test/web_test.py::RedirectHandlerTest::test_redirect_with_appending_argument",
"tornado/test/web_test.py::RedirectHandlerTest::test_redirect_with_argument"
] | [] | Apache License 2.0 | 5,526 | 204 | [
"tornado/routing.py"
] |
|
roaldnefs__salt-lint-18 | 025a8e7dc451ed76f34cdb25a73146ab6efe1be1 | 2019-10-04 22:46:55 | 616f57796a6c0a0f18a7939a8bb3ce6b8d4915ac | diff --git a/lib/saltlint/rules/JinjaCommentHasSpacesRule.py b/lib/saltlint/rules/JinjaCommentHasSpacesRule.py
index 7b7c468..2253cdb 100644
--- a/lib/saltlint/rules/JinjaCommentHasSpacesRule.py
+++ b/lib/saltlint/rules/JinjaCommentHasSpacesRule.py
@@ -14,7 +14,7 @@ class JinjaCommentHasSpacesRule(SaltLintRule):
tags = ['formatting']
version_added = 'v0.0.5'
- bracket_regex = re.compile(r"{#[^ -]|{#-[^ ]|[^ -]#}|[^ ]-#}")
+ bracket_regex = re.compile(r"{#[^ \-\+]|{#[\-\+][^ ]|[^ \-\+]#}|[^ ][\-\+]#}")
def match(self, file, line):
return self.bracket_regex.search(line)
diff --git a/lib/saltlint/rules/JinjaStatementHasSpacesRule.py b/lib/saltlint/rules/JinjaStatementHasSpacesRule.py
index 327cca7..80fa45f 100644
--- a/lib/saltlint/rules/JinjaStatementHasSpacesRule.py
+++ b/lib/saltlint/rules/JinjaStatementHasSpacesRule.py
@@ -14,7 +14,7 @@ class JinjaStatementHasSpacesRule(SaltLintRule):
tags = ['formatting']
version_added = 'v0.0.2'
- bracket_regex = re.compile(r"{%[^ -]|{%-[^ ]|[^ -]%}|[^ ]-%}")
+ bracket_regex = re.compile(r"{%[^ \-\+]|{%[\-\+][^ ]|[^ \-\+]%}|[^ ][\-\+]%}")
def match(self, file, line):
return self.bracket_regex.search(line)
diff --git a/lib/saltlint/rules/JinjaVariableHasSpacesRule.py b/lib/saltlint/rules/JinjaVariableHasSpacesRule.py
index fe808f8..67da64d 100644
--- a/lib/saltlint/rules/JinjaVariableHasSpacesRule.py
+++ b/lib/saltlint/rules/JinjaVariableHasSpacesRule.py
@@ -14,7 +14,7 @@ class JinjaVariableHasSpacesRule(SaltLintRule):
tags = ['formatting']
version_added = 'v0.0.1'
- bracket_regex = re.compile(r"{{[^ -]|{{-[^ ]|[^ -]}}|[^ ]-}}")
+ bracket_regex = re.compile(r"{{[^ \-\+]|{{[-\+][^ ]|[^ \-\+]}}|[^ ][-\+]}}")
def match(self, file, line):
return self.bracket_regex.search(line)
| Fix whitespace control in Jinja rules
Currently only the `-` character is accepted by the Jinja whitespace rules but technically `+` should be supported as well, i.e.:
{%+ ... +%}
{{+ ... +}}
{#+ ... +#}
That's mentioned in the [upstream documentation](https://jinja.palletsprojects.com/en/2.10.x/templates/#whitespace-control) as well.
Thanks @myii, for mentioning this in
https://github.com/roaldnefs/salt-lint/pull/13#issuecomment-538547711! | roaldnefs/salt-lint | diff --git a/tests/unit/TestJinjaCommentHasSpaces.py b/tests/unit/TestJinjaCommentHasSpaces.py
index 16ddb2f..6e2ae54 100644
--- a/tests/unit/TestJinjaCommentHasSpaces.py
+++ b/tests/unit/TestJinjaCommentHasSpaces.py
@@ -10,11 +10,11 @@ from tests import RunFromText
GOOD_COMMENT_LINE = '''
-{#- set example='good' -#}
+{#- set example='good' +#}
'''
BAD_COMMENT_LINE = '''
-{#-set example='bad'-#}
+{#-set example='bad'+#}
'''
class TestLineTooLongRule(unittest.TestCase):
diff --git a/tests/unit/TestJinjaStatementHasSpaces.py b/tests/unit/TestJinjaStatementHasSpaces.py
index 56f59c8..cf81c2e 100644
--- a/tests/unit/TestJinjaStatementHasSpaces.py
+++ b/tests/unit/TestJinjaStatementHasSpaces.py
@@ -10,11 +10,11 @@ from tests import RunFromText
GOOD_STATEMENT_LINE = '''
-{%- set example='good' -%}
+{%- set example='good' +%}
'''
BAD_STATEMENT_LINE = '''
-{%-set example='bad'-%}
+{%-set example='bad'+%}
'''
class TestLineTooLongRule(unittest.TestCase):
diff --git a/tests/unit/TestJinjaVariableHasSpaces.py b/tests/unit/TestJinjaVariableHasSpaces.py
index 9c537e0..5e27b09 100644
--- a/tests/unit/TestJinjaVariableHasSpaces.py
+++ b/tests/unit/TestJinjaVariableHasSpaces.py
@@ -10,11 +10,11 @@ from tests import RunFromText
GOOD_VARIABLE_LINE = '''
-{{- variable -}}
+{{- variable +}}
'''
BAD_VARIABLE_LINE = '''
-{{-variable-}}
+{{-variable+}}
'''
class TestLineTooLongRule(unittest.TestCase):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 3
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"nose",
"flake8",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
certifi==2021.5.30
charset-normalizer==2.0.12
contextvars==2.4
distro==1.9.0
flake8==5.0.4
idna==3.10
immutables==0.19
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
Jinja2==3.0.3
jmespath==0.10.0
looseversion==1.3.0
MarkupSafe==2.0.1
mccabe==0.7.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
msgpack==1.0.5
nose==1.3.7
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pycryptodomex==3.21.0
pyflakes==2.5.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
PyYAML==6.0.1
pyzmq==25.0.2
requests==2.27.1
salt==3006.6
-e git+https://github.com/roaldnefs/salt-lint.git@025a8e7dc451ed76f34cdb25a73146ab6efe1be1#egg=salt_lint
six==1.17.0
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: salt-lint
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- charset-normalizer==2.0.12
- contextvars==2.4
- distro==1.9.0
- flake8==5.0.4
- idna==3.10
- immutables==0.19
- importlib-metadata==4.2.0
- jinja2==3.0.3
- jmespath==0.10.0
- looseversion==1.3.0
- markupsafe==2.0.1
- mccabe==0.7.0
- msgpack==1.0.5
- nose==1.3.7
- psutil==7.0.0
- pycodestyle==2.9.1
- pycryptodomex==3.21.0
- pyflakes==2.5.0
- pyyaml==6.0.1
- pyzmq==25.0.2
- requests==2.27.1
- salt==3006.6
- six==1.17.0
- urllib3==1.26.20
prefix: /opt/conda/envs/salt-lint
| [
"tests/unit/TestJinjaCommentHasSpaces.py::TestLineTooLongRule::test_comment_positive",
"tests/unit/TestJinjaStatementHasSpaces.py::TestLineTooLongRule::test_statement_positive",
"tests/unit/TestJinjaVariableHasSpaces.py::TestLineTooLongRule::test_statement_positive"
] | [] | [
"tests/unit/TestJinjaCommentHasSpaces.py::TestLineTooLongRule::test_comment_negative",
"tests/unit/TestJinjaStatementHasSpaces.py::TestLineTooLongRule::test_statement_negative",
"tests/unit/TestJinjaVariableHasSpaces.py::TestLineTooLongRule::test_statement_negative"
] | [] | MIT License | 5,531 | 627 | [
"lib/saltlint/rules/JinjaCommentHasSpacesRule.py",
"lib/saltlint/rules/JinjaStatementHasSpacesRule.py",
"lib/saltlint/rules/JinjaVariableHasSpacesRule.py"
] |
|
markusressel__container-app-conf-19 | 64bf901676d19ecae5a78308d89c057e3b511a78 | 2019-10-05 00:40:52 | 64e0448c5a66680391958a2f1ff2b76de255ac2c | diff --git a/container_app_conf/entry/float.py b/container_app_conf/entry/float.py
index 6ddb2ca..ee85223 100644
--- a/container_app_conf/entry/float.py
+++ b/container_app_conf/entry/float.py
@@ -17,6 +17,7 @@
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
+from py_range_parse import Range
from container_app_conf import ConfigEntry
@@ -24,6 +25,11 @@ from container_app_conf import ConfigEntry
class FloatConfigEntry(ConfigEntry):
_example = "3.1415926535"
+ def __init__(self, key_path: [str], example: any = None, description: str or None = None, default: any = None,
+ none_allowed: bool = None, range: Range = None):
+ self.range = range
+ super().__init__(key_path, example, description, default, none_allowed)
+
def _value_to_type(self, value: any) -> float or None:
"""
Tries to permissively convert the given value to a float.
@@ -31,9 +37,13 @@ class FloatConfigEntry(ConfigEntry):
:return: the parsed float value
"""
if isinstance(value, str) and '%' == value[-1]:
- return float(value[:-1]) / 100.0
+ parsed_value = float(value[:-1]) / 100.0
else:
- return float(value)
+ parsed_value = float(value)
+
+ if self.range is not None and parsed_value not in self.range:
+ self._raise_invalid_value(value)
+ return parsed_value
def _type_to_value(self, type: any) -> any:
return float(type)
diff --git a/container_app_conf/entry/int.py b/container_app_conf/entry/int.py
index 3e50310..beb6ad1 100644
--- a/container_app_conf/entry/int.py
+++ b/container_app_conf/entry/int.py
@@ -17,6 +17,7 @@
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
+from py_range_parse import Range
from container_app_conf import ConfigEntry
@@ -24,13 +25,21 @@ from container_app_conf import ConfigEntry
class IntConfigEntry(ConfigEntry):
_example = "42"
+ def __init__(self, key_path: [str], example: any = None, description: str or None = None, default: any = None,
+ none_allowed: bool = None, range: Range = None):
+ self.range = range
+ super().__init__(key_path, example, description, default, none_allowed)
+
def _value_to_type(self, value: any) -> int or None:
"""
Tries to permissively convert the given value to an int.
:param value: the value to parse
:return: the parsed int value
"""
- return int(value)
+ parsed_value = int(value)
+ if self.range is not None and parsed_value not in self.range:
+ self._raise_invalid_value(value)
+ return parsed_value
def _type_to_value(self, type: any) -> any:
return int(type)
| limit range of IntConfigEntry using range parameter
Add a constructor parameter to `IntConfigEntry` and `FloatConfigEntry` to limit valid input to a specific range. | markusressel/container-app-conf | diff --git a/tests/entry_test.py b/tests/entry_test.py
index 820d586..456e6d6 100644
--- a/tests/entry_test.py
+++ b/tests/entry_test.py
@@ -53,18 +53,19 @@ class EntryTest(EntryTestBase):
self.assert_input_output(config_entry, input_output)
def test_int_entry(self):
- config_entry = IntConfigEntry(key_path=["int"])
+ config_entry = IntConfigEntry(key_path=["int"], range=Range(-5, 5))
input_output = [
("5", 5),
(5, 5),
("-3", -3),
- (-3, -3)
+ (-3, -3),
+ (-6, ValueError)
]
self.assert_input_output(config_entry, input_output)
def test_float_entry(self):
- config_entry = FloatConfigEntry(key_path=["float"])
+ config_entry = FloatConfigEntry(key_path=["float"], range=Range(-10.0, 10))
input_output = [
("5", 5.0),
(5, 5.0),
@@ -72,7 +73,8 @@ class EntryTest(EntryTestBase):
(-3.0, -3.0),
(1.2, 1.2),
("1.2", 1.2),
- ("3%", 0.03)
+ ("3%", 0.03),
+ (-20.0, ValueError)
]
self.assert_input_output(config_entry, input_output)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 2
} | 3.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
-e git+https://github.com/markusressel/container-app-conf.git@64bf901676d19ecae5a78308d89c057e3b511a78#egg=container_app_conf
importlib-metadata==4.8.3
iniconfig==1.1.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
py-range-parse==1.0.5
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.9.0.post0
pytimeparse==1.1.8
PyYAML==6.0.1
six==1.17.0
toml==0.10.2
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: container-app-conf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- py-range-parse==1.0.5
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- pytimeparse==1.1.8
- pyyaml==6.0.1
- six==1.17.0
- toml==0.10.2
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/container-app-conf
| [
"tests/entry_test.py::EntryTest::test_float_entry",
"tests/entry_test.py::EntryTest::test_int_entry"
] | [] | [
"tests/entry_test.py::EntryTest::test_bool_entry",
"tests/entry_test.py::EntryTest::test_date_entry",
"tests/entry_test.py::EntryTest::test_directory_entry",
"tests/entry_test.py::EntryTest::test_file_entry",
"tests/entry_test.py::EntryTest::test_range_entry",
"tests/entry_test.py::EntryTest::test_timedelta_entry"
] | [] | MIT License | 5,533 | 802 | [
"container_app_conf/entry/float.py",
"container_app_conf/entry/int.py"
] |
|
markusressel__container-app-conf-22 | 64e0448c5a66680391958a2f1ff2b76de255ac2c | 2019-10-05 20:33:22 | 64e0448c5a66680391958a2f1ff2b76de255ac2c | diff --git a/container_app_conf/entry/string.py b/container_app_conf/entry/string.py
index ce12615..e2d2a6a 100644
--- a/container_app_conf/entry/string.py
+++ b/container_app_conf/entry/string.py
@@ -17,6 +17,7 @@
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
+import re
from container_app_conf import ConfigEntry
@@ -24,6 +25,11 @@ from container_app_conf import ConfigEntry
class StringConfigEntry(ConfigEntry):
_example = "text"
+ def __init__(self, key_path: [str], example: any = None, description: str or None = None, default: any = None,
+ none_allowed: bool = None, regex: str = None):
+ self.regex = re.compile(regex) if regex is not None else None
+ super().__init__(key_path, example, description, default, none_allowed)
+
def _value_to_type(self, value: any) -> str or None:
"""
Converts the given type to the expected type
@@ -35,4 +41,8 @@ class StringConfigEntry(ConfigEntry):
if s.lower() in ['none', 'null', 'nil']:
return None
+ if self.regex is not None:
+ if not self.regex.match(s):
+ self._raise_invalid_value(s)
+
return s
| Restrict valid input of StringConfigEntry using regex
As the title states. | markusressel/container-app-conf | diff --git a/tests/entry_test.py b/tests/entry_test.py
index 80c6a8c..ab36331 100644
--- a/tests/entry_test.py
+++ b/tests/entry_test.py
@@ -48,6 +48,17 @@ class EntryTest(EntryTestBase):
self.assert_input_output(config_entry, input_output)
+ def test_string_entry_regex(self):
+ config_entry = StringConfigEntry(key_path=["string"], none_allowed=True, regex="^[0-9]*$")
+
+ input_output = [
+ ("5", "5"),
+ ("hello", ValueError),
+ ("$stuff=)(&/%$§", ValueError),
+ ]
+
+ self.assert_input_output(config_entry, input_output)
+
def test_regex_entry(self):
config_entry = RegexConfigEntry(key_path=["regex"], none_allowed=True)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 1
} | 3.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
-e git+https://github.com/markusressel/container-app-conf.git@64e0448c5a66680391958a2f1ff2b76de255ac2c#egg=container_app_conf
exceptiongroup==1.2.2
importlib-metadata==6.7.0
iniconfig==2.0.0
packaging==24.0
pluggy==1.2.0
py-range-parse==1.0.5
pytest==7.4.4
python-dateutil==2.9.0.post0
pytimeparse==1.1.8
PyYAML==6.0.1
six==1.17.0
toml==0.10.2
tomli==2.0.1
typing_extensions==4.7.1
zipp==3.15.0
| name: container-app-conf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- packaging==24.0
- pluggy==1.2.0
- py-range-parse==1.0.5
- pytest==7.4.4
- python-dateutil==2.9.0.post0
- pytimeparse==1.1.8
- pyyaml==6.0.1
- six==1.17.0
- toml==0.10.2
- tomli==2.0.1
- typing-extensions==4.7.1
- zipp==3.15.0
prefix: /opt/conda/envs/container-app-conf
| [
"tests/entry_test.py::EntryTest::test_string_entry_regex"
] | [] | [
"tests/entry_test.py::EntryTest::test_bool_entry",
"tests/entry_test.py::EntryTest::test_date_entry",
"tests/entry_test.py::EntryTest::test_directory_entry",
"tests/entry_test.py::EntryTest::test_file_entry",
"tests/entry_test.py::EntryTest::test_float_entry",
"tests/entry_test.py::EntryTest::test_int_entry",
"tests/entry_test.py::EntryTest::test_range_entry",
"tests/entry_test.py::EntryTest::test_regex_entry",
"tests/entry_test.py::EntryTest::test_string_entry",
"tests/entry_test.py::EntryTest::test_timedelta_entry"
] | [] | MIT License | 5,536 | 348 | [
"container_app_conf/entry/string.py"
] |
|
MycroftAI__lingua-franca-32 | 6c43ca67438f14891930ec14e56572f2c2815427 | 2019-10-06 16:19:53 | 6c43ca67438f14891930ec14e56572f2c2815427 | diff --git a/lingua_franca/lang/parse_es.py b/lingua_franca/lang/parse_es.py
index bebda23..d2ebea9 100644
--- a/lingua_franca/lang/parse_es.py
+++ b/lingua_franca/lang/parse_es.py
@@ -20,7 +20,8 @@
"""
from datetime import datetime
from dateutil.relativedelta import relativedelta
-from lingua_franca.lang.parse_common import is_numeric, look_for_fractions
+from lingua_franca.lang.format_es import pronounce_number_es
+from lingua_franca.lang.parse_common import *
from lingua_franca.lang.common_data_es import _ARTICLES_ES, _NUM_STRING_ES
@@ -57,7 +58,12 @@ def isFractional_es(input_str):
return False
-def extractnumber_es(text):
+# TODO: short_scale and ordinals don't do anything here.
+# The parameters are present in the function signature for API compatibility
+# reasons.
+#
+# Returns incorrect output on certain fractional phrases like, "cuarto de dos"
+def extractnumber_es(text, short_scale=True, ordinals=False):
"""
This function prepares the given text for parsing by making
numbers consistent, getting rid of contractions, etc.
@@ -108,7 +114,7 @@ def extractnumber_es(text):
result = 0
# handle fractions
if next_word != "avos":
- result += val
+ result = val
else:
result = float(result) / float(val)
@@ -263,6 +269,24 @@ def es_number_parse(words, i):
return es_number(i)
+def extract_numbers_es(text, short_scale=True, ordinals=False):
+ """
+ Takes in a string and extracts a list of numbers.
+
+ Args:
+ text (str): the string to extract a number from
+ short_scale (bool): Use "short scale" or "long scale" for large
+ numbers -- over a million. The default is short scale, which
+ is now common in most English speaking countries.
+ See https://en.wikipedia.org/wiki/Names_of_large_numbers
+ ordinals (bool): consider ordinal numbers, e.g. third=3 instead of 1/3
+ Returns:
+ list: list of extracted numbers as floats
+ """
+ return extract_numbers_generic(text, pronounce_number_es, extractnumber_es,
+ short_scale=short_scale, ordinals=ordinals)
+
+
def normalize_es(text, remove_articles):
""" Spanish string normalization """
diff --git a/lingua_franca/parse.py b/lingua_franca/parse.py
index 69b803d..bcb521f 100644
--- a/lingua_franca/parse.py
+++ b/lingua_franca/parse.py
@@ -105,6 +105,8 @@ def extract_numbers(text, short_scale=True, ordinals=False, lang=None):
return extract_numbers_it(text, short_scale, ordinals)
elif lang_code == "da":
return extract_numbers_da(text, short_scale, ordinals)
+ elif lang_code == "es":
+ return extract_numbers_es(text, short_scale, ordinals)
# TODO: extractnumbers_xx for other languages
_log_unsupported_language(lang_code,
['en', 'it', 'fr', 'de', 'da'])
@@ -145,8 +147,9 @@ def extract_number(text, short_scale=True, ordinals=False, lang=None):
return extractnumber_de(text)
elif lang_code == "da":
return extractnumber_da(text)
+ elif lang_code == "es":
+ return extract_numbers_es(text, short_scale, ordinals)
elif lang_code == "nl":
- print("EXTRACTING NL")
return extractnumber_nl(text, short_scale=short_scale,
ordinals=ordinals)
# TODO: extractnumber_xx for other languages
| PR#2347 Fix extractnumber_es, add extract_numbers_es
https://github.com/MycroftAI/mycroft-core/pull/2347
- Fix bug causing extractnumber_es to return a sum instead of a list
- Add Spanish parser to extract_numbers and extract_number
| MycroftAI/lingua-franca | diff --git a/test/test_parse_es.py b/test/test_parse_es.py
index cb92e31..34e7472 100644
--- a/test/test_parse_es.py
+++ b/test/test_parse_es.py
@@ -16,13 +16,14 @@
#
import unittest
-from lingua_franca.parse import normalize
+from lingua_franca.parse import normalize, extract_numbers, extract_number
class TestNormalize(unittest.TestCase):
"""
Test cases for Spanish parsing
"""
+
def test_articles_es(self):
self.assertEqual(normalize("esta es la prueba", lang="es",
remove_articles=True),
@@ -76,6 +77,15 @@ class TestNormalize(unittest.TestCase):
lang="es"),
"999999")
+ def test_extract_number_es(self):
+ self.assertEqual(sorted(extract_numbers(
+ "1 7 cuatro catorce ocho 157", lang='es')), [1, 4, 7, 8, 14, 157])
+ self.assertEqual(sorted(extract_numbers(
+ "1 7 cuatro albuquerque naranja John Doe catorce ocho 157",
+ lang='es')), [1, 4, 7, 8, 14, 157])
+ self.assertEqual(extract_number("seis punto dos", lang='es'), 6.2)
+ self.assertEqual(extract_numbers("un medio", lang='es'), [0.5])
+
if __name__ == "__main__":
unittest.main()
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 2
} | 0.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
importlib-metadata==4.8.3
iniconfig==1.1.1
-e git+https://github.com/MycroftAI/lingua-franca.git@6c43ca67438f14891930ec14e56572f2c2815427#egg=lingua_franca
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.6.0
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: lingua-franca
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.6.0
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/lingua-franca
| [
"test/test_parse_es.py::TestNormalize::test_extract_number_es"
] | [] | [
"test/test_parse_es.py::TestNormalize::test_articles_es",
"test/test_parse_es.py::TestNormalize::test_numbers_es"
] | [] | Apache License 2.0 | 5,542 | 912 | [
"lingua_franca/lang/parse_es.py",
"lingua_franca/parse.py"
] |
|
asottile__pyupgrade-208 | a5dc2d6c367df8bdac783c4d00cf6dc459f797cf | 2019-10-07 02:53:50 | 13abf9d4693086683ccd6e95252a5c05377ae5c5 | diff --git a/pyupgrade.py b/pyupgrade.py
index d660a6d..fd897bc 100644
--- a/pyupgrade.py
+++ b/pyupgrade.py
@@ -1184,6 +1184,7 @@ class FindPy3Plus(ast.NodeVisitor):
self.six_add_metaclass = set() # type: Set[ast.ClassDef]
self.six_b = set() # type: Set[ast.Call]
self.six_calls = {} # type: Dict[Offset, ast.Call]
+ self.six_iter = {} # type: Dict[Offset, ast.Call]
self._previous_node = None # type: Optional[ast.AST]
self.six_raises = {} # type: Dict[Offset, ast.Call]
self.six_remove_decorators = set() # type: Set[Offset]
@@ -1384,6 +1385,18 @@ class FindPy3Plus(ast.NodeVisitor):
self.six_b.add(_ast_to_offset(node))
elif self._is_six(node.func, SIX_CALLS) and not _starargs(node):
self.six_calls[_ast_to_offset(node)] = node
+ elif (
+ isinstance(node.func, ast.Name) and
+ node.func.id == 'next' and
+ not _starargs(node) and
+ isinstance(node.args[0], ast.Call) and
+ self._is_six(
+ node.args[0].func,
+ ('iteritems', 'iterkeys', 'itervalues'),
+ ) and
+ not _starargs(node.args[0])
+ ):
+ self.six_iter[_ast_to_offset(node.args[0])] = node.args[0]
elif (
isinstance(self._previous_node, ast.Expr) and
self._is_six(node.func, SIX_RAISES) and
@@ -1799,6 +1812,7 @@ def _fix_py3_plus(contents_text): # type: (str) -> str
visitor.six_add_metaclass,
visitor.six_b,
visitor.six_calls,
+ visitor.six_iter,
visitor.six_raises,
visitor.six_remove_decorators,
visitor.six_simple,
@@ -1865,6 +1879,13 @@ def _fix_py3_plus(contents_text): # type: (str) -> str
):
func_args, end = _parse_call_args(tokens, j)
_replace_call(tokens, i, end, func_args, SIX_B_TMPL)
+ elif token.offset in visitor.six_iter:
+ j = _find_open_paren(tokens, i)
+ func_args, end = _parse_call_args(tokens, j)
+ call = visitor.six_iter[token.offset]
+ assert isinstance(call.func, (ast.Name, ast.Attribute))
+ template = 'iter({})'.format(_get_tmpl(SIX_CALLS, call.func))
+ _replace_call(tokens, i, end, func_args, template)
elif token.offset in visitor.six_calls:
j = _find_open_paren(tokens, i)
func_args, end = _parse_call_args(tokens, j)
| next(six.itervalues(d)) is rewritten to next(d.values())
```console
$ cat example.py
import six
print(next(six.itervalues({1: 2})))
$ python example.py
2
```
```console
$ pyupgrade --py36-plus example.py
Rewriting example.py
```
```console
$ cat example.py
import six
print(next({1: 2}.values()))
$ python example.py
Traceback (most recent call last):
File "example.py", line 4, in <module>
print(next({1: 2}.values()))
TypeError: 'dict_values' object is not an iterator
```
| asottile/pyupgrade | diff --git a/tests/six_test.py b/tests/six_test.py
index a139e7f..ae53419 100644
--- a/tests/six_test.py
+++ b/tests/six_test.py
@@ -337,6 +337,16 @@ def test_fix_six_noop(s):
id='add_metaclass, indented',
),
+ pytest.param(
+ 'print(six.itervalues({1:2}))\n',
+ 'print({1:2}.values())\n',
+ id='six.itervalues',
+ ),
+ pytest.param(
+ 'print(next(six.itervalues({1:2})))\n',
+ 'print(next(iter({1:2}.values())))\n',
+ id='six.itervalues inside next(...)',
+ ),
),
)
def test_fix_six(s, expected):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 1.24 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
-e git+https://github.com/asottile/pyupgrade.git@a5dc2d6c367df8bdac783c4d00cf6dc459f797cf#egg=pyupgrade
tokenize_rt==6.1.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: pyupgrade
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- tokenize-rt==6.1.0
prefix: /opt/conda/envs/pyupgrade
| [
"tests/six_test.py::test_fix_six[six.itervalues"
] | [] | [
"tests/six_test.py::test_fix_six_noop[x",
"tests/six_test.py::test_fix_six_noop[from",
"tests/six_test.py::test_fix_six_noop[@mydec\\nclass",
"tests/six_test.py::test_fix_six_noop[print(six.raise_from(exc,",
"tests/six_test.py::test_fix_six_noop[print(six.b(\"\\xa3\"))]",
"tests/six_test.py::test_fix_six_noop[print(six.b(",
"tests/six_test.py::test_fix_six_noop[class",
"tests/six_test.py::test_fix_six_noop[six.reraise(*err)]",
"tests/six_test.py::test_fix_six_noop[six.b(*a)]",
"tests/six_test.py::test_fix_six_noop[six.u(*a)]",
"tests/six_test.py::test_fix_six_noop[@six.add_metaclass(*a)\\nclass",
"tests/six_test.py::test_fix_six_noop[(\\n",
"tests/six_test.py::test_fix_six[isinstance(s,",
"tests/six_test.py::test_fix_six[weird",
"tests/six_test.py::test_fix_six[issubclass(tp,",
"tests/six_test.py::test_fix_six[STRING_TYPES",
"tests/six_test.py::test_fix_six[from",
"tests/six_test.py::test_fix_six[six.b(\"123\")-b\"123\"]",
"tests/six_test.py::test_fix_six[six.b(r\"123\")-br\"123\"]",
"tests/six_test.py::test_fix_six[six.b(\"\\\\x12\\\\xef\")-b\"\\\\x12\\\\xef\"]",
"tests/six_test.py::test_fix_six[six.byte2int(b\"f\")-b\"f\"[0]]",
"tests/six_test.py::test_fix_six[@six.python_2_unicode_compatible\\nclass",
"tests/six_test.py::test_fix_six[@six.python_2_unicode_compatible\\n@other_decorator\\nclass",
"tests/six_test.py::test_fix_six[six.get_unbound_method(meth)\\n-meth\\n]",
"tests/six_test.py::test_fix_six[six.indexbytes(bs,",
"tests/six_test.py::test_fix_six[six.assertCountEqual(\\n",
"tests/six_test.py::test_fix_six[six.raise_from(exc,",
"tests/six_test.py::test_fix_six[six.reraise(tp,",
"tests/six_test.py::test_fix_six[six.reraise(\\n",
"tests/six_test.py::test_fix_six[class",
"tests/six_test.py::test_fix_six[basic",
"tests/six_test.py::test_fix_six[add_metaclass,",
"tests/six_test.py::test_fix_six[six.itervalues]",
"tests/six_test.py::test_fix_base_classes[import",
"tests/six_test.py::test_fix_base_classes[from",
"tests/six_test.py::test_fix_base_classes[class",
"tests/six_test.py::test_fix_base_classes_py3only[class",
"tests/six_test.py::test_fix_base_classes_py3only[from"
] | [] | MIT License | 5,546 | 713 | [
"pyupgrade.py"
] |
|
DOV-Vlaanderen__pydov-211 | 325fad8fd06d5b2077366869c2f5a0017d59941b | 2019-10-08 09:49:54 | 325fad8fd06d5b2077366869c2f5a0017d59941b | diff --git a/pydov/util/query.py b/pydov/util/query.py
index f7cd9a2..dae02b5 100644
--- a/pydov/util/query.py
+++ b/pydov/util/query.py
@@ -7,7 +7,7 @@ from owslib.fes import (
)
-class PropertyInList(Or):
+class PropertyInList(object):
"""Filter expression to test whether a given property has one of the
values from a list.
@@ -33,19 +33,30 @@ class PropertyInList(Or):
Raises
------
ValueError
- If the given list does not contain at least two distinct items.
+ If the given list does not contain at least a single item.
"""
if not isinstance(lst, list) and not isinstance(lst, set):
raise ValueError('list should be of type "list" or "set"')
- if len(set(lst)) < 2:
- raise ValueError('list should contain at least two different '
- 'elements.')
+ if len(set(lst)) < 1:
+ raise ValueError('list should contain at least a single item')
+ elif len(set(lst)) == 1:
+ self.query = PropertyIsEqualTo(propertyname, set(lst).pop())
+ else:
+ self.query = Or(
+ [PropertyIsEqualTo(propertyname, i) for i in set(lst)])
- super(PropertyInList, self).__init__(
- [PropertyIsEqualTo(propertyname, i) for i in set(lst)]
- )
+ def toXML(self):
+ """Return the XML representation of the PropertyInList query.
+
+ Returns
+ -------
+ xml : etree.ElementTree
+ XML representation of the PropertyInList
+
+ """
+ return self.query.toXML()
class Join(PropertyInList):
@@ -83,9 +94,8 @@ class Join(PropertyInList):
If `using` is None and the `on` column is not present in the
dataframe.
- If the dataframe does not contain at least two different values
- in the `using` column. A Join is probably overkill here,
- use PropertyIsEqualTo instead.
+ If the dataframe does not contain at least a single non-null value
+ in the `using` column.
"""
if using is None:
@@ -98,8 +108,8 @@ class Join(PropertyInList):
value_list = list(dataframe[using].dropna().unique())
- if len(set(value_list)) < 2:
- raise ValueError("dataframe should contain at least two "
- "different values in column '{}'.".format(using))
+ if len(set(value_list)) < 1:
+ raise ValueError("dataframe should contain at least a single "
+ "value in column '{}'.".format(using))
super(Join, self).__init__(on, value_list)
| Support Join with a dataframe containing a single element too
* PyDOV version: master
* Python version: 3.6
* Operating System: Windows 10
### Description
When you use Join with a dataframe with only one (unique) element, you get an error. It would be nice if the Join would work with a dataframe containing only a single element too.
### What I Did
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-41-455ef57ac5bc> in <module>()
4 gwl = GrondwaterLocatieSearch()
5
----> 6 gwl.search(Join(df, 'pkey_grondwaterlocatie'))
c:\projecten\pydov\pydov_git\pydov\util\query.py in __init__(self, dataframe, on, using)
101 if len(set(value_list)) < 2:
102 raise ValueError("dataframe should contain at least two "
--> 103 "different values in column '{}'.".format(using))
104
105 super(Join, self).__init__(on, value_list)
ValueError: dataframe should contain at least two different values in column 'pkey_grondwaterlocatie'.
```
| DOV-Vlaanderen/pydov | diff --git a/tests/test_util_query.py b/tests/test_util_query.py
index c14c1ed..750acc3 100644
--- a/tests/test_util_query.py
+++ b/tests/test_util_query.py
@@ -1,5 +1,6 @@
"""Module grouping tests for the pydov.util.query module."""
import pandas as pd
+import numpy as np
import pytest
from pydov.util.query import (
@@ -67,26 +68,63 @@ class TestPropertyInList(object):
assert len(l_output) == 0
- def test_tooshort(self):
+ def test_list_single(self):
"""Test the PropertyInList expression with a list containing
a single item.
- Test whether a ValueError is raised.
+ Test whether the generated query is correct and does contain only a
+ single PropertyIsEqualTo.
"""
- with pytest.raises(ValueError):
- l = ['a']
- PropertyInList('methode', l)
+ l = ['a']
+
+ query = PropertyInList('methode', l)
+ xml = query.toXML()
+
+ assert xml.tag == '{http://www.opengis.net/ogc}PropertyIsEqualTo'
+
+ propertyname = xml.find('./{http://www.opengis.net/ogc}PropertyName')
+ assert propertyname.text == 'methode'
- def test_tooshort_duplicate(self):
+ literal = xml.find('./{http://www.opengis.net/ogc}Literal')
+ assert literal.text in l
+
+ l.remove(literal.text)
+ assert len(l) == 0
+
+ def test_list_single_duplicate(self):
"""Test the PropertyInList expression with a list containing
- a two identical items.
+ a single duplicated item.
+
+ Test whether the generated query is correct and does contain only a
+ single PropertyIsEqualTo.
+
+ """
+ l = ['a', 'a']
+ l_output = ['a']
+
+ query = PropertyInList('methode', l)
+ xml = query.toXML()
+
+ assert xml.tag == '{http://www.opengis.net/ogc}PropertyIsEqualTo'
+
+ propertyname = xml.find('./{http://www.opengis.net/ogc}PropertyName')
+ assert propertyname.text == 'methode'
+
+ literal = xml.find('./{http://www.opengis.net/ogc}Literal')
+ assert literal.text in l_output
+
+ l_output.remove(literal.text)
+ assert len(l_output) == 0
+
+ def test_emptylist(self):
+ """Test the PropertyInList expression with an empty list.
Test whether a ValueError is raised.
"""
with pytest.raises(ValueError):
- l = ['a', 'a']
+ l = []
PropertyInList('methode', l)
def test_nolist(self):
@@ -194,38 +232,77 @@ class TestJoin(object):
Join(df, 'pkey_sondering')
- def test_tooshort(self):
+ def test_single(self):
"""Test the Join expression with a dataframe containing a single row.
- Test whether a ValueError is raised.
+ Test whether the generated query is correct and does contain only a
+ single PropertyIsEqualTo.
"""
- with pytest.raises(ValueError):
- l = ['https://www.dov.vlaanderen.be/data/boring/1986-068853']
+ l = ['https://www.dov.vlaanderen.be/data/boring/1986-068853']
- df = pd.DataFrame({
- 'pkey_boring': pd.Series(l),
- 'diepte_tot_m': pd.Series([10])
- })
+ df = pd.DataFrame({
+ 'pkey_boring': pd.Series(l),
+ 'diepte_tot_m': pd.Series([10])
+ })
- Join(df, 'pkey_boring')
+ query = Join(df, 'pkey_boring')
+ xml = query.toXML()
+
+ assert xml.tag == '{http://www.opengis.net/ogc}PropertyIsEqualTo'
+
+ propertyname = xml.find('./{http://www.opengis.net/ogc}PropertyName')
+ assert propertyname.text == 'pkey_boring'
- def test_tooshort_duplicate(self):
+ literal = xml.find('./{http://www.opengis.net/ogc}Literal')
+ assert literal.text in l
+
+ l.remove(literal.text)
+ assert len(l) == 0
+
+ def test_single_duplicate(self):
"""Test the Join expression with a dataframe containing two
identical keys.
- Test whether a ValueError is raised.
+ Test whether the generated query is correct and does contain only a
+ single PropertyIsEqualTo.
"""
- with pytest.raises(ValueError):
- l = ['https://www.dov.vlaanderen.be/data/boring/1986-068853',
- 'https://www.dov.vlaanderen.be/data/boring/1986-068853']
+ l = ['https://www.dov.vlaanderen.be/data/boring/1986-068853',
+ 'https://www.dov.vlaanderen.be/data/boring/1986-068853']
+ l_output = ['https://www.dov.vlaanderen.be/data/boring/1986-068853']
- df = pd.DataFrame({
- 'pkey_boring': pd.Series(l),
- 'diepte_tot_m': pd.Series([10, 20])
- })
+ df = pd.DataFrame({
+ 'pkey_boring': pd.Series(l),
+ 'diepte_tot_m': pd.Series([10, 20])
+ })
+
+ query = Join(df, 'pkey_boring')
+ xml = query.toXML()
+
+ assert xml.tag == '{http://www.opengis.net/ogc}PropertyIsEqualTo'
+
+ propertyname = xml.find('./{http://www.opengis.net/ogc}PropertyName')
+ assert propertyname.text == 'pkey_boring'
+
+ literal = xml.find('./{http://www.opengis.net/ogc}Literal')
+ assert literal.text in l_output
+
+ l_output.remove(literal.text)
+ assert len(l_output) == 0
+
+ def test_empty(self):
+ """Test the Join expression with an empty dataframe.
+
+ Test whether a ValueError is raised
+
+ """
+ df = pd.DataFrame({
+ 'pkey_boring': [np.nan, np.nan],
+ 'diepte_tot_m': pd.Series([10, 20])
+ })
+ with pytest.raises(ValueError):
Join(df, 'pkey_boring')
def test_on(self):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 1
} | 0.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
dataclasses==0.8
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
lxml==5.3.1
numpy==1.19.5
OWSLib==0.31.0
packaging==21.3
pandas==1.1.5
pluggy==1.0.0
py==1.11.0
-e git+https://github.com/DOV-Vlaanderen/pydov.git@325fad8fd06d5b2077366869c2f5a0017d59941b#egg=pydov
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.9.0.post0
pytz==2025.2
PyYAML==6.0.1
requests==2.27.1
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: pydov
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- dataclasses==0.8
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- lxml==5.3.1
- numpy==1.19.5
- owslib==0.31.0
- packaging==21.3
- pandas==1.1.5
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyyaml==6.0.1
- requests==2.27.1
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/pydov
| [
"tests/test_util_query.py::TestPropertyInList::test_list_single",
"tests/test_util_query.py::TestPropertyInList::test_list_single_duplicate",
"tests/test_util_query.py::TestJoin::test_single",
"tests/test_util_query.py::TestJoin::test_single_duplicate"
] | [] | [
"tests/test_util_query.py::TestPropertyInList::test",
"tests/test_util_query.py::TestPropertyInList::test_duplicate",
"tests/test_util_query.py::TestPropertyInList::test_emptylist",
"tests/test_util_query.py::TestPropertyInList::test_nolist",
"tests/test_util_query.py::TestJoin::test",
"tests/test_util_query.py::TestJoin::test_duplicate",
"tests/test_util_query.py::TestJoin::test_wrongcolumn",
"tests/test_util_query.py::TestJoin::test_empty",
"tests/test_util_query.py::TestJoin::test_on",
"tests/test_util_query.py::TestJoin::test_using"
] | [] | MIT License | 5,553 | 655 | [
"pydov/util/query.py"
] |
|
scoutapp__scout_apm_python-269 | 5b5dec7714dad3a723cbc1b8ade970d08426944a | 2019-10-09 17:47:13 | e879a15799a15850061b1f41e8e878b4148b7305 | diff --git a/src/scout_apm/core/config.py b/src/scout_apm/core/config.py
index 200968d..5d77b7b 100644
--- a/src/scout_apm/core/config.py
+++ b/src/scout_apm/core/config.py
@@ -233,6 +233,7 @@ class ScoutConfigDefaults(object):
"name": "",
"revision_sha": self._git_revision_sha(),
"scm_subdirectory": "",
+ "uri_reporting": "filtered_params",
}
def _git_revision_sha(self):
diff --git a/src/scout_apm/core/web_requests.py b/src/scout_apm/core/web_requests.py
index 4e2b89e..52ff42d 100644
--- a/src/scout_apm/core/web_requests.py
+++ b/src/scout_apm/core/web_requests.py
@@ -1,5 +1,6 @@
# coding=utf-8
from scout_apm.compat import urlencode
+from scout_apm.core.context import AgentContext
# Originally derived from:
# 1. Rails:
@@ -35,6 +36,8 @@ FILTER_PARAMETERS = frozenset(
def create_filtered_path(path, query_params):
+ if AgentContext.instance.config.value("uri_reporting") == "path":
+ return path
filtered_params = sorted(
(
(k, "[FILTERED]" if k.lower() in FILTER_PARAMETERS else v)
| Config to capture query string, not just path
When a request comes in, we capture `request.path`, which doesn't appear to capture any query string.
We should match the Ruby agent:
* Default to capturing the entire path and query string
* Config to not capture query string https://docs.scoutapm.com/#uri_reporting | scoutapp/scout_apm_python | diff --git a/tests/unit/core/test_web_requests.py b/tests/unit/core/test_web_requests.py
index af36d4e..ffe9f0c 100644
--- a/tests/unit/core/test_web_requests.py
+++ b/tests/unit/core/test_web_requests.py
@@ -3,9 +3,17 @@ from __future__ import absolute_import, division, print_function, unicode_litera
import pytest
+from scout_apm.core.context import AgentContext
from scout_apm.core.web_requests import create_filtered_path
[email protected](autouse=True)
+def force_context():
+ # Since this function accesses the context, it needs to always be built
+ # TODO: remove when moving to a sensible singleton pattern
+ AgentContext.build()
+
+
@pytest.mark.parametrize(
"path, params, expected",
[
@@ -25,3 +33,14 @@ from scout_apm.core.web_requests import create_filtered_path
)
def test_create_filtered_path(path, params, expected):
assert create_filtered_path(path, params) == expected
+
+
[email protected]("path, params", [("/", []), ("/", [("foo", "ignored")])])
+def test_create_filtered_path_path(path, params):
+ # If config filtered_params is set to "path", expect we always get the path
+ # back
+ AgentContext.instance.config.set(uri_reporting="path")
+ try:
+ assert create_filtered_path(path, params) == path
+ finally:
+ AgentContext.instance.config.reset_all()
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 2
} | 2.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"black",
"flake8",
"isort",
"tox",
"bottle",
"celery",
"Django",
"elasticsearch",
"flask",
"flask-sqlalchemy",
"jinja2",
"mock",
"pymongo",
"pyramid",
"pytest",
"pytest-travis-fold",
"pytest-cov",
"redis",
"six",
"sqlalchemy",
"starlette",
"urllib3",
"webtest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | amqp==5.3.1
anyio==3.7.1
asgiref==3.7.2
async-timeout==4.0.3
beautifulsoup4==4.13.3
billiard==3.6.4.0
black==23.3.0
bottle==0.13.2
cached-property==1.5.2
celery==5.2.7
certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
click==8.1.8
click-didyoumean==0.3.1
click-plugins==1.1.1
click-repl==0.3.0
coverage==7.2.7
distlib==0.3.9
Django==3.2.25
dnspython==2.3.0
elastic-transport==8.13.1
elasticsearch==8.14.0
exceptiongroup==1.2.2
filelock==3.12.2
flake8==5.0.4
Flask==2.2.5
Flask-SQLAlchemy==3.0.5
greenlet==3.1.1
hupper==1.12.1
idna==3.10
importlib-metadata==4.2.0
iniconfig==2.0.0
isort==5.11.5
itsdangerous==2.1.2
Jinja2==3.1.6
kombu==5.2.4
MarkupSafe==2.1.5
mccabe==0.7.0
mock==5.2.0
mypy-extensions==1.0.0
packaging==24.0
PasteDeploy==3.1.0
pathspec==0.11.2
plaster==1.1.2
plaster-pastedeploy==1.0.1
platformdirs==2.6.2
pluggy==1.2.0
prompt_toolkit==3.0.48
psutil==7.0.0
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
pymongo==4.7.3
pyramid==2.0.2
pytest==7.4.4
pytest-cov==4.1.0
pytest-travis-fold==1.3.0
pytz==2025.2
redis==5.0.8
requests==2.31.0
-e git+https://github.com/scoutapp/scout_apm_python.git@5b5dec7714dad3a723cbc1b8ade970d08426944a#egg=scout_apm
six==1.17.0
sniffio==1.3.1
soupsieve==2.4.1
SQLAlchemy==2.0.40
sqlparse==0.4.4
starlette==0.29.0
tomli==2.0.1
tox==3.28.0
translationstring==1.4
typed-ast==1.5.5
typing_extensions==4.7.1
urllib3==2.0.7
venusian==3.1.1
vine==5.1.0
virtualenv==20.16.2
waitress==2.1.2
wcwidth==0.2.13
WebOb==1.8.9
WebTest==3.0.1
Werkzeug==2.2.3
zipp==3.15.0
zope.deprecation==5.0
zope.interface==6.4.post2
| name: scout_apm_python
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- amqp==5.3.1
- anyio==3.7.1
- asgiref==3.7.2
- async-timeout==4.0.3
- beautifulsoup4==4.13.3
- billiard==3.6.4.0
- black==23.3.0
- bottle==0.13.2
- cached-property==1.5.2
- celery==5.2.7
- charset-normalizer==3.4.1
- click==8.1.8
- click-didyoumean==0.3.1
- click-plugins==1.1.1
- click-repl==0.3.0
- coverage==7.2.7
- distlib==0.3.9
- django==3.2.25
- dnspython==2.3.0
- elastic-transport==8.13.1
- elasticsearch==8.14.0
- exceptiongroup==1.2.2
- filelock==3.12.2
- flake8==5.0.4
- flask==2.2.5
- flask-sqlalchemy==3.0.5
- greenlet==3.1.1
- hupper==1.12.1
- idna==3.10
- importlib-metadata==4.2.0
- iniconfig==2.0.0
- isort==5.11.5
- itsdangerous==2.1.2
- jinja2==3.1.6
- kombu==5.2.4
- markupsafe==2.1.5
- mccabe==0.7.0
- mock==5.2.0
- mypy-extensions==1.0.0
- packaging==24.0
- pastedeploy==3.1.0
- pathspec==0.11.2
- plaster==1.1.2
- plaster-pastedeploy==1.0.1
- platformdirs==2.6.2
- pluggy==1.2.0
- prompt-toolkit==3.0.48
- psutil==7.0.0
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pymongo==4.7.3
- pyramid==2.0.2
- pytest==7.4.4
- pytest-cov==4.1.0
- pytest-travis-fold==1.3.0
- pytz==2025.2
- redis==5.0.8
- requests==2.31.0
- six==1.17.0
- sniffio==1.3.1
- soupsieve==2.4.1
- sqlalchemy==2.0.40
- sqlparse==0.4.4
- starlette==0.29.0
- tomli==2.0.1
- tox==3.28.0
- translationstring==1.4
- typed-ast==1.5.5
- typing-extensions==4.7.1
- urllib3==2.0.7
- venusian==3.1.1
- vine==5.1.0
- virtualenv==20.16.2
- waitress==2.1.2
- wcwidth==0.2.13
- webob==1.8.9
- webtest==3.0.1
- werkzeug==2.2.3
- zipp==3.15.0
- zope-deprecation==5.0
- zope-interface==6.4.post2
prefix: /opt/conda/envs/scout_apm_python
| [
"tests/unit/core/test_web_requests.py::test_create_filtered_path_path[/-params1]"
] | [] | [
"tests/unit/core/test_web_requests.py::test_create_filtered_path[/-params0-/]",
"tests/unit/core/test_web_requests.py::test_create_filtered_path[/foo/-params1-/foo/]",
"tests/unit/core/test_web_requests.py::test_create_filtered_path[/-params2-/?bar=1]",
"tests/unit/core/test_web_requests.py::test_create_filtered_path[/-params3-/?bar=1&baz=2]",
"tests/unit/core/test_web_requests.py::test_create_filtered_path[/-params4-/?bar=1&bar=2]",
"tests/unit/core/test_web_requests.py::test_create_filtered_path[/-params5-/?password=%5BFILTERED%5D]",
"tests/unit/core/test_web_requests.py::test_create_filtered_path[/-params6-/?PASSWORD=%5BFILTERED%5D]",
"tests/unit/core/test_web_requests.py::test_create_filtered_path[/-params7-/?password=%5BFILTERED%5D&password=%5BFILTERED%5D]",
"tests/unit/core/test_web_requests.py::test_create_filtered_path_path[/-params0]"
] | [] | MIT License | 5,561 | 328 | [
"src/scout_apm/core/config.py",
"src/scout_apm/core/web_requests.py"
] |
|
mlrun__mlrun-27 | bb4b1fd00e08134004e8723ec32af9f52c06eb63 | 2019-10-10 13:52:43 | 931b0054f780784005f1cde9626da2cac489b5ea | diff --git a/mlrun/db/httpd.py b/mlrun/db/httpd.py
index e9c533546..1ed01fcd5 100644
--- a/mlrun/db/httpd.py
+++ b/mlrun/db/httpd.py
@@ -20,7 +20,6 @@ from http import HTTPStatus
from flask import Flask, jsonify, request
-from mlrun.artifacts import Artifact
from mlrun.db import RunDBError
from mlrun.db.filedb import FileRunDB
from mlrun.utils import logger
@@ -218,15 +217,10 @@ def store_artifact(project, uid):
_file_db.store_artifact(key, data, uid, tag, project)
return jsonify(ok=True)
-# curl http://localhost:8080/artifact/p1&key=k&tag=t
[email protected]('/artifact/<project>/<uid>', methods=['GET'])
+# curl http://localhost:8080/artifact/p1/tag/key
[email protected]('/artifact/<project>/<tag>/<path:key>', methods=['GET'])
@catch_err
-def read_artifact(project, uid):
- key = request.args.get('key')
- if not key:
- return json_error(HTTPStatus.BAD_REQUEST, reason='missing data')
-
- tag = request.args.get('tag', '')
+def read_artifact(project, tag, key):
data = _file_db.read_artifact(key, tag, project)
return data
diff --git a/mlrun/db/httpdb.py b/mlrun/db/httpdb.py
index cf516fffe..6080a32cd 100644
--- a/mlrun/db/httpdb.py
+++ b/mlrun/db/httpdb.py
@@ -93,13 +93,13 @@ class HTTPRunDB(RunDBInterface):
path = self._path_of('run', project, uid)
error = f'store run {project}/{uid}'
params = {'commit': bool2str(commit)}
- body = dict_to_json(struct)
+ body = _as_json(struct)
self._api_call('POST', path, error, params, body=body)
def update_run(self, updates: dict, uid, project=''):
path = self._path_of('run', project, uid)
error = f'update run {project}/{uid}'
- body = dict_to_json(updates)
+ body = _as_json(updates)
self._api_call('PATCH', path, error, body=body)
def read_run(self, uid, project=''):
@@ -149,17 +149,17 @@ class HTTPRunDB(RunDBInterface):
}
error = f'store artifact {project}/{uid}'
+
+ body = _as_json(artifact)
self._api_call(
- 'POST', path, error, params=params, body=dict_to_json(artifact))
+ 'POST', path, error, params=params, body=body)
def read_artifact(self, key, tag='', project=''):
- path = self._path_of('artifact', project, key) # TODO: uid?
- params = {
- 'key': key,
- 'tag': tag,
- }
+ project = project or default_project
+ tag = tag or 'latest'
+ path = self._path_of('artifact', project, tag, key)
error = f'read artifact {project}/{key}'
- resp = self._api_call('GET', path, error, params=params)
+ resp = self._api_call('GET', path, error)
return resp.content
def del_artifact(self, key, tag='', project=''):
@@ -197,3 +197,10 @@ class HTTPRunDB(RunDBInterface):
}
error = 'del artifacts'
self._api_call('DELETE', 'artifacts', error, params=params)
+
+
+def _as_json(obj):
+ fn = getattr(obj, 'to_json', None)
+ if fn:
+ return fn()
+ return dict_to_json(obj)
| httpd/db artifacts broken
@tebeka the artifacts get/store etc is broken
a. you must use the `artifact.to_json()` vs dumps like in filedb, since `to_json()` accounts for model details, also `body` is not stored in the DB
b. artifacts object path should be `/artifact/<project>/<uid/tag>/<key>` (key not as arg)
, (uid/tag is like a git tree and can also list all keys in a tree)
https://github.com/mlrun/mlrun/blob/0935dda40fb3a5db551a4b4e269adbac4813a792/mlrun/db/httpdb.py#L142 | mlrun/mlrun | diff --git a/tests/test_httpdb.py b/tests/test_httpdb.py
index e432064e9..88b04ccd7 100644
--- a/tests/test_httpdb.py
+++ b/tests/test_httpdb.py
@@ -157,7 +157,7 @@ def test_artifact(create_server):
db = server.conn
prj, uid, key, body = 'p7', 'u199', 'k800', 'cucumber'
- artifact = Artifact(key, body).to_dict()
+ artifact = Artifact(key, body)
db.store_artifact(key, artifact, uid, project=prj)
# TODO: Need a run file
@@ -168,7 +168,7 @@ def test_artifacts(create_server):
server: Server = create_server()
db = server.conn
prj, uid, key, body = 'p9', 'u19', 'k802', 'tomato'
- artifact = Artifact(key, body).to_dict()
+ artifact = Artifact(key, body)
db.store_artifact(key, artifact, uid, project=prj)
artifacts = db.list_artifacts(project=prj)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 3
},
"num_modified_files": 2
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aiohappyeyeballs==2.6.1
aiohttp==3.11.14
aiosignal==1.3.2
anyio==4.9.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==3.0.0
async-timeout==5.0.1
attrs==25.3.0
beautifulsoup4==4.13.3
bleach==6.2.0
blinker==1.9.0
boto3==1.37.23
botocore==1.37.23
cachetools==5.5.2
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
click==8.1.8
comm==0.2.2
debugpy==1.8.13
decorator==5.2.1
defusedxml==0.7.1
durationpy==0.9
exceptiongroup==1.2.2
executing==2.2.0
fastjsonschema==2.21.1
Flask==3.1.0
fqdn==1.5.1
frozenlist==1.5.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.38.0
idna==3.10
importlib_metadata==7.2.1
iniconfig==2.1.0
ipykernel==6.29.5
ipython==8.18.1
isoduration==20.11.0
itsdangerous==2.2.0
jedi==0.19.2
Jinja2==3.1.6
jmespath==1.0.1
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
jupyter-events==0.12.0
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyter_server==2.15.0
jupyter_server_terminals==0.5.3
jupyterlab_pygments==0.3.0
kubernetes==32.0.1
MarkupSafe==3.0.2
matplotlib-inline==0.1.7
mistune==3.1.3
-e git+https://github.com/mlrun/mlrun.git@bb4b1fd00e08134004e8723ec32af9f52c06eb63#egg=mlrun
multidict==6.2.0
nbclient==0.10.2
nbconvert==7.16.6
nbformat==5.10.4
nest-asyncio==1.6.0
nuclio-jupyter==0.11.1
nuclio_sdk==0.5.15
numpy==2.0.2
oauthlib==3.2.2
overrides==7.7.0
packaging==24.2
pandas==2.2.3
pandocfilters==1.5.1
parso==0.8.4
pexpect==4.9.0
platformdirs==4.3.7
pluggy==1.5.0
prometheus_client==0.21.1
prompt_toolkit==3.0.50
propcache==0.3.1
psutil==7.0.0
ptyprocess==0.7.0
pure_eval==0.2.3
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
Pygments==2.19.1
pytest==8.3.5
python-dateutil==2.9.0.post0
python-json-logger==3.3.0
pytz==2025.2
PyYAML==6.0.2
pyzmq==26.3.0
referencing==0.36.2
requests==2.32.3
requests-oauthlib==2.0.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rpds-py==0.24.0
rsa==4.9
s3transfer==0.11.4
Send2Trash==1.8.3
six==1.17.0
smmap==5.0.2
sniffio==1.3.1
soupsieve==2.6
stack-data==0.6.3
tabulate==0.9.0
terminado==0.18.1
tinycss2==1.4.0
tomli==2.2.1
tornado==6.4.2
traitlets==5.14.3
types-python-dateutil==2.9.0.20241206
typing_extensions==4.13.0
tzdata==2025.2
uri-template==1.3.0
urllib3==1.26.20
wcwidth==0.2.13
webcolors==24.11.1
webencodings==0.5.1
websocket-client==1.8.0
Werkzeug==3.1.3
yarl==1.18.3
zipp==3.21.0
| name: mlrun
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- aiohappyeyeballs==2.6.1
- aiohttp==3.11.14
- aiosignal==1.3.2
- anyio==4.9.0
- argon2-cffi==23.1.0
- argon2-cffi-bindings==21.2.0
- arrow==1.3.0
- asttokens==3.0.0
- async-timeout==5.0.1
- attrs==25.3.0
- beautifulsoup4==4.13.3
- bleach==6.2.0
- blinker==1.9.0
- boto3==1.37.23
- botocore==1.37.23
- cachetools==5.5.2
- certifi==2025.1.31
- cffi==1.17.1
- charset-normalizer==3.4.1
- click==8.1.8
- comm==0.2.2
- debugpy==1.8.13
- decorator==5.2.1
- defusedxml==0.7.1
- durationpy==0.9
- exceptiongroup==1.2.2
- executing==2.2.0
- fastjsonschema==2.21.1
- flask==3.1.0
- fqdn==1.5.1
- frozenlist==1.5.0
- gitdb==4.0.12
- gitpython==3.1.44
- google-auth==2.38.0
- idna==3.10
- importlib-metadata==7.2.1
- iniconfig==2.1.0
- ipykernel==6.29.5
- ipython==8.18.1
- isoduration==20.11.0
- itsdangerous==2.2.0
- jedi==0.19.2
- jinja2==3.1.6
- jmespath==1.0.1
- jsonpointer==3.0.0
- jsonschema==4.23.0
- jsonschema-specifications==2024.10.1
- jupyter-client==8.6.3
- jupyter-core==5.7.2
- jupyter-events==0.12.0
- jupyter-server==2.15.0
- jupyter-server-terminals==0.5.3
- jupyterlab-pygments==0.3.0
- kubernetes==32.0.1
- markupsafe==3.0.2
- matplotlib-inline==0.1.7
- mistune==3.1.3
- multidict==6.2.0
- nbclient==0.10.2
- nbconvert==7.16.6
- nbformat==5.10.4
- nest-asyncio==1.6.0
- nuclio-jupyter==0.11.1
- nuclio-sdk==0.5.15
- numpy==2.0.2
- oauthlib==3.2.2
- overrides==7.7.0
- packaging==24.2
- pandas==2.2.3
- pandocfilters==1.5.1
- parso==0.8.4
- pexpect==4.9.0
- platformdirs==4.3.7
- pluggy==1.5.0
- prometheus-client==0.21.1
- prompt-toolkit==3.0.50
- propcache==0.3.1
- psutil==7.0.0
- ptyprocess==0.7.0
- pure-eval==0.2.3
- pyasn1==0.6.1
- pyasn1-modules==0.4.2
- pycparser==2.22
- pygments==2.19.1
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- python-json-logger==3.3.0
- pytz==2025.2
- pyyaml==6.0.2
- pyzmq==26.3.0
- referencing==0.36.2
- requests==2.32.3
- requests-oauthlib==2.0.0
- rfc3339-validator==0.1.4
- rfc3986-validator==0.1.1
- rpds-py==0.24.0
- rsa==4.9
- s3transfer==0.11.4
- send2trash==1.8.3
- six==1.17.0
- smmap==5.0.2
- sniffio==1.3.1
- soupsieve==2.6
- stack-data==0.6.3
- tabulate==0.9.0
- terminado==0.18.1
- tinycss2==1.4.0
- tomli==2.2.1
- tornado==6.4.2
- traitlets==5.14.3
- types-python-dateutil==2.9.0.20241206
- typing-extensions==4.13.0
- tzdata==2025.2
- uri-template==1.3.0
- urllib3==1.26.20
- wcwidth==0.2.13
- webcolors==24.11.1
- webencodings==0.5.1
- websocket-client==1.8.0
- werkzeug==3.1.3
- yarl==1.18.3
- zipp==3.21.0
prefix: /opt/conda/envs/mlrun
| [
"tests/test_httpdb.py::test_artifact",
"tests/test_httpdb.py::test_artifacts"
] | [] | [
"tests/test_httpdb.py::test_log",
"tests/test_httpdb.py::test_run",
"tests/test_httpdb.py::test_runs",
"tests/test_httpdb.py::test_basic_auth",
"tests/test_httpdb.py::test_bearer_auth"
] | [] | Apache License 2.0 | 5,567 | 912 | [
"mlrun/db/httpd.py",
"mlrun/db/httpdb.py"
] |
|
Benardi__touvlo-34 | 0e1ee71ebec9a09370f928dd96d0ae11f3bba404 | 2019-10-13 17:08:20 | 0e1ee71ebec9a09370f928dd96d0ae11f3bba404 | diff --git a/touvlo/utils.py b/touvlo/utils.py
index fdf47db..ca8531c 100644
--- a/touvlo/utils.py
+++ b/touvlo/utils.py
@@ -35,7 +35,7 @@ def g_grad(x):
def gradient_descent(X, y, grad, initial_theta,
- alpha, num_iters, _lambda=None):
+ alpha, num_iters, **kwargs):
"""This function performs parameter optimization via gradient descent.
:param X: Features' dataset plus bias column.
@@ -56,22 +56,12 @@ def gradient_descent(X, y, grad, initial_theta,
:param num_iters: Number of times the optimization will be performed.
:type num_iters: int
- :param _lambda: Weight of the penalty term.
- :type _lambda: float
-
:returns: Optimized model parameters.
:rtype: numpy.array
"""
- if _lambda is not None:
- theta = copy(initial_theta)
-
- for _ in range(num_iters):
- theta = theta - alpha * grad(theta, X, y, _lambda)
-
- else:
- theta = copy(initial_theta)
- for _ in range(num_iters):
- theta = theta - alpha * grad(theta, X, y)
+ theta = copy(initial_theta)
+ for _ in range(num_iters):
+ theta = theta - alpha * grad(X, y, theta, **kwargs)
return theta
| Generalize the function utils.gradient_descent
The function `gradient_descent` should only be aware of the parameters `X, y, grad, initial_theta, alpha, num_iters`. Any further parameter should be fed to the `grad` function with no impact on the signature of the function `gradient_descent`.
TL; DR: Edit the function `gradient_descent` in the module `utils` to allow a gradient function (grad) of any signature. | Benardi/touvlo | diff --git a/tests/test_utils.py b/tests/test_utils.py
index d695ead..5adeb9b 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -4,7 +4,7 @@ import pytest
from numpy import array, cos, sin, exp
from numpy.testing import assert_allclose
-from touvlo.utils import numerical_grad, g_grad
+from touvlo.utils import numerical_grad, g_grad, gradient_descent
class TestLogisticRegression:
@@ -82,3 +82,45 @@ class TestLogisticRegression:
assert_allclose(g_grad(z),
[0.196612, 0.235004, 0.25, 0.235004, 0.196612],
rtol=0, atol=0.001, equal_nan=False)
+
+ def test_gradient_descent1(self, err):
+ def grad(X, y, theta):
+ m = len(y)
+ grad = (1 / m) * (X.T).dot(X.dot(theta) - y)
+ return grad
+
+ X = array([[0, 1, 2], [-1, 5, 3], [2, 0, 1]])
+ initial_theta = array([[0], [0], [0]])
+ y = array([[0.3], [1.2], [0.5]])
+ num_iters = 3
+ alpha = 1
+
+ assert_allclose(array([[-46.415], [276.248], [192.204]]),
+ gradient_descent(X, y, grad, initial_theta,
+ alpha, num_iters),
+ rtol=0, atol=0.001, equal_nan=False)
+
+ def test_gradient_descent2(self, err):
+ def grad(X, y, theta, schleem, plumbus, wubba, lubba):
+ m = len(y)
+ grad = (schleem / (m * wubba))
+ grad = grad * (X.T).dot(X.dot(theta) - y)
+ grad = grad + plumbus / (2 * lubba)
+ return grad
+
+ X = array([[0, 1, 2], [-1, 5, 3], [2, 0, 1]])
+ initial_theta = array([[0], [0], [0]])
+ y = array([[0.3], [1.2], [0.5]])
+ num_iters = 5
+ plumbus = 0.8
+ schleem = 0.6
+ wubba = 3.4
+ lubba = 2.7
+ alpha = 0.01
+
+ assert_allclose(array([[-0.0078777], [0.0106179], [0.0060865]]),
+ gradient_descent(X, y, grad, initial_theta,
+ alpha, num_iters, lubba=lubba,
+ schleem=schleem, wubba=wubba,
+ plumbus=plumbus),
+ rtol=0, atol=0.001, equal_nan=False)
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 1.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt",
"dev-requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | build==1.2.2.post1
check-manifest==0.50
cmake==4.0.0
exceptiongroup==1.2.2
flake8==7.2.0
flake8-import-order==0.18.2
importlib_metadata==8.6.1
iniconfig==2.1.0
mccabe==0.7.0
numpy==2.0.2
packaging==24.2
pbr==6.1.1
pluggy==1.5.0
pycodestyle==2.13.0
pyflakes==3.3.2
pyproject_hooks==1.2.0
pytest==8.3.5
tomli==2.2.1
-e git+https://github.com/Benardi/touvlo.git@0e1ee71ebec9a09370f928dd96d0ae11f3bba404#egg=touvlo
zipp==3.21.0
| name: touvlo
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- build==1.2.2.post1
- check-manifest==0.50
- cmake==4.0.0
- exceptiongroup==1.2.2
- flake8==7.2.0
- flake8-import-order==0.18.2
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- mccabe==0.7.0
- numpy==2.0.2
- packaging==24.2
- pbr==6.1.1
- pluggy==1.5.0
- pycodestyle==2.13.0
- pyflakes==3.3.2
- pyproject-hooks==1.2.0
- pytest==8.3.5
- tomli==2.2.1
- zipp==3.21.0
prefix: /opt/conda/envs/touvlo
| [
"tests/test_utils.py::TestLogisticRegression::test_gradient_descent1",
"tests/test_utils.py::TestLogisticRegression::test_gradient_descent2"
] | [] | [
"tests/test_utils.py::TestLogisticRegression::test_numeric_grad_1",
"tests/test_utils.py::TestLogisticRegression::test_numeric_grad_2",
"tests/test_utils.py::TestLogisticRegression::test_numeric_grad_3",
"tests/test_utils.py::TestLogisticRegression::test_numeric_grad_4",
"tests/test_utils.py::TestLogisticRegression::test_numeric_grad_5",
"tests/test_utils.py::TestLogisticRegression::test_sigmoid_gradient"
] | [] | MIT License | 5,578 | 342 | [
"touvlo/utils.py"
] |
|
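The touvlo patch above replaces the hard-coded `_lambda` parameter with `**kwargs` forwarded straight to `grad`. A minimal sketch of the resulting call pattern, reusing the inputs from the record's `test_gradient_descent1`; the ridge gradient below is an illustrative assumption, not part of the patch:

```python
import numpy as np

def gradient_descent(X, y, grad, initial_theta, alpha, num_iters, **kwargs):
    """Plain gradient descent; extra keyword args pass through to grad untouched."""
    theta = initial_theta.copy()
    for _ in range(num_iters):
        theta = theta - alpha * grad(X, y, theta, **kwargs)
    return theta

def lsq_grad(X, y, theta):
    # least-squares gradient, no extra parameters
    m = len(y)
    return (1 / m) * X.T.dot(X.dot(theta) - y)

def ridge_grad(X, y, theta, _lambda):
    # same gradient plus an L2 penalty term, received via **kwargs (hypothetical)
    m = len(y)
    return (1 / m) * (X.T.dot(X.dot(theta) - y) + _lambda * theta)

X = np.array([[0, 1, 2], [-1, 5, 3], [2, 0, 1]], dtype=float)
y = np.array([[0.3], [1.2], [0.5]])
theta0 = np.zeros((3, 1))

theta = gradient_descent(X, y, lsq_grad, theta0, alpha=1, num_iters=3)
# -> approximately [[-46.415], [276.248], [192.204]], as in the record's test

theta_r = gradient_descent(X, y, ridge_grad, theta0, alpha=0.01,
                           num_iters=5, _lambda=0.1)
```

Because `gradient_descent` never inspects `kwargs`, any gradient signature works as long as the caller supplies matching keyword arguments.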
spulec__freezegun-311 | f70b77c0867fe3bb78cb58570ece031d7fdd34e7 | 2019-10-13 22:37:53 | 181f7ac7f909e561e26f5b293d2d40e82eb99f7a | diff --git a/freezegun/api.py b/freezegun/api.py
index bc61270..0339059 100644
--- a/freezegun/api.py
+++ b/freezegun/api.py
@@ -26,9 +26,10 @@ real_time = time.time
real_localtime = time.localtime
real_gmtime = time.gmtime
real_strftime = time.strftime
+real_timegm = calendar.timegm
real_date = datetime.date
real_datetime = datetime.datetime
-real_date_objects = [real_time, real_localtime, real_gmtime, real_strftime, real_date, real_datetime]
+real_date_objects = [real_time, real_localtime, real_gmtime, real_strftime, real_timegm, real_date, real_datetime]
if _TIME_NS_PRESENT:
real_time_ns = time.time_ns
@@ -179,7 +180,7 @@ def fake_time():
if _should_use_real_time():
return real_time()
current_time = get_current_time()
- return calendar.timegm(current_time.timetuple()) + current_time.microsecond / 1000000.0
+ return real_timegm(current_time.timetuple()) + current_time.microsecond / 1000000.0
if _TIME_NS_PRESENT:
def fake_time_ns():
@@ -215,6 +216,14 @@ def fake_strftime(format, time_to_format=None):
else:
return real_strftime(format, time_to_format)
+
+def fake_timegm(struct_time):
+ if _should_use_real_time():
+ return real_timegm(struct_time)
+ else:
+ return real_timegm(get_current_time().timetuple())
+
+
if real_clock is not None:
def fake_clock():
if _should_use_real_time():
@@ -586,6 +595,8 @@ class _freeze_time(object):
return freeze_factory
# Change the modules
+ calendar.timegm = fake_timegm
+
datetime.datetime = FakeDatetime
datetime.date = FakeDate
@@ -609,6 +620,7 @@ class _freeze_time(object):
('real_localtime', real_localtime, fake_localtime),
('real_strftime', real_strftime, fake_strftime),
('real_time', real_time, fake_time),
+ ('real_timegm', real_timegm, fake_timegm),
]
if _TIME_NS_PRESENT:
@@ -655,6 +667,7 @@ class _freeze_time(object):
tz_offsets.pop()
if not freeze_factories:
+ calendar.timegm = real_timegm
datetime.datetime = real_datetime
datetime.date = real_date
copyreg.dispatch_table.pop(real_datetime)
| Support for calendar.timegm()
It would be awesome to support [`calendar.timegm()`](https://docs.python.org/3.7/library/calendar.html#calendar.timegm).
It's actually the inverse of `time.gmtime()` which is currently supported and frozen properly. | spulec/freezegun | diff --git a/tests/another_module.py b/tests/another_module.py
index 637c889..936bd7d 100644
--- a/tests/another_module.py
+++ b/tests/another_module.py
@@ -1,3 +1,4 @@
+from calendar import timegm
from datetime import date, datetime
from time import time, localtime, gmtime, strftime
@@ -8,6 +9,7 @@ from freezegun.api import (
fake_localtime,
fake_gmtime,
fake_strftime,
+ fake_timegm,
)
@@ -37,6 +39,10 @@ def get_strftime():
return strftime
+def get_timegm():
+ return timegm
+
+
# Fakes
def get_fake_datetime():
@@ -61,3 +67,7 @@ def get_fake_gmtime():
def get_fake_strftime():
return fake_strftime
+
+
+def get_fake_timegm():
+ return fake_timegm
diff --git a/tests/test_class_import.py b/tests/test_class_import.py
index 1c98215..36a6178 100644
--- a/tests/test_class_import.py
+++ b/tests/test_class_import.py
@@ -1,3 +1,5 @@
+import calendar
+import datetime
import time
import sys
from .fake_module import (
@@ -17,8 +19,8 @@ from freezegun.api import (
fake_localtime,
fake_gmtime,
fake_strftime,
+ fake_timegm,
)
-import datetime
@freeze_time("2012-01-14")
@@ -148,6 +150,8 @@ def test_import_after_start():
assert another_module.get_gmtime() is fake_gmtime
assert another_module.get_strftime() is time.strftime
assert another_module.get_strftime() is fake_strftime
+ assert another_module.get_timegm() is calendar.timegm
+ assert another_module.get_timegm() is fake_timegm
# Fakes
assert another_module.get_fake_datetime() is FakeDatetime
@@ -156,6 +160,7 @@ def test_import_after_start():
assert another_module.get_fake_localtime() is fake_localtime
assert another_module.get_fake_gmtime() is fake_gmtime
assert another_module.get_fake_strftime() is fake_strftime
+ assert another_module.get_fake_timegm() is fake_timegm
# Reals
assert another_module.get_datetime() is datetime.datetime
@@ -170,6 +175,8 @@ def test_import_after_start():
assert not another_module.get_gmtime() is fake_gmtime
assert another_module.get_strftime() is time.strftime
assert not another_module.get_strftime() is fake_strftime
+ assert another_module.get_timegm() is calendar.timegm
+ assert not another_module.get_timegm() is fake_timegm
# Fakes
assert another_module.get_fake_datetime() is FakeDatetime
@@ -178,6 +185,7 @@ def test_import_after_start():
assert another_module.get_fake_localtime() is fake_localtime
assert another_module.get_fake_gmtime() is fake_gmtime
assert another_module.get_fake_strftime() is fake_strftime
+ assert another_module.get_fake_timegm() is fake_timegm
def test_none_as_initial():
diff --git a/tests/test_datetimes.py b/tests/test_datetimes.py
index edd4a27..12da8b9 100644
--- a/tests/test_datetimes.py
+++ b/tests/test_datetimes.py
@@ -1,4 +1,5 @@
import time
+import calendar
import datetime
import unittest
import locale
@@ -210,6 +211,13 @@ def test_time_gmtime():
assert time_struct.tm_isdst == -1
+def test_calendar_timegm():
+ time_struct = time.gmtime()
+ assert calendar.timegm(time_struct) != 1326511294
+ with freeze_time('2012-01-14 03:21:34'):
+ assert calendar.timegm(time_struct) == 1326511294
+
+
@pytest.mark.skipif(not HAS_CLOCK,
reason="time.clock was removed in Python 3.8")
def test_time_clock():
@@ -644,6 +652,8 @@ def test_should_use_real_time():
from freezegun import api
api.call_stack_inspection_limit = 100 # just to increase coverage
+ current_time = time.gmtime()
+
with freeze_time(frozen):
assert time.time() == expected_frozen
# assert time.localtime() == expected_frozen_local
@@ -653,6 +663,8 @@ def test_should_use_real_time():
if HAS_TIME_NS:
assert time.time_ns() == expected_frozen * 1e9
+ assert calendar.timegm(current_time) == expected_frozen
+
with freeze_time(frozen, ignore=['_pytest', 'nose']):
assert time.time() != expected_frozen
# assert time.localtime() != expected_frozen_local
@@ -662,6 +674,8 @@ def test_should_use_real_time():
if HAS_TIME_NS:
assert time.time_ns() != expected_frozen * 1e9
+ assert calendar.timegm(current_time) != expected_frozen
+
@pytest.mark.skipif(not HAS_TIME_NS,
reason="time.time_ns is present only on 3.7 and above")
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_hyperlinks",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 0.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"numpy>=1.16.0",
"pandas>=1.0.0"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup==1.2.2
-e git+https://github.com/spulec/freezegun.git@f70b77c0867fe3bb78cb58570ece031d7fdd34e7#egg=freezegun
iniconfig==2.1.0
numpy==2.0.2
packaging==24.2
pandas==2.2.3
pluggy==1.5.0
pytest==8.3.5
python-dateutil==2.9.0.post0
pytz==2025.2
six==1.17.0
tomli==2.2.1
tzdata==2025.2
| name: freezegun
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- freezegun==0.3.12
- iniconfig==2.1.0
- numpy==2.0.2
- packaging==24.2
- pandas==2.2.3
- pluggy==1.5.0
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- pytz==2025.2
- six==1.17.0
- tomli==2.2.1
- tzdata==2025.2
prefix: /opt/conda/envs/freezegun
| [
"tests/test_class_import.py::test_import_datetime_works",
"tests/test_class_import.py::test_import_date_works",
"tests/test_class_import.py::test_import_time",
"tests/test_class_import.py::test_start_and_stop_works",
"tests/test_class_import.py::test_isinstance_works",
"tests/test_class_import.py::test_issubclass_works",
"tests/test_class_import.py::test_fake_uses_real_when_ignored",
"tests/test_class_import.py::test_can_ignore_email_module",
"tests/test_class_import.py::test_avoid_replacing_equal_to_anything",
"tests/test_class_import.py::test_import_localtime",
"tests/test_class_import.py::test_fake_gmtime_function",
"tests/test_class_import.py::test_fake_strftime_function",
"tests/test_class_import.py::test_none_as_initial",
"tests/test_datetimes.py::test_simple_api",
"tests/test_datetimes.py::test_tz_offset",
"tests/test_datetimes.py::test_timedelta_tz_offset",
"tests/test_datetimes.py::test_tz_offset_with_today",
"tests/test_datetimes.py::test_zero_tz_offset_with_time",
"tests/test_datetimes.py::test_tz_offset_with_time",
"tests/test_datetimes.py::test_time_with_microseconds",
"tests/test_datetimes.py::test_time_with_dst",
"tests/test_datetimes.py::test_manual_increment",
"tests/test_datetimes.py::test_manual_increment_seconds",
"tests/test_datetimes.py::test_move_to",
"tests/test_datetimes.py::test_bad_time_argument",
"tests/test_datetimes.py::test_time_gmtime",
"tests/test_datetimes.py::test_calendar_timegm",
"tests/test_datetimes.py::test_time_localtime",
"tests/test_datetimes.py::test_strftime",
"tests/test_datetimes.py::test_real_strftime_fall_through",
"tests/test_datetimes.py::test_date_object",
"tests/test_datetimes.py::test_old_date_object",
"tests/test_datetimes.py::test_invalid_type",
"tests/test_datetimes.py::test_datetime_object",
"tests/test_datetimes.py::test_function_object",
"tests/test_datetimes.py::test_lambda_object",
"tests/test_datetimes.py::test_generator_object",
"tests/test_datetimes.py::test_old_datetime_object",
"tests/test_datetimes.py::test_decorator",
"tests/test_datetimes.py::test_decorator_wrapped_attribute",
"tests/test_datetimes.py::Tester::test_the_class",
"tests/test_datetimes.py::Tester::test_still_the_same",
"tests/test_datetimes.py::Tester::test_class_name_preserved_by_decorator",
"tests/test_datetimes.py::Tester::test_class_decorator_ignores_nested_class",
"tests/test_datetimes.py::Tester::test_class_decorator_wraps_callable_object_py3",
"tests/test_datetimes.py::Tester::test_class_decorator_respects_staticmethod",
"tests/test_datetimes.py::test_nice_datetime",
"tests/test_datetimes.py::test_datetime_date_method",
"tests/test_datetimes.py::test_context_manager",
"tests/test_datetimes.py::test_nested_context_manager",
"tests/test_datetimes.py::test_nested_context_manager_with_tz_offsets",
"tests/test_datetimes.py::test_isinstance_with_active",
"tests/test_datetimes.py::test_isinstance_without_active",
"tests/test_datetimes.py::TestUnitTestClassDecorator::test_class_decorator_works_on_unittest",
"tests/test_datetimes.py::TestUnitTestClassDecorator::test_class_name_preserved_by_decorator",
"tests/test_datetimes.py::TestUnitTestClassDecoratorWithNoSetUpOrTearDown::test_class_decorator_works_on_unittest",
"tests/test_datetimes.py::TestUnitTestClassDecoratorSubclass::test_class_decorator_works_on_unittest",
"tests/test_datetimes.py::TestUnitTestClassDecoratorSubclass::test_class_name_preserved_by_decorator",
"tests/test_datetimes.py::UnfrozenInheritedTests::test_time_is_not_frozen",
"tests/test_datetimes.py::FrozenInheritedTests::test_time_is_frozen",
"tests/test_datetimes.py::TestOldStyleClasses::test_direct_method",
"tests/test_datetimes.py::TestOldStyleClasses::test_inherited_method",
"tests/test_datetimes.py::test_min_and_max",
"tests/test_datetimes.py::test_freeze_with_timezone_aware_datetime_in_utc",
"tests/test_datetimes.py::test_freeze_with_timezone_aware_datetime_in_non_utc",
"tests/test_datetimes.py::test_time_with_nested",
"tests/test_datetimes.py::test_should_use_real_time",
"tests/test_datetimes.py::test_time_ns"
] | [
"tests/test_class_import.py::test_import_after_start"
] | [] | [] | Apache License 2.0 | 5,579 | 617 | [
"freezegun/api.py"
] |
|
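The freezegun patch above adds a `fake_timegm` so `calendar.timegm` honors the frozen clock. The relationship the issue relies on — `calendar.timegm` being the exact inverse of `time.gmtime` — can be checked with the standard library alone; the epoch value below is the same instant the record's `test_calendar_timegm` freezes:

```python
import calendar
import time

ts = 1326511294  # 2012-01-14 03:21:34 UTC, the frozen instant in the record's test

# timegm inverts gmtime exactly (both work in UTC, ignoring the local timezone)
struct = time.gmtime(ts)
assert calendar.timegm(struct) == ts

# and the round trip holds for any whole-second timestamp
for probe in (0, 1, 86400, 2**31 - 1):
    assert calendar.timegm(time.gmtime(probe)) == probe
```

With the patch applied, calling `calendar.timegm(...)` inside `freeze_time('2012-01-14 03:21:34')` yields `1326511294` regardless of the struct passed in, which is what the added test asserts.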
markusressel__container-app-conf-28 | c68b268c9981fc2eabafa8ab060d0196dfab70ee | 2019-10-14 03:04:44 | c68b268c9981fc2eabafa8ab060d0196dfab70ee | diff --git a/container_app_conf/__init__.py b/container_app_conf/__init__.py
index edc7853..8100383 100644
--- a/container_app_conf/__init__.py
+++ b/container_app_conf/__init__.py
@@ -98,6 +98,7 @@ class ConfigBase:
Loads the configuration from all available sources
"""
for source in reversed(self.data_sources):
+ source.load()
for entry in self._config_entries.values():
if source.has(entry):
entry.value = source.get(entry)
diff --git a/container_app_conf/source/__init__.py b/container_app_conf/source/__init__.py
index 775e152..f34e59d 100644
--- a/container_app_conf/source/__init__.py
+++ b/container_app_conf/source/__init__.py
@@ -28,13 +28,38 @@ LOGGER = logging.getLogger(__name__)
class DataSource:
+ def __init__(self):
+ self.root = {}
+
+ def load(self):
+ """
+ Loads all values of this data source into memory
+ """
+ self.root = self._load()
+
+ def _load(self) -> dict:
+ """
+ Loads all values of this data source and returns them as a dictionary tree
+ :return: value tree
+ """
+ raise NotImplementedError()
+
def has(self, entry: ConfigEntry) -> bool:
"""
Checks whether the data source has a value for the given config entry
:param entry: the config entry to check
:return: True if the source contains a value for the given entry, False otherwise
"""
- raise NotImplementedError()
+ value = self.root
+ if value is None:
+ return False
+
+ for key in entry.key_path:
+ value = value.get(key)
+ if value is None:
+ return False
+
+ return True
def get(self, entry: ConfigEntry) -> any:
"""
@@ -42,7 +67,16 @@ class DataSource:
:param entry: config entry
:return: value
"""
- raise NotImplementedError()
+ value = self.root
+ if value is None:
+ return None
+
+ for key in entry.key_path:
+ value = value.get(key)
+ if value is None:
+ return entry.value
+
+ return value
def write_reference(self, reference: dict):
"""
@@ -63,7 +97,7 @@ class FilesystemSource(DataSource):
:param file_name: allowed config file name(s)
:param file_extension: allowed config file extension(s)
"""
- self.root = {}
+ super().__init__()
if path is None:
from container_app_conf import DEFAULT_CONFIG_FILE_PATHS
self.paths = DEFAULT_CONFIG_FILE_PATHS
@@ -75,18 +109,15 @@ class FilesystemSource(DataSource):
else:
self.file_extensions = file_extension if isinstance(file_extension, list) else [file_extension]
- def load(self):
- """
- Loads the config file content into memory
- """
+ def _load(self) -> dict:
file_path = self._find_config_file()
if file_path is None:
LOGGER.debug("No config file found in paths: {}".format(self.paths))
- return
+ return {}
- self.root = self._load(file_path)
+ return self._load_file(file_path)
- def _load(self, file_path: str) -> dict:
+ def _load_file(self, file_path: str) -> dict:
"""
Parses the file content into memory
:param file_path: the path of the file to parse
@@ -94,30 +125,6 @@ class FilesystemSource(DataSource):
"""
raise NotImplementedError()
- def has(self, entry: ConfigEntry) -> bool:
- value = self.root
- if value is None:
- return False
-
- for key in entry.key_path:
- value = value.get(key)
- if value is None:
- return False
-
- return True
-
- def get(self, entry: ConfigEntry) -> any:
- value = self.root
- if value is None:
- return None
-
- for key in entry.key_path:
- value = value.get(key)
- if value is None:
- return entry.value
-
- return value
-
def write_reference(self, reference: dict):
"""
Writes a reference config file
diff --git a/container_app_conf/source/env_source.py b/container_app_conf/source/env_source.py
index db87c44..1ec9d95 100644
--- a/container_app_conf/source/env_source.py
+++ b/container_app_conf/source/env_source.py
@@ -28,6 +28,7 @@ class EnvSource(DataSource):
"""
Data source utilizing environment variables
"""
+ KEY_SPLIT_CHAR = "_"
def has(self, entry: ConfigEntry) -> bool:
return self.env_key(entry) in os.environ.keys()
@@ -36,6 +37,10 @@ class EnvSource(DataSource):
key = self.env_key(entry)
return os.environ.get(key, None)
- @staticmethod
- def env_key(entry: ConfigEntry) -> str:
- return "_".join(entry.key_path).upper()
+ def env_key(self, entry: ConfigEntry) -> str:
+ return self.KEY_SPLIT_CHAR.join(entry.key_path).upper()
+
+ def _load(self) -> dict:
+ # loading env is pointless since it is already in memory
+ # instead we override has() and get() methods to directly access the environment
+ pass
diff --git a/container_app_conf/source/json_source.py b/container_app_conf/source/json_source.py
index 50bb032..9769355 100644
--- a/container_app_conf/source/json_source.py
+++ b/container_app_conf/source/json_source.py
@@ -31,7 +31,7 @@ class JsonSource(FilesystemSource):
"""
DEFAULT_FILE_EXTENSIONS = ['json']
- def _load(self, file_path: str) -> dict:
+ def _load_file(self, file_path: str) -> dict:
with open(file_path, 'r') as file:
return json.load(file)
diff --git a/container_app_conf/source/toml_source.py b/container_app_conf/source/toml_source.py
index 357008b..9fc8e57 100644
--- a/container_app_conf/source/toml_source.py
+++ b/container_app_conf/source/toml_source.py
@@ -32,7 +32,7 @@ class TomlSource(FilesystemSource):
"""
DEFAULT_FILE_EXTENSIONS = ['toml', 'tml']
- def _load(self, file_path: str) -> dict:
+ def _load_file(self, file_path: str) -> dict:
with open(file_path, 'r') as file:
return toml.load(file)
diff --git a/container_app_conf/source/yaml_source.py b/container_app_conf/source/yaml_source.py
index 61b294e..eb9b9b5 100644
--- a/container_app_conf/source/yaml_source.py
+++ b/container_app_conf/source/yaml_source.py
@@ -32,7 +32,7 @@ class YamlSource(FilesystemSource):
"""
DEFAULT_FILE_EXTENSIONS = ['yaml', 'yml']
- def _load(self, file_path: str) -> dict:
+ def _load_file(self, file_path: str) -> dict:
with open(file_path, 'r') as ymlfile:
return yaml.load(ymlfile, Loader=yaml.FullLoader)
| Automatically load data source values
**Is your feature request related to a problem? Please describe.**
When using a custom YAML data source the dev has to call `load()` manually which is cumbersome and easy to forget.
**Describe the solution you'd like**
Automatically load values of data sources when loading config. | markusressel/container-app-conf | diff --git a/tests/data_source/__init__.py b/tests/data_source/__init__.py
new file mode 100644
index 0000000..3ccbd83
--- /dev/null
+++ b/tests/data_source/__init__.py
@@ -0,0 +1,43 @@
+# Copyright (c) 2019 Markus Ressel
+# .
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+# .
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+# .
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+from typing import Dict
+
+from container_app_conf import DataSource, ConfigEntry
+
+
+class MemoryDataSource(DataSource):
+
+ def items(self) -> Dict[ConfigEntry, any]:
+ """
+ :return: dictionary of "config entry" -> "value" items
+ """
+ raise NotImplementedError()
+
+ def _load(self) -> dict:
+ data = {}
+ for entry, value in self.items().items():
+ key_path = entry.key_path
+ current_level = data
+ for key in key_path[:-1]:
+ current_level[key] = {}
+ current_level = current_level[key]
+ current_level[key_path[-1]] = value
+
+ return data
diff --git a/tests/source_test.py b/tests/data_source/source_test.py
similarity index 87%
rename from tests/source_test.py
rename to tests/data_source/source_test.py
index 34875aa..9a186b7 100644
--- a/tests/source_test.py
+++ b/tests/data_source/source_test.py
@@ -17,45 +17,40 @@
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
-from container_app_conf import DataSource, ConfigEntry
+from typing import Dict
+
+from container_app_conf import ConfigEntry
from container_app_conf.entry.int import IntConfigEntry
from container_app_conf.entry.string import StringConfigEntry
from container_app_conf.source.json_source import JsonSource
from container_app_conf.source.toml_source import TomlSource
from container_app_conf.source.yaml_source import YamlSource
from tests import TestBase
+from tests.data_source import MemoryDataSource
from tests.singleton_test import TestConfigBase2
-def _key(entry: ConfigEntry) -> str:
- return "_".join(entry.key_path)
-
-
-class MemoryDataSource(DataSource):
- data = {
- _key(TestConfigBase2.BOOL): True
- }
-
- def has(self, entry: ConfigEntry) -> bool:
- key = _key(entry)
- return key in self.data.keys()
+class MemoryDataSource1(MemoryDataSource):
- def get(self, entry: ConfigEntry) -> any:
- key = _key(entry)
- return self.data[key]
+ def items(self) -> Dict[ConfigEntry, any]:
+ return {
+ TestConfigBase2.BOOL: True
+ }
class MemoryDataSource2(MemoryDataSource):
- data = {
- _key(TestConfigBase2.BOOL): False
- }
+
+ def items(self) -> Dict[ConfigEntry, any]:
+ return {
+ TestConfigBase2.BOOL: False
+ }
class TestDataSource(TestBase):
def test_priority(self):
conf = TestConfigBase2(data_sources=[
- MemoryDataSource(),
+ MemoryDataSource1(),
MemoryDataSource2()
], singleton=False)
@@ -63,7 +58,7 @@ class TestDataSource(TestBase):
conf2 = TestConfigBase2(data_sources=[
MemoryDataSource2(),
- MemoryDataSource()
+ MemoryDataSource1()
], singleton=False)
self.assertFalse(conf2.BOOL.value)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 6
} | 4.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
-e git+https://github.com/markusressel/container-app-conf.git@c68b268c9981fc2eabafa8ab060d0196dfab70ee#egg=container_app_conf
exceptiongroup==1.2.2
importlib-metadata==6.7.0
iniconfig==2.0.0
packaging==24.0
pluggy==1.2.0
py-range-parse==1.0.5
pytest==7.4.4
python-dateutil==2.9.0.post0
pytimeparse==1.1.8
PyYAML==6.0.1
six==1.17.0
toml==0.10.2
tomli==2.0.1
typing_extensions==4.7.1
zipp==3.15.0
| name: container-app-conf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- packaging==24.0
- pluggy==1.2.0
- py-range-parse==1.0.5
- pytest==7.4.4
- python-dateutil==2.9.0.post0
- pytimeparse==1.1.8
- pyyaml==6.0.1
- six==1.17.0
- toml==0.10.2
- tomli==2.0.1
- typing-extensions==4.7.1
- zipp==3.15.0
prefix: /opt/conda/envs/container-app-conf
| [
"tests/data_source/source_test.py::TestDataSource::test_priority"
] | [] | [
"tests/data_source/source_test.py::TestDataSource::test_json",
"tests/data_source/source_test.py::TestDataSource::test_toml",
"tests/data_source/source_test.py::TestDataSource::test_yaml"
] | [] | MIT License | 5,584 | 1,763 | [
"container_app_conf/__init__.py",
"container_app_conf/source/__init__.py",
"container_app_conf/source/env_source.py",
"container_app_conf/source/json_source.py",
"container_app_conf/source/toml_source.py",
"container_app_conf/source/yaml_source.py"
] |
|
RobinL__fuzzymatcher-50 | 5f0b19d50d5dcbcd930f9634d2bf429007b5efcc | 2019-10-14 10:06:02 | 5f0b19d50d5dcbcd930f9634d2bf429007b5efcc | diff --git a/fuzzymatcher/data_getter_sqlite.py b/fuzzymatcher/data_getter_sqlite.py
index c54eec8..e97808e 100644
--- a/fuzzymatcher/data_getter_sqlite.py
+++ b/fuzzymatcher/data_getter_sqlite.py
@@ -177,10 +177,21 @@ class DataGetter:
"""
# This fails if the special tokens 'and' or 'or' are in fts string! See issue 35!
- tokens_to_remove = ["AND", "OR"]
- tokens = [t for t in tokens if t not in tokens_to_remove]
+ tokens_to_escape = ["AND", "OR", "NEAR"]
+
+ def escape_token(t):
+ # return t
+ if t in tokens_to_escape:
+ return '"' + t + '"'
+ else:
+ return t
+
+
+ tokens = [escape_token(t) for t in tokens]
+
fts_string = " ".join(tokens)
+
if misspelling:
table_name = "_concat_all_alternatives"
else:
@@ -188,6 +199,7 @@ class DataGetter:
sql = get_records_sql.format(table_name, fts_string, self.return_records_limit)
+
cur = self.con.cursor()
cur.execute(sql)
results = cur.fetchall()
| OperationalError: malformed MATCH expression
Using `fuzzy_left_join`, I came across the above error while matching addresses.
It seems to relate to the following string: `'16 Orchard Court, Near Stepaside Park, Stepaside, Dublin, ireland'`
(note: I swapped out the street name for another for privacy reasons, but string length and structure is the same)
Code which I ran:
```python
import fuzzymatcher
import pandas as pd
df_left = contacts_unique_addresses
df_right = register
left_on = ["match_string"]
right_on = ["match_string"]
matched = fuzzymatcher.fuzzy_left_join(df_left, df_right, left_on, right_on)
```
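For context: SQLite's FTS MATCH grammar treats bare `AND`, `OR` and `NEAR` as query operators, so an address token like "Near" can produce a malformed expression. Below is a minimal, hypothetical sketch of keyword escaping — quoting such tokens so FTS treats them as literal terms. It is an illustration of the approach, not fuzzymatcher's actual code:

```python
# Tokens that the SQLite FTS query parser interprets as operators.
FTS_KEYWORDS = {"AND", "OR", "NEAR"}

def build_fts_query(tokens):
    """Join tokens into an FTS MATCH string, double-quoting any token
    that would otherwise be parsed as an operator."""
    return " ".join('"%s"' % t if t in FTS_KEYWORDS else t for t in tokens)

print(build_fts_query(["NEAR", "IRELAND", "STEPASIDE"]))
# → "NEAR" IRELAND STEPASIDE
```

Quoting keeps the token searchable, which is preferable to dropping it from the query entirely.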
And the error (is it possible that my string is too long?):
```python
OperationalError Traceback (most recent call last)
<ipython-input-78-6bd0c667558c> in <module>
7 right_on = ["match_string"]
8
----> 9 matched = fuzzymatcher.fuzzy_left_join(df_left, df_right, left_on, right_on)
~\Anaconda3\lib\site-packages\fuzzymatcher\__init__.py in fuzzy_left_join(df_left, df_right, left_on, right_on, left_id_col, right_id_col)
39 m = Matcher(dp, dg, s)
40 m.add_data(df_left, df_right, left_on, right_on, left_id_col, right_id_col)
---> 41 m.match_all()
42
43 return m.get_left_join_table()
~\Anaconda3\lib\site-packages\fuzzymatcher\matcher.py in match_all(self)
90
91 # Get a table that contains only the matches, scores and ids
---> 92 self.link_table = self._match_processed_data()
93
94 def get_formatted_link_table(self):
~\Anaconda3\lib\site-packages\fuzzymatcher\matcher.py in _match_processed_data(self)
134 log.debug(str_template.format(counter, (counter/total)*100, diff.minutes, diff.seconds))
135
--> 136 this_record.find_and_score_potential_matches()
137 link_table_list.extend(this_record.get_link_table_rows())
138
~\Anaconda3\lib\site-packages\fuzzymatcher\record.py in find_and_score_potential_matches(self)
74 def find_and_score_potential_matches(self):
75 # Each left_record has a list of left_record ids
---> 76 self.matcher.data_getter.get_potential_match_ids_from_record(self)
77
78 def get_link_table_rows(self):
~\Anaconda3\lib\site-packages\fuzzymatcher\data_getter_sqlite.py in get_potential_match_ids_from_record(self, rec_left)
91
92 for token_list in token_lists:
---> 93 self._search_specific_to_general_single(token_list, rec_left)
94 if not self._found_enough_matches(rec_left):
95 self._search_specific_to_general_band(token_list, rec_left)
~\Anaconda3\lib\site-packages\fuzzymatcher\data_getter_sqlite.py in _search_specific_to_general_single(self, token_list, rec_left)
115 for i in range(len(token_list)):
116 sub_tokens = token_list[i:]
--> 117 new_matches = self._tokens_to_matches(tuple(sub_tokens))
118
119 self._add_matches_to_potential_matches(new_matches, rec_left)
~\Anaconda3\lib\site-packages\fuzzymatcher\data_getter_sqlite.py in _tokens_to_matches(self, tokens, misspelling)
190
191 cur = self.con.cursor()
--> 192 cur.execute(sql)
193 results = cur.fetchall()
194
OperationalError: malformed MATCH expression: [NEAR IRELAND STEPASIDE STEPASIDE ORCHARD 16 COURT PARK DUBLIN]
``` | RobinL/fuzzymatcher | diff --git a/tests/data/left_token_escape.csv b/tests/data/left_token_escape.csv
new file mode 100644
index 0000000..6da12f6
--- /dev/null
+++ b/tests/data/left_token_escape.csv
@@ -0,0 +1,5 @@
+id,fname,mname,lname,dob,another_field
+1,or,or and,and,20/05/1980,other data
+2,or,or,or smith or,15/06/1990,more data
+3,near,and,near,20/05/1960,another thing
+
diff --git a/tests/data/right_token_escape.csv b/tests/data/right_token_escape.csv
new file mode 100644
index 0000000..1b3232b
--- /dev/null
+++ b/tests/data/right_token_escape.csv
@@ -0,0 +1,4 @@
+id,name,middlename,surname,date,other
+1,or,or,or smith or,15/06/1990,more data
+2,near,and,near,20/05/1960,another thing
+3,or,or and,and,20/05/1980,other data
\ No newline at end of file
diff --git a/tests/test_misc.py b/tests/test_misc.py
index cbb1668..2377c62 100644
--- a/tests/test_misc.py
+++ b/tests/test_misc.py
@@ -17,4 +17,31 @@ class TestNulls(unittest.TestCase):
on = ["first_name", "surname", "dob", "city"]
- flj = link_table(df_left, df_right, on, on)
\ No newline at end of file
+ flj = link_table(df_left, df_right, on, on)
+
+
+class TestNulls(unittest.TestCase):
+ """
+ Test what happens when the user provides input data with
+ fts4 match expression keywords like AND, OR, NEAR
+ """
+
+ def test_nulls_no_errors(self):
+ """
+
+ """
+
+
+ df_left = pd.read_csv("tests/data/left_token_escape.csv")
+ df_right = pd.read_csv("tests/data/right_token_escape.csv")
+
+ # Columns to match on from df_left
+ left_on = ["fname", "mname", "lname"]
+
+ # Columns to match on from df_right
+ right_on = ["name", "middlename", "surname"]
+
+ on = ["first_name", "surname", ]
+
+ flj = link_table(df_left, df_right, left_on, right_on,
+ left_id_col="id", right_id_col="id")
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 1
} | 0.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc libsqlite3-dev"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
exceptiongroup==1.2.2
-e git+https://github.com/RobinL/fuzzymatcher.git@5f0b19d50d5dcbcd930f9634d2bf429007b5efcc#egg=fuzzymatcher
fuzzywuzzy==0.18.0
importlib-metadata==6.7.0
iniconfig==2.0.0
Levenshtein==0.23.0
Metaphone==0.6
numpy==1.21.6
packaging==24.0
pandas==1.3.5
pluggy==1.2.0
pytest==7.4.4
python-dateutil==2.9.0.post0
python-Levenshtein==0.23.0
pytz==2025.2
rapidfuzz==3.4.0
six==1.17.0
tomli==2.0.1
typing_extensions==4.7.1
zipp==3.15.0
| name: fuzzymatcher
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- exceptiongroup==1.2.2
- fuzzywuzzy==0.18.0
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- levenshtein==0.23.0
- metaphone==0.6
- numpy==1.21.6
- packaging==24.0
- pandas==1.3.5
- pluggy==1.2.0
- pytest==7.4.4
- python-dateutil==2.9.0.post0
- python-levenshtein==0.23.0
- pytz==2025.2
- rapidfuzz==3.4.0
- six==1.17.0
- tomli==2.0.1
- typing-extensions==4.7.1
- zipp==3.15.0
prefix: /opt/conda/envs/fuzzymatcher
| [
"tests/test_misc.py::TestNulls::test_nulls_no_errors"
] | [] | [] | [] | MIT License | 5,585 | 318 | [
"fuzzymatcher/data_getter_sqlite.py"
] |
|
MaxGhenis__microdf-31 | 2e3330a96328a0ac293771bc1b46349689a02e00 | 2019-10-15 05:09:58 | 2e3330a96328a0ac293771bc1b46349689a02e00 | diff --git a/microdf/__init__.py b/microdf/__init__.py
index 993a7dd..5033ad2 100644
--- a/microdf/__init__.py
+++ b/microdf/__init__.py
@@ -6,6 +6,7 @@ from .custom_taxes import *
from .income_measures import *
from .inequality import *
from .style import *
+from .tax import *
from .ubi import *
from .utils import *
diff --git a/microdf/tax.py b/microdf/tax.py
new file mode 100644
index 0000000..5979a58
--- /dev/null
+++ b/microdf/tax.py
@@ -0,0 +1,21 @@
+import pandas as pd
+
+
+def tax_from_mtrs(val, brackets, rates):
+ # Calculates tax liability based on a marginal tax rate schedule.
+ #
+ # Args:
+ # val: Value to assess tax on, e.g. wealth or income (list or Series).
+ # brackets: Left side of each bracket (list or Series).
+ # rates: Rate corresponding to each bracket.
+ #
+ # Returns:
+ # Series of tax liabilities with the same size as val.
+ df_tax = pd.DataFrame({'brackets': brackets, 'rates': rates})
+ df_tax['base_tax'] = df_tax.brackets.\
+ sub(df_tax.brackets.shift(fill_value=0)).\
+ mul(df_tax.rates.shift(fill_value=0)).cumsum()
+ rows = df_tax.brackets.searchsorted(val, side='right') - 1
+ income_bracket_df = df_tax.loc[rows].reset_index(drop=True)
+ return pd.Series(val).sub(income_bracket_df.brackets).\
+ mul(income_bracket_df.rates).add(income_bracket_df.base_tax)
| Add function to calculate tax from a MTR schedule
Basic tax calculation:
```
def tax_from_mtrs(val, brackets, rates):
# Args:
# val: Value to assess tax on, e.g. wealth or income (list or Series).
# brackets: Left side of each bracket (list or Series).
# rates: Rate corresponding to each bracket.
df_tax = pd.DataFrame({'brackets': brackets, 'rates': rates})
df_tax['base_tax'] = df_tax.brackets.\
sub(df_tax.brackets.shift(fill_value=0)).\
mul(df_tax.rates.shift(fill_value=0)).cumsum()
rows = df_tax.brackets.searchsorted(val, side='right') - 1
income_bracket_df = df_tax.loc[rows].reset_index(drop=True)
return pd.Series(val).sub(income_bracket_df.brackets).\
mul(income_bracket_df.rates).add(income_bracket_df.base_tax)
``` | MaxGhenis/microdf | diff --git a/microdf/tests/test_tax.py b/microdf/tests/test_tax.py
new file mode 100644
index 0000000..f4cec43
--- /dev/null
+++ b/microdf/tests/test_tax.py
@@ -0,0 +1,13 @@
+import pytest
+import pandas as pd
+import microdf as mdf
+
+
+def test_tax():
+ # Consider a MTR schedule of 0% up to 10,000, then 10% after that.
+ BRACKETS = [0, 10e3]
+ RATES = [0, 0.1]
+ INCOME = [0, 5e3, 10e3, 10e3 + 1, 20e3]
+ EXPECTED = [0, 0, 0, 0.1, 1e3]
+ res = mdf.tax_from_mtrs(INCOME, BRACKETS, RATES)
+ pd.testing.assert_series_equal(res, pd.Series(EXPECTED))
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_added_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | unknown | {
"env_vars": null,
"env_yml_path": [
"environment.yml"
],
"install": "pip install -e .[taxcalc]",
"log_parser": "parse_log_pytest",
"no_use_env": true,
"packages": "environment.yml",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aiohttp @ file:///home/conda/feedstock_root/build_artifacts/aiohttp_1649013154501/work
aiosignal @ file:///home/conda/feedstock_root/build_artifacts/aiosignal_1734342155601/work
async-timeout @ file:///home/conda/feedstock_root/build_artifacts/async-timeout_1691763562544/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1741918516150/work
bokeh @ file:///home/conda/feedstock_root/build_artifacts/bokeh_1706215790147/work
Brotli @ file:///home/conda/feedstock_root/build_artifacts/brotli-split_1648883617327/work
certifi @ file:///home/conda/feedstock_root/build_artifacts/certifi_1739515848642/work/certifi
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1636046055389/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1661170624537/work
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1733218098505/work
contourpy @ file:///home/conda/feedstock_root/build_artifacts/contourpy_1652954513473/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1733332471406/work
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1733208806608/work
fonttools @ file:///home/conda/feedstock_root/build_artifacts/fonttools_1651017733844/work
frozenlist @ file:///home/conda/feedstock_root/build_artifacts/frozenlist_1648771673823/work
fsspec @ file:///home/conda/feedstock_root/build_artifacts/fsspec_1743361113926/work
h2 @ file:///home/conda/feedstock_root/build_artifacts/h2_1738578511449/work
hpack @ file:///home/conda/feedstock_root/build_artifacts/hpack_1737618293087/work
hyperframe @ file:///home/conda/feedstock_root/build_artifacts/hyperframe_1737618333194/work
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1733211830134/work
importlib_resources @ file:///home/conda/feedstock_root/build_artifacts/importlib_resources_1736252299705/work
iniconfig @ file:///home/conda/feedstock_root/build_artifacts/iniconfig_1733223141826/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1741263328855/work
kiwisolver @ file:///home/conda/feedstock_root/build_artifacts/kiwisolver_1648854392795/work
llvmlite @ file:///croot/llvmlite_1736366675558/work
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1648737556467/work
marshmallow @ file:///home/conda/feedstock_root/build_artifacts/marshmallow_1738612362469/work
matplotlib @ file:///croot/matplotlib-suite_1713336378214/work
matplotlib-label-lines==0.8.0
-e git+https://github.com/MaxGhenis/microdf.git@2e3330a96328a0ac293771bc1b46349689a02e00#egg=microdf
more-itertools==10.6.0
multidict @ file:///home/conda/feedstock_root/build_artifacts/multidict_1648882420423/work
munkres==1.1.4
numba @ file:///croot/numba_1738606613869/work
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1651020388495/work
olefile @ file:///home/conda/feedstock_root/build_artifacts/olefile_1734769783178/work
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1733203243479/work
pandas==1.4.2
paramtools @ file:///home/conda/feedstock_root/build_artifacts/paramtools_1736456544755/work
patsy @ file:///home/conda/feedstock_root/build_artifacts/patsy_1733792384640/work
Pillow @ file:///home/conda/feedstock_root/build_artifacts/pillow_1630696601414/work
pluggy @ file:///home/conda/feedstock_root/build_artifacts/pluggy_1733222765875/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/bld/rattler-build_pycparser_1733195786/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1652235407899/work
PyQt5==5.15.4
PyQt5-sip==12.9.0
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1733217236728/work
pytest @ file:///home/conda/feedstock_root/build_artifacts/pytest_1740946542080/work
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1733215673016/work
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1742920838005/work
PyYAML @ file:///home/conda/feedstock_root/build_artifacts/pyyaml_1648757097602/work
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1733217035951/work
scipy @ file:///home/conda/feedstock_root/build_artifacts/scipy_1653073865807/work
seaborn @ file:///home/conda/feedstock_root/build_artifacts/seaborn-split_1733730015268/work
sip @ file:///home/conda/feedstock_root/build_artifacts/sip_1646101340685/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1733380938961/work
sortedcontainers @ file:///home/conda/feedstock_root/build_artifacts/sortedcontainers_1738440353519/work
statsmodels @ file:///home/conda/feedstock_root/build_artifacts/statsmodels_1644535581977/work
taxcalc @ file:///home/conda/feedstock_root/build_artifacts/taxcalc_1727883798934/work
toml @ file:///home/conda/feedstock_root/build_artifacts/toml_1734091811753/work
tomli @ file:///home/conda/feedstock_root/build_artifacts/tomli_1733256695513/work
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1648827245914/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/bld/rattler-build_typing_extensions_1743201626/work
unicodedata2 @ file:///home/conda/feedstock_root/build_artifacts/unicodedata2_1649111919389/work
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1734859416348/work
xyzservices @ file:///home/conda/feedstock_root/build_artifacts/xyzservices_1737234886776/work
yarl @ file:///home/conda/feedstock_root/build_artifacts/yarl_1648966524636/work
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1732827521216/work
zstandard @ file:///croot/zstandard_1731356346222/work
| name: microdf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- aiohttp=3.8.1=py39hb9d737c_1
- aiosignal=1.3.2=pyhd8ed1ab_0
- async-timeout=4.0.3=pyhd8ed1ab_0
- attrs=25.3.0=pyh71513ae_0
- bokeh=3.3.4=pyhd8ed1ab_0
- brotli=1.0.9=h166bdaf_7
- brotli-bin=1.0.9=h166bdaf_7
- brotli-python=1.0.9=py39h5a03fae_7
- bzip2=1.0.8=h7f98852_4
- c-ares=1.19.1=h5eee18b_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2025.1.31=pyhd8ed1ab_0
- cffi=1.15.0=py39h4bc2ebd_0
- charset-normalizer=2.1.1=pyhd8ed1ab_0
- colorama=0.4.6=pyhd8ed1ab_1
- contourpy=1.0.2=py39hf939315_2
- cycler=0.12.1=pyhd8ed1ab_1
- cyrus-sasl=2.1.28=h52b45da_1
- dbus=1.13.18=hb2f20db_0
- exceptiongroup=1.2.2=pyhd8ed1ab_1
- expat=2.6.4=h6a678d5_0
- fontconfig=2.14.1=h55d465d_3
- fonttools=4.33.3=py39hb9d737c_0
- freetype=2.10.4=h0708190_1
- frozenlist=1.3.0=py39hb9d737c_1
- fsspec=2025.3.1=pyhd8ed1ab_0
- glib=2.78.4=h6a678d5_0
- glib-tools=2.78.4=h6a678d5_0
- gst-plugins-base=1.14.1=h6a678d5_1
- gstreamer=1.14.1=h5eee18b_1
- h2=4.2.0=pyhd8ed1ab_0
- hpack=4.1.0=pyhd8ed1ab_0
- hyperframe=6.1.0=pyhd8ed1ab_0
- icu=73.1=h6a678d5_0
- idna=3.10=pyhd8ed1ab_1
- importlib_resources=6.5.2=pyhd8ed1ab_0
- iniconfig=2.0.0=pyhd8ed1ab_1
- jbig=2.1=h7f98852_2003
- jinja2=3.1.6=pyhd8ed1ab_0
- jpeg=9e=h166bdaf_1
- kiwisolver=1.4.2=py39hf939315_1
- krb5=1.20.1=h143b758_1
- lcms2=2.12=hddcbb42_0
- ld_impl_linux-64=2.40=h12ee557_0
- lerc=2.2.1=h9c3ff4c_0
- libabseil=20250127.0=cxx17_h6a678d5_0
- libblas=3.9.0=16_linux64_openblas
- libbrotlicommon=1.0.9=h166bdaf_7
- libbrotlidec=1.0.9=h166bdaf_7
- libbrotlienc=1.0.9=h166bdaf_7
- libcblas=3.9.0=16_linux64_openblas
- libclang=14.0.6=default_hc6dbbc7_2
- libclang13=14.0.6=default_he11475f_2
- libcups=2.4.2=h2d74bed_1
- libcurl=8.12.1=hc9e6f67_0
- libdeflate=1.7=h7f98852_5
- libedit=3.1.20230828=h5eee18b_0
- libev=4.33=h516909a_1
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgfortran-ng=13.2.0=h69a702a_0
- libgfortran5=13.2.0=ha4646dd_0
- libglib=2.78.4=hdc74915_0
- libgomp=11.2.0=h1234567_1
- libiconv=1.17=h166bdaf_0
- liblapack=3.9.0=16_linux64_openblas
- libllvm14=14.0.6=hecde1de_4
- libnghttp2=1.57.0=h2d74bed_0
- libopenblas=0.3.21=h043d6bf_0
- libpng=1.6.39=h5eee18b_0
- libpq=17.4=hdbd6064_0
- libprotobuf=5.29.3=hc99497a_0
- libssh2=1.11.1=h251f7ec_0
- libstdcxx-ng=11.2.0=h1234567_1
- libtiff=4.3.0=hf544144_1
- libuuid=1.41.5=h5eee18b_0
- libwebp-base=1.2.2=h7f98852_1
- libxcb=1.15=h7f8727e_0
- libxkbcommon=1.0.3=he3ba5ed_0
- libxml2=2.13.5=hfdd30dd_0
- llvmlite=0.43.0=py39h6a678d5_1
- lz4-c=1.9.4=h6a678d5_1
- markupsafe=2.1.1=py39hb9d737c_1
- marshmallow=3.26.1=pyhd8ed1ab_0
- matplotlib=3.8.4=py39hf3d152e_2
- matplotlib-base=3.8.4=py39h1128e8f_0
- multidict=6.0.2=py39hb9d737c_1
- munkres=1.1.4=pyh9f0ad1d_0
- mysql=8.4.0=h721767e_2
- ncurses=6.4=h6a678d5_0
- numba=0.60.0=py39h6a678d5_1
- numpy=1.22.3=py39hc58783e_2
- olefile=0.47=pyhd8ed1ab_1
- openjpeg=2.4.0=hb52868f_1
- openldap=2.6.4=h42fbc30_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=pyhd8ed1ab_2
- pandas=1.4.2=py39h1832856_1
- paramtools=0.19.0=pyhd8ed1ab_1
- patsy=1.0.1=pyhd8ed1ab_1
- pcre2=10.42=hebb0a14_1
- pillow=8.3.2=py39ha612740_0
- pip=25.0.1=pyh8b19718_0
- pluggy=1.5.0=pyhd8ed1ab_1
- pycparser=2.22=pyh29332c3_1
- pyparsing=3.0.9=pyhd8ed1ab_0
- pyqt=5.15.4=py39h5a03fae_0
- pyqt5-sip=12.9.0=py39h5a03fae_0
- pysocks=1.7.1=pyha55dd90_7
- pytest=8.3.5=pyhd8ed1ab_0
- python=3.9.21=he870216_1
- python-dateutil=2.9.0.post0=pyhff2d567_1
- python_abi=3.9=2_cp39
- pytz=2025.2=pyhd8ed1ab_0
- pyyaml=6.0=py39hb9d737c_4
- qt-main=5.15.2=hb6262e9_12
- readline=8.2=h5eee18b_0
- requests=2.32.3=pyhd8ed1ab_1
- scipy=1.8.1=py39he49c0e8_0
- seaborn=0.13.2=hd8ed1ab_3
- seaborn-base=0.13.2=pyhd8ed1ab_3
- setuptools=75.8.0=py39h06a4308_0
- sip=6.5.1=py39he80948d_2
- six=1.17.0=pyhd8ed1ab_0
- sortedcontainers=2.4.0=pyhd8ed1ab_1
- sqlite=3.45.3=h5eee18b_0
- statsmodels=0.13.2=py39hce5d2b2_0
- taxcalc=4.3.0=pyhd8ed1ab_0
- tbb=2021.8.0=hdb19cb5_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd8ed1ab_1
- tomli=2.2.1=pyhd8ed1ab_1
- tornado=6.1=py39hb9d737c_3
- typing-extensions=4.13.0=h9fa5a19_1
- typing_extensions=4.13.0=pyh29332c3_1
- tzdata=2025a=h04d1e81_0
- unicodedata2=14.0.0=py39hb9d737c_1
- urllib3=2.3.0=pyhd8ed1ab_0
- wheel=0.45.1=py39h06a4308_0
- xyzservices=2025.1.0=pyhd8ed1ab_0
- xz=5.6.4=h5eee18b_1
- yaml=0.2.5=h7f98852_2
- yarl=1.7.2=py39hb9d737c_2
- zipp=3.21.0=pyhd8ed1ab_1
- zlib=1.2.13=h5eee18b_1
- zstandard=0.23.0=py39h2c38b39_1
- zstd=1.5.6=hc292b87_0
- pip:
- matplotlib-label-lines==0.8.0
- more-itertools==10.6.0
prefix: /opt/conda/envs/microdf
| [
"microdf/tests/test_tax.py::test_tax"
] | [] | [] | [] | MIT License | 5,593 | 441 | [
"microdf/__init__.py"
] |
|
gridsmartercities__aws-lambda-decorators-101 | a1e4059f360920d876d8310644b82dbe7a4967db | 2019-10-15 14:30:31 | a1e4059f360920d876d8310644b82dbe7a4967db | diff --git a/aws_lambda_decorators/classes.py b/aws_lambda_decorators/classes.py
index 7c66a47..ec6847a 100644
--- a/aws_lambda_decorators/classes.py
+++ b/aws_lambda_decorators/classes.py
@@ -179,11 +179,10 @@ class Parameter(ValidatedParameter, BaseParameter):
"""
for path_key in filter(lambda item: item != "", self._path.split(PATH_DIVIDER)):
real_key, annotation = Parameter.get_annotations_from_key(path_key)
- if real_key in dict_value:
+ if dict_value and real_key in dict_value:
dict_value = decode(annotation, dict_value[real_key])
else:
dict_value = self._default
- break
if not self._name:
self._name = real_key
diff --git a/setup.py b/setup.py
index 7ecd5a1..b183be9 100644
--- a/setup.py
+++ b/setup.py
@@ -3,7 +3,7 @@ from setuptools import setup, find_packages
LONG_DESCRIPTION = open("README.md").read()
setup(name="aws-lambda-decorators",
- version="0.41",
+ version="0.42",
description="A set of python decorators to simplify aws python lambda development",
long_description=LONG_DESCRIPTION,
long_description_content_type="text/markdown",
| Missing extract data can set incorrect name in Parameter class
With this bug it is possible to set a default value for an incorrect key when extracting a parameter. Incorrect key names can also persist across unit tests when the Parameter object is shared between tests (the default behaviour in Python's unittest).
Testing `extract_from_event` with the data:
```py
event = {
"body": "{}"
}
```
and handler:
```py
@extract_from_event(parameters=[Parameter(path="body[json]/optional/value", default="Hello")])
def handler(event, context, **kwargs):
return {
"statusCode": 200,
"body": json.dumps(kwargs)
}
```
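The faulty traversal can be reproduced in isolation. The sketch below is a hypothetical simplification of the path-walking loop (not the library's real `Parameter` class): on a missing key it must keep consuming path segments, so the *last* segment still names the extracted value instead of the first missing one:

```python
def extract(d, path, default=None):
    """Walk slash-separated path segments over nested dicts.

    On a missing key we substitute the default but keep iterating, so
    `name` ends up as the final segment ("value"), not the first missing
    one ("optional") -- which is the bug described in this issue.
    """
    name = None
    for key in filter(None, path.split("/")):
        name = key
        if isinstance(d, dict) and key in d:
            d = d[key]
        else:
            d = default  # no `break`: later segments must still update `name`
    return name, d

print(extract({"body": {}}, "body/optional/value", default="Hello"))
# → ('value', 'Hello')
```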
The expected body would be `{"value": "Hello"}`, however the actual response is currently `{"optional": "Hello"}` | gridsmartercities/aws-lambda-decorators | diff --git a/tests/test_decorators.py b/tests/test_decorators.py
index 206965d..3739638 100644
--- a/tests/test_decorators.py
+++ b/tests/test_decorators.py
@@ -1,9 +1,14 @@
# pylint:disable=too-many-lines
+from http import HTTPStatus
+import json
+from json import JSONDecodeError
import unittest
from unittest.mock import patch, MagicMock
-from json import JSONDecodeError
+from uuid import uuid4
+
from botocore.exceptions import ClientError
from schema import Schema, And, Optional
+
from aws_lambda_decorators.classes import ExceptionHandler, Parameter, SSMParameter, ValidatedParameter
from aws_lambda_decorators.decorators import extract, extract_from_event, extract_from_context, handle_exceptions, \
log, response_body_as_json, extract_from_ssm, validate, handle_all_exceptions, cors
@@ -1496,3 +1501,76 @@ class DecoratorsTests(unittest.TestCase): # noqa: pylint - too-many-public-meth
response = handler(dictionary, None)
self.assertEqual(None, response)
+
+ def test_extract_from_event_missing_parameter_path(self):
+ event = {
+ "body": "{}"
+ }
+
+ @extract_from_event(parameters=[Parameter(path="body[json]/optional/value", default="Hello")])
+ def handler(event, context, **kwargs): # noqa
+ return {
+ "statusCode": HTTPStatus.OK,
+ "body": json.dumps(kwargs)
+ }
+
+ expected_body = json.dumps({
+ "value": "Hello"
+ })
+
+ response = handler(event, None)
+
+ self.assertEqual(HTTPStatus.OK, response["statusCode"])
+ self.assertEqual(expected_body, response["body"])
+
+
+class IsolatedDecoderTests(unittest.TestCase):
+ # Tests have been named so they run in a specific order
+
+ ID_PATTERN = "^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$"
+
+ PARAMETERS = [
+ Parameter(path="pathParameters/test_id", validators=[Mandatory, RegexValidator(ID_PATTERN)]),
+ Parameter(path="body[json]/name", validators=[MaxLength(255)])
+ ]
+
+ def test_01_extract_from_event_400(self):
+ event = {
+ "pathParameters": {}
+ }
+
+ @extract_from_event(parameters=self.PARAMETERS, group_errors=True, allow_none_defaults=False)
+ def handler(event, context, **kwargs): # noqa
+ return kwargs
+
+ response = handler(event, None)
+ self.assertEqual(HTTPStatus.BAD_REQUEST, response["statusCode"])
+
+ def test_02_extract_from_event_200(self):
+ test_id = str(uuid4())
+
+ event = {
+ "pathParameters": {
+ "test_id": test_id
+ },
+ "body": json.dumps({
+ "name": "Gird"
+ })
+ }
+
+ @extract_from_event(parameters=self.PARAMETERS, group_errors=True, allow_none_defaults=False)
+ def handler(event, context, **kwargs): # noqa
+ return {
+ "statusCode": HTTPStatus.OK,
+ "body": json.dumps(kwargs)
+ }
+
+ expected_body = json.dumps({
+ "test_id": test_id,
+ "name": "Gird"
+ })
+
+ response = handler(event, None)
+
+ self.assertEqual(HTTPStatus.OK, response["statusCode"])
+ self.assertEqual(expected_body, response["body"])
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 2
} | 0.40 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"coverage",
"bandit",
"prospector",
"pylint_quotes",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc",
"pip install -q PyJWT boto3 prospector coverage bandit schema pylint_quotes"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | astroid==3.3.9
-e git+https://github.com/gridsmartercities/aws-lambda-decorators.git@a1e4059f360920d876d8310644b82dbe7a4967db#egg=aws_lambda_decorators
bandit==1.8.3
boto3==1.37.23
botocore==1.37.23
coverage==7.8.0
dill==0.3.9
dodgy==0.2.1
exceptiongroup==1.2.2
flake8==7.2.0
flake8-polyfill==1.0.2
gitdb==4.0.12
GitPython==3.1.44
iniconfig==2.1.0
isort==6.0.1
jmespath==1.0.1
markdown-it-py==3.0.0
mccabe==0.7.0
mdurl==0.1.2
packaging==24.2
pbr==6.1.1
pep8-naming==0.10.0
platformdirs==4.3.7
pluggy==1.5.0
prospector==1.16.1
pycodestyle==2.13.0
pydocstyle==6.3.0
pyflakes==3.3.2
Pygments==2.19.1
PyJWT==2.10.1
pylint==3.3.6
pylint-celery==0.3
pylint-django==2.6.1
pylint-plugin-utils==0.8.2
pylint-quotes==0.2.3
pytest==8.3.5
python-dateutil==2.9.0.post0
PyYAML==6.0.2
requirements-detector==1.3.2
rich==14.0.0
s3transfer==0.11.4
schema==0.7.7
semver==3.0.4
setoptconf-tmp==0.3.1
six==1.17.0
smmap==5.0.2
snowballstemmer==2.2.0
stevedore==5.4.1
toml==0.10.2
tomli==2.2.1
tomlkit==0.13.2
typing_extensions==4.13.0
urllib3==1.26.20
| name: aws-lambda-decorators
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- astroid==3.3.9
- bandit==1.8.3
- boto3==1.37.23
- botocore==1.37.23
- coverage==7.8.0
- dill==0.3.9
- dodgy==0.2.1
- exceptiongroup==1.2.2
- flake8==7.2.0
- flake8-polyfill==1.0.2
- gitdb==4.0.12
- gitpython==3.1.44
- iniconfig==2.1.0
- isort==6.0.1
- jmespath==1.0.1
- markdown-it-py==3.0.0
- mccabe==0.7.0
- mdurl==0.1.2
- packaging==24.2
- pbr==6.1.1
- pep8-naming==0.10.0
- platformdirs==4.3.7
- pluggy==1.5.0
- prospector==1.16.1
- pycodestyle==2.13.0
- pydocstyle==6.3.0
- pyflakes==3.3.2
- pygments==2.19.1
- pyjwt==2.10.1
- pylint==3.3.6
- pylint-celery==0.3
- pylint-django==2.6.1
- pylint-plugin-utils==0.8.2
- pylint-quotes==0.2.3
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- pyyaml==6.0.2
- requirements-detector==1.3.2
- rich==14.0.0
- s3transfer==0.11.4
- schema==0.7.7
- semver==3.0.4
- setoptconf-tmp==0.3.1
- six==1.17.0
- smmap==5.0.2
- snowballstemmer==2.2.0
- stevedore==5.4.1
- toml==0.10.2
- tomli==2.2.1
- tomlkit==0.13.2
- typing-extensions==4.13.0
- urllib3==1.26.20
prefix: /opt/conda/envs/aws-lambda-decorators
| [
"tests/test_decorators.py::DecoratorsTests::test_extract_from_event_missing_parameter_path",
"tests/test_decorators.py::IsolatedDecoderTests::test_02_extract_from_event_200"
] | [
"tests/test_decorators.py::DecoratorsTests::test_can_get_value_from_dict_with_jwt_by_path"
] | [
"tests/test_decorators.py::DecoratorsTests::test_body_dump_raises_exception_on_invalid_json",
"tests/test_decorators.py::DecoratorsTests::test_body_gets_dumped_as_json",
"tests/test_decorators.py::DecoratorsTests::test_can_add_name_to_parameter",
"tests/test_decorators.py::DecoratorsTests::test_can_get_dict_value_from_dict_by_path",
"tests/test_decorators.py::DecoratorsTests::test_can_get_value_from_dict_by_path",
"tests/test_decorators.py::DecoratorsTests::test_can_get_value_from_dict_with_json_by_path",
"tests/test_decorators.py::DecoratorsTests::test_can_not_add_non_pythonic_var_name_to_parameter",
"tests/test_decorators.py::DecoratorsTests::test_can_not_add_pythonic_keyword_as_name_to_parameter",
"tests/test_decorators.py::DecoratorsTests::test_can_not_validate_non_pythonic_var_name",
"tests/test_decorators.py::DecoratorsTests::test_can_output_custom_error_message_on_validation_failure",
"tests/test_decorators.py::DecoratorsTests::test_cors_adds_correct_headers_only",
"tests/test_decorators.py::DecoratorsTests::test_cors_cannot_decorate_non_dict",
"tests/test_decorators.py::DecoratorsTests::test_cors_invalid_max_age_logs_error",
"tests/test_decorators.py::DecoratorsTests::test_cors_no_headers_in_response",
"tests/test_decorators.py::DecoratorsTests::test_cors_with_headers_a_none_value_does_not_remove_headers",
"tests/test_decorators.py::DecoratorsTests::test_cors_with_headers_an_empty_value_does_not_remove_headers",
"tests/test_decorators.py::DecoratorsTests::test_cors_with_headers_in_response",
"tests/test_decorators.py::DecoratorsTests::test_cors_with_uppercase_headers_in_response",
"tests/test_decorators.py::DecoratorsTests::test_enum_validator_returns_true_when_none_is_passed_in",
"tests/test_decorators.py::DecoratorsTests::test_error_extracting_non_numeric_parameter_with_maximum",
"tests/test_decorators.py::DecoratorsTests::test_error_extracting_non_numeric_parameter_with_minimum",
"tests/test_decorators.py::DecoratorsTests::test_error_extracting_parameter_with_max_length",
"tests/test_decorators.py::DecoratorsTests::test_error_extracting_parameter_with_maximum",
"tests/test_decorators.py::DecoratorsTests::test_error_extracting_parameter_with_min_length",
"tests/test_decorators.py::DecoratorsTests::test_error_extracting_parameter_with_minimum",
"tests/test_decorators.py::DecoratorsTests::test_exception_handler_raises_exception",
"tests/test_decorators.py::DecoratorsTests::test_exception_handler_raises_exception_with_status_code",
"tests/test_decorators.py::DecoratorsTests::test_exception_handler_raises_exception_without_friendly_message",
"tests/test_decorators.py::DecoratorsTests::test_exit_on_error_false_bundles_all_errors",
"tests/test_decorators.py::DecoratorsTests::test_extract_does_not_raise_an_error_on_missing_optional_key",
"tests/test_decorators.py::DecoratorsTests::test_extract_does_not_raise_an_error_on_valid_regex_key",
"tests/test_decorators.py::DecoratorsTests::test_extract_from_context_calls_function_with_extra_kwargs",
"tests/test_decorators.py::DecoratorsTests::test_extract_from_event_calls_function_with_extra_kwargs",
"tests/test_decorators.py::DecoratorsTests::test_extract_from_event_calls_function_with_extra_kwargs_bool_false",
"tests/test_decorators.py::DecoratorsTests::test_extract_from_event_calls_function_with_extra_kwargs_bool_true",
"tests/test_decorators.py::DecoratorsTests::test_extract_mandatory_parameter_with_length_range",
"tests/test_decorators.py::DecoratorsTests::test_extract_mandatory_parameter_with_max_length",
"tests/test_decorators.py::DecoratorsTests::test_extract_mandatory_parameter_with_maximum",
"tests/test_decorators.py::DecoratorsTests::test_extract_mandatory_parameter_with_min_length",
"tests/test_decorators.py::DecoratorsTests::test_extract_mandatory_parameter_with_minimum",
"tests/test_decorators.py::DecoratorsTests::test_extract_mandatory_parameter_with_range",
"tests/test_decorators.py::DecoratorsTests::test_extract_nulls_are_returned",
"tests/test_decorators.py::DecoratorsTests::test_extract_nulls_default_on_decorator_takes_precedence",
"tests/test_decorators.py::DecoratorsTests::test_extract_nulls_preserve_signature_defaults",
"tests/test_decorators.py::DecoratorsTests::test_extract_nulls_raises_exception_when_extracted_from_kwargs_if_allow_none_defaults_is_false",
"tests/test_decorators.py::DecoratorsTests::test_extract_optional_null_parameter_with_max_length",
"tests/test_decorators.py::DecoratorsTests::test_extract_optional_null_parameter_with_maximum",
"tests/test_decorators.py::DecoratorsTests::test_extract_optional_null_parameter_with_min_length",
"tests/test_decorators.py::DecoratorsTests::test_extract_optional_null_parameter_with_minimum",
"tests/test_decorators.py::DecoratorsTests::test_extract_parameter_with_mainimum_length",
"tests/test_decorators.py::DecoratorsTests::test_extract_parameter_with_maximum",
"tests/test_decorators.py::DecoratorsTests::test_extract_parameter_with_maximum_length",
"tests/test_decorators.py::DecoratorsTests::test_extract_parameter_with_minimum",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_empty_path",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_invalid_bool_type",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_invalid_dictionary_schema",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_invalid_float_type",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_invalid_regex_key",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_missing_mandatory_key",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_missing_mandatory_key_with_regex",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_type_error",
"tests/test_decorators.py::DecoratorsTests::test_extract_returns_400_on_value_not_in_list",
"tests/test_decorators.py::DecoratorsTests::test_extract_succeeds_with_valid_type_validation",
"tests/test_decorators.py::DecoratorsTests::test_extract_suceeds_with_valid_enum_validation",
"tests/test_decorators.py::DecoratorsTests::test_extract_valid_dictionary_schema",
"tests/test_decorators.py::DecoratorsTests::test_get_ssm_parameter_empty_key_container_raises_key_error",
"tests/test_decorators.py::DecoratorsTests::test_get_ssm_parameter_missing_parameter_raises_client_error",
"tests/test_decorators.py::DecoratorsTests::test_get_valid_ssm_parameter",
"tests/test_decorators.py::DecoratorsTests::test_get_valid_ssm_parameter_custom_name",
"tests/test_decorators.py::DecoratorsTests::test_get_valid_ssm_parameters",
"tests/test_decorators.py::DecoratorsTests::test_group_errors_true_on_extract_from_context_returns_ok",
"tests/test_decorators.py::DecoratorsTests::test_group_errors_true_on_extract_from_event_returns_ok",
"tests/test_decorators.py::DecoratorsTests::test_group_errors_true_returns_ok",
"tests/test_decorators.py::DecoratorsTests::test_handle_all_exceptions",
"tests/test_decorators.py::DecoratorsTests::test_log_decorator_can_log_params",
"tests/test_decorators.py::DecoratorsTests::test_log_decorator_can_log_response",
"tests/test_decorators.py::DecoratorsTests::test_mandatory_parameter_with_default_returns_error_on_empty",
"tests/test_decorators.py::DecoratorsTests::test_raises_decode_error_convert_json_string_to_dict",
"tests/test_decorators.py::DecoratorsTests::test_response_as_json_invalid_application_does_nothing",
"tests/test_decorators.py::DecoratorsTests::test_type_validator_returns_true_when_none_is_passed_in",
"tests/test_decorators.py::DecoratorsTests::test_validate_does_not_raise_an_error_on_valid_variables",
"tests/test_decorators.py::DecoratorsTests::test_validate_raises_an_error_on_invalid_variables",
"tests/test_decorators.py::DecoratorsTests::test_validate_raises_multiple_errors_on_exit_on_error_false",
"tests/test_decorators.py::DecoratorsTests::test_values_are_stringified_in_max_length_validator",
"tests/test_decorators.py::DecoratorsTests::test_values_are_stringified_in_min_length_validator",
"tests/test_decorators.py::IsolatedDecoderTests::test_01_extract_from_event_400"
] | [] | MIT License | 5,596 | 317 | [
"aws_lambda_decorators/classes.py",
"setup.py"
] |
|
iterative__dvc-2612 | 0bfb3c0ab2ee229762d551bb622258b99e889ceb | 2019-10-15 20:09:56 | 0bfb3c0ab2ee229762d551bb622258b99e889ceb | diff --git a/dvc/logger.py b/dvc/logger.py
index 8e11f0e62..9fab58634 100644
--- a/dvc/logger.py
+++ b/dvc/logger.py
@@ -19,7 +19,7 @@ class LoggingException(Exception):
class ExcludeErrorsFilter(logging.Filter):
def filter(self, record):
- return record.levelno < logging.ERROR
+ return record.levelno < logging.WARNING
class ColorFormatter(logging.Formatter):
@@ -182,7 +182,7 @@ def setup(level=logging.INFO):
},
"console_errors": {
"class": "dvc.logger.LoggerHandler",
- "level": "ERROR",
+ "level": "WARNING",
"formatter": "color",
"stream": "ext://sys.stderr",
},
| Warning messages in stdout
```
$ dvc -V
0.60.1+39104a
$ dvc pipeline show > OUT
$ cat OUT
WARNING: assuming default target 'Dvcfile'.
data/data.xml.dvc
prepare.dvc
featurize.dvc
train.dvc
evaluate.dvc
Dvcfile
```
The warning message `WARNING: assuming default target 'Dvcfile'.` is supposed to be in stderr, not stdout. This seems to work just fine for error messages, but not for warnings.
Many other commands might have the same warning issue. It would be great if this can be fixed holistically. | iterative/dvc | diff --git a/tests/unit/test_logger.py b/tests/unit/test_logger.py
index 57c876b3d..3560018a9 100644
--- a/tests/unit/test_logger.py
+++ b/tests/unit/test_logger.py
@@ -179,3 +179,10 @@ class TestColorFormatter:
logger.info("some info")
captured = capsys.readouterr()
assert captured.out == ""
+
+
+def test_handlers():
+ stdout, stderr = logger.handlers
+
+ assert stdout.level == logging.DEBUG
+ assert stderr.level == logging.WARNING
| {
"commit_name": "merge_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 1
} | 0.62 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-timeout",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"flaky",
"mock",
"xmltodict",
"awscli",
"google-compute-engine",
"Pygments",
"collective.checkdocs",
"flake8",
"psutil",
"flake8-docstrings",
"pydocstyle",
"jaraco.windows",
"mock-ssh-server",
"moto",
"rangehttpserver"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aliyun-python-sdk-core==2.16.0
aliyun-python-sdk-core-v3==2.13.33
aliyun-python-sdk-kms==2.16.5
appdirs==1.4.4
asciimatics==1.14.0
atpublic==3.1.2
attrs @ file:///croot/attrs_1668696182826/work
autocommand==2.2.2
awscli==1.31.13
azure-common==1.1.28
azure-storage-blob==2.1.0
azure-storage-common==2.1.0
bcrypt==4.2.1
boto==2.49.0
boto3==1.9.115
botocore==1.12.253
cachetools==4.2.4
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
charset-normalizer==3.4.1
collective.checkdocs==0.2
colorama==0.4.4
configobj==5.0.9
configparser==5.3.0
coverage==7.2.7
crcmod==1.7
cryptography==44.0.2
distro==1.9.0
docutils==0.15.2
-e git+https://github.com/iterative/dvc.git@0bfb3c0ab2ee229762d551bb622258b99e889ceb#egg=dvc
execnet==2.0.2
flake8==5.0.4
flake8-docstrings==1.7.0
flaky==3.8.1
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
flufl.lock==7.1.1
funcy==2.0
future==1.0.0
gitdb==4.0.12
gitdb2==4.0.2
GitPython==3.1.44
google-api-core==2.10.2
google-auth==1.35.0
google-cloud-core==1.7.3
google-cloud-storage==1.19.0
google-compute-engine==2.8.13
google-crc32c==1.5.0
google-resumable-media==2.7.2
googleapis-common-protos==1.69.2
grandalf==0.6
humanize==4.6.0
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.12.0
inflect==6.0.5
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
jaraco.classes==3.2.3
jaraco.collections==4.2.0
jaraco.context==4.3.0
jaraco.functools==3.7.0
jaraco.structures==2.1.0
jaraco.text==3.11.1
jaraco.ui==2.3.0
jaraco.windows==5.7.0
Jinja2==3.1.6
jmespath==0.10.0
jsonpath-ng==1.7.0
MarkupSafe==2.1.5
mccabe==0.7.0
mock==5.2.0
mock-ssh-server==0.9.1
more-itertools==9.1.0
moto==4.2.14
nanotime==0.5.2
networkx==2.6.3
numpy==1.21.6
oss2==2.6.1
packaging @ file:///croot/packaging_1671697413597/work
paramiko==3.5.1
path==16.6.0
pathspec==0.11.2
Pillow==9.5.0
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
ply==3.11
protobuf==4.24.4
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyarrow==0.14.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycodestyle==2.9.1
pycparser==2.21
pycryptodome==3.22.0
pydantic==1.10.21
pydocstyle==6.3.0
pyfiglet==0.8.post1
pyflakes==2.5.0
Pygments==2.17.2
PyNaCl==1.5.0
pyparsing==3.1.4
pytest==7.1.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-timeout==2.3.1
pytest-xdist==3.5.0
python-dateutil==2.9.0.post0
PyYAML==6.0.1
rangehttpserver==1.4.0
requests==2.31.0
responses==0.23.3
rsa==4.7.2
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.8
s3transfer==0.2.1
schema==0.7.7
shortuuid==1.0.13
six==1.17.0
smmap==5.0.2
snowballstemmer==2.2.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
tqdm==4.67.1
treelib==1.7.1
types-PyYAML==6.0.12.12
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
urllib3==1.25.11
wcwidth==0.2.13
Werkzeug==2.2.3
xmltodict==0.14.2
zipp @ file:///croot/zipp_1672387121353/work
| name: dvc
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- aliyun-python-sdk-core==2.16.0
- aliyun-python-sdk-core-v3==2.13.33
- aliyun-python-sdk-kms==2.16.5
- appdirs==1.4.4
- asciimatics==1.14.0
- atpublic==3.1.2
- autocommand==2.2.2
- awscli==1.31.13
- azure-common==1.1.28
- azure-storage-blob==2.1.0
- azure-storage-common==2.1.0
- bcrypt==4.2.1
- boto==2.49.0
- boto3==1.9.115
- botocore==1.12.253
- cachetools==4.2.4
- cffi==1.15.1
- charset-normalizer==3.4.1
- collective-checkdocs==0.2
- colorama==0.4.4
- configobj==5.0.9
- configparser==5.3.0
- coverage==7.2.7
- crcmod==1.7
- cryptography==44.0.2
- distro==1.9.0
- docutils==0.15.2
- dvc==0.62.1+0bfb3c
- execnet==2.0.2
- flake8==5.0.4
- flake8-docstrings==1.7.0
- flaky==3.8.1
- flufl-lock==7.1.1
- funcy==2.0
- future==1.0.0
- gitdb==4.0.12
- gitdb2==4.0.2
- gitpython==3.1.44
- google-api-core==2.10.2
- google-auth==1.35.0
- google-cloud-core==1.7.3
- google-cloud-storage==1.19.0
- google-compute-engine==2.8.13
- google-crc32c==1.5.0
- google-resumable-media==2.7.2
- googleapis-common-protos==1.69.2
- grandalf==0.6
- humanize==4.6.0
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.12.0
- inflect==6.0.5
- jaraco-classes==3.2.3
- jaraco-collections==4.2.0
- jaraco-context==4.3.0
- jaraco-functools==3.7.0
- jaraco-structures==2.1.0
- jaraco-text==3.11.1
- jaraco-ui==2.3.0
- jaraco-windows==5.7.0
- jinja2==3.1.6
- jmespath==0.10.0
- jsonpath-ng==1.7.0
- markupsafe==2.1.5
- mccabe==0.7.0
- mock==5.2.0
- mock-ssh-server==0.9.1
- more-itertools==9.1.0
- moto==4.2.14
- nanotime==0.5.2
- networkx==2.6.3
- numpy==1.21.6
- oss2==2.6.1
- paramiko==3.5.1
- path==16.6.0
- pathspec==0.11.2
- pillow==9.5.0
- ply==3.11
- protobuf==4.24.4
- psutil==7.0.0
- pyarrow==0.14.0
- pyasn1==0.5.1
- pyasn1-modules==0.3.0
- pycodestyle==2.9.1
- pycparser==2.21
- pycryptodome==3.22.0
- pydantic==1.10.21
- pydocstyle==6.3.0
- pyfiglet==0.8.post1
- pyflakes==2.5.0
- pygments==2.17.2
- pynacl==1.5.0
- pyparsing==3.1.4
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-timeout==2.3.1
- pytest-xdist==3.5.0
- python-dateutil==2.9.0.post0
- pyyaml==6.0.1
- rangehttpserver==1.4.0
- requests==2.31.0
- responses==0.23.3
- rsa==4.7.2
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.8
- s3transfer==0.2.1
- schema==0.7.7
- shortuuid==1.0.13
- six==1.17.0
- smmap==5.0.2
- snowballstemmer==2.2.0
- tqdm==4.67.1
- treelib==1.7.1
- types-pyyaml==6.0.12.12
- urllib3==1.25.11
- wcwidth==0.2.13
- werkzeug==2.2.3
- xmltodict==0.14.2
prefix: /opt/conda/envs/dvc
| [
"tests/unit/test_logger.py::test_handlers"
] | [] | [
"tests/unit/test_logger.py::TestColorFormatter::test_debug",
"tests/unit/test_logger.py::TestColorFormatter::test_info",
"tests/unit/test_logger.py::TestColorFormatter::test_warning",
"tests/unit/test_logger.py::TestColorFormatter::test_error",
"tests/unit/test_logger.py::TestColorFormatter::test_exception",
"tests/unit/test_logger.py::TestColorFormatter::test_exception_with_description_and_without_message",
"tests/unit/test_logger.py::TestColorFormatter::test_exception_with_description_and_message",
"tests/unit/test_logger.py::TestColorFormatter::test_exception_under_verbose",
"tests/unit/test_logger.py::TestColorFormatter::test_nested_exceptions",
"tests/unit/test_logger.py::TestColorFormatter::test_progress_awareness"
] | [] | Apache License 2.0 | 5,599 | 188 | [
"dvc/logger.py"
] |
|
datosgobar__pydatajson-293 | 7ac9d34f7ecfe7b9804eba44626c7ac15606f09a | 2019-10-16 19:06:26 | adb85a7de7dfa073ddf9817a5fe2d125f9ce4e54 | diff --git a/pydatajson/constants.py b/pydatajson/constants.py
index dc49ecb..cbfbd03 100644
--- a/pydatajson/constants.py
+++ b/pydatajson/constants.py
@@ -1,6 +1,7 @@
REQUESTS_TIMEOUT = 30
DEFAULT_TIMEZONE = "America/Buenos_Aires"
-VALID_STATUS_CODES = [200, 203, 302]
+INVALID_STATUS_CODES_REGEX = ["^4[0-9]+$", "^5[0-9]+$"]
+EXCEPTION_STATUS_CODES = [429]
CANT_THREADS_BROKEN_URL_VALIDATOR = 10
diff --git a/pydatajson/helpers.py b/pydatajson/helpers.py
index 638a9a8..dde2dea 100644
--- a/pydatajson/helpers.py
+++ b/pydatajson/helpers.py
@@ -18,13 +18,15 @@ from contextlib import contextmanager
import requests
from openpyxl import load_workbook
-from requests import RequestException
+from requests import RequestException, Timeout
from six.moves.urllib_parse import urlparse
from six import string_types, iteritems
from unidecode import unidecode
-from pydatajson.constants import VALID_STATUS_CODES
+from pydatajson.constants import \
+ INVALID_STATUS_CODES_REGEX, \
+ EXCEPTION_STATUS_CODES
from pydatajson.download import download_to_file
logger = logging.getLogger('pydatajson.helpers')
@@ -571,6 +573,13 @@ def fields_to_uppercase(fields):
def is_working_url(url):
try:
response = requests.head(url, timeout=1)
- return response.status_code in VALID_STATUS_CODES, response.status_code
+ matches = []
+ if response.status_code not in EXCEPTION_STATUS_CODES:
+ matches = \
+ [re.match(pattern, str(response.status_code)) is not None
+ for pattern in INVALID_STATUS_CODES_REGEX]
+ return True not in matches, response.status_code
+ except Timeout:
+ return False, 408
except (RequestException, Exception):
return False, None
| Modify the broken-URL analysis criterion: a URL is valid unless it returns 4xx or 5xx
Exceptions: 429 "Too Many Requests" must not be considered broken. | datosgobar/pydatajson
index c5edfa6..5a7e91b 100644
--- a/tests/test_helpers.py
+++ b/tests/test_helpers.py
@@ -294,15 +294,23 @@ class HelpersTestCase(unittest.TestCase):
req_mock.head('http://test.com/', status_code=400)
self.assertEqual((False, 400), is_working_url('http://test.com/'))
+ @requests_mock.Mocker()
+ def test_validate_too_many_requests_response(self, req_mock):
+ too_many_request_status_code = 429
+ req_mock.head('http://test.com/',
+ status_code=too_many_request_status_code)
+ self.assertEqual((True, too_many_request_status_code),
+ is_working_url('http://test.com/'))
+
@requests_mock.Mocker()
def test_validate_url_with_exception(self, req_mock):
req_mock.head('http://test.com/', exc=ConnectionError)
self.assertEqual((False, None), is_working_url('http://test.com/'))
@requests_mock.Mocker()
- def validate_url_with_timeout(self, req_mock):
+ def test_validate_url_with_timeout(self, req_mock):
req_mock.head('http://test.com/', exc=Timeout)
- self.assertEqual((False, None), is_working_url('http://test.com/'))
+ self.assertEqual((False, 408), is_working_url('http://test.com/'))
def test_validate_malformed_values(self):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 2
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"nose",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
ckanapi==4.0
docopt==0.6.2
et-xmlfile==1.1.0
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
isodate==0.6.0
jsonschema==2.6.0
nose==1.3.7
openpyxl==3.1.3
packaging==21.3
pluggy==1.0.0
py==1.11.0
-e git+https://github.com/datosgobar/pydatajson.git@7ac9d34f7ecfe7b9804eba44626c7ac15606f09a#egg=pydatajson
pyparsing==3.1.4
pytest==7.0.1
python-dateutil==2.8.0
pytz==2025.2
requests==2.27.1
requests-mock==1.12.1
rfc3987==1.3.7
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
unicodecsv==0.14.1
Unidecode==0.4.21
urllib3==1.26.20
zipp==3.6.0
| name: pydatajson
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- ckanapi==4.0
- docopt==0.6.2
- et-xmlfile==1.1.0
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- isodate==0.6.0
- jsonschema==2.6.0
- nose==1.3.7
- openpyxl==3.1.3
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-dateutil==2.8.0
- pytz==2025.2
- requests==2.27.1
- requests-mock==1.12.1
- rfc3987==1.3.7
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- unicodecsv==0.14.1
- unidecode==0.04.21
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/pydatajson
| [
"tests/test_helpers.py::HelpersTestCase::test_validate_too_many_requests_response",
"tests/test_helpers.py::HelpersTestCase::test_validate_url_with_timeout"
] | [] | [
"tests/test_helpers.py::HelpersTestCase::test_add_dicts",
"tests/test_helpers.py::HelpersTestCase::test_fields_to_uppercase_keeps_uppercase_fields_intact",
"tests/test_helpers.py::HelpersTestCase::test_fields_to_uppercase_modifies_all_lowercase_fields",
"tests/test_helpers.py::HelpersTestCase::test_fields_to_uppercase_modifies_mixed_fields",
"tests/test_helpers.py::HelpersTestCase::test_fields_to_uppercase_returns_unique_uppercase_keys",
"tests/test_helpers.py::HelpersTestCase::test_is_list_of_matching_dicts_with_list_of_not_dicts",
"tests/test_helpers.py::HelpersTestCase::test_is_list_of_matching_dicts_with_matched_dicts",
"tests/test_helpers.py::HelpersTestCase::test_is_list_of_matching_dicts_with_mismatched_dicts",
"tests/test_helpers.py::HelpersTestCase::test_is_list_of_matching_dicts_with_not_list",
"tests/test_helpers.py::HelpersTestCase::test_parse_date_string",
"tests/test_helpers.py::HelpersTestCase::test_parse_repeating_time_interval_to_days",
"tests/test_helpers.py::HelpersTestCase::test_parse_repeating_time_interval_to_str",
"tests/test_helpers.py::HelpersTestCase::test_sheet_to_table",
"tests/test_helpers.py::HelpersTestCase::test_string_to_list_alternative_separator",
"tests/test_helpers.py::HelpersTestCase::test_string_to_list_default_separator",
"tests/test_helpers.py::HelpersTestCase::test_title_to_name",
"tests/test_helpers.py::HelpersTestCase::test_traverse_dict_correct_keys",
"tests/test_helpers.py::HelpersTestCase::test_traverse_dict_index_out_of_range",
"tests/test_helpers.py::HelpersTestCase::test_traverse_dict_missing_key",
"tests/test_helpers.py::HelpersTestCase::test_traverse_dict_string_index_for_list",
"tests/test_helpers.py::HelpersTestCase::test_validate_invalid_url",
"tests/test_helpers.py::HelpersTestCase::test_validate_malformed_values",
"tests/test_helpers.py::HelpersTestCase::test_validate_url_with_exception",
"tests/test_helpers.py::HelpersTestCase::test_validate_valid_url"
] | [] | MIT License | 5,606 | 478 | [
"pydatajson/constants.py",
"pydatajson/helpers.py"
] |
|
peter-wangxu__persist-queue-115 | bff90abfae0011427de552bb0243d521d6fa79ce | 2019-10-17 15:21:05 | bff90abfae0011427de552bb0243d521d6fa79ce | diff --git a/persistqueue/queue.py b/persistqueue/queue.py
index dac8ec6..0a1e301 100644
--- a/persistqueue/queue.py
+++ b/persistqueue/queue.py
@@ -138,6 +138,9 @@ class Queue(object):
def _qsize(self):
return self.info['size']
+ def empty(self):
+ return self.qsize() == 0
+
def put(self, item, block=True, timeout=None):
"Interface for putting item in disk-based queue."
self.not_full.acquire()
diff --git a/persistqueue/sqlackqueue.py b/persistqueue/sqlackqueue.py
index d91c033..45ea0f4 100644
--- a/persistqueue/sqlackqueue.py
+++ b/persistqueue/sqlackqueue.py
@@ -215,6 +215,9 @@ class SQLiteAckQueue(sqlbase.SQLiteBase):
def qsize(self):
return self.size
+ def empty(self):
+ return self.size == 0
+
def __len__(self):
return self.size
diff --git a/persistqueue/sqlqueue.py b/persistqueue/sqlqueue.py
index 1471b04..f01eec7 100644
--- a/persistqueue/sqlqueue.py
+++ b/persistqueue/sqlqueue.py
@@ -115,6 +115,9 @@ class SQLiteQueue(sqlbase.SQLiteBase):
def qsize(self):
return self.size
+ def empty(self):
+ return self.size == 0
+
def __len__(self):
return self.size
| Add #.empty
Hello @peter-wangxu, would you accept a PR adding the #.empty method to your queue implementations?
We could also add #.full but since the SQLite queue does not support `maxsize` I guess it would not make much sense. | peter-wangxu/persist-queue | diff --git a/persistqueue/tests/test_queue.py b/persistqueue/tests/test_queue.py
index a6b69c8..f528ba3 100644
--- a/persistqueue/tests/test_queue.py
+++ b/persistqueue/tests/test_queue.py
@@ -46,6 +46,16 @@ class PersistTest(unittest.TestCase):
q.task_done()
del q
+ def test_empty(self):
+ q = Queue(self.path)
+ self.assertEqual(q.empty(), True)
+
+ q.put('var1')
+ self.assertEqual(q.empty(), False)
+
+ q.get()
+ self.assertEqual(q.empty(), True)
+
@params(*serializer_params)
def test_open_close_1000(self, serializer):
"""Write 1000 items, close, reopen checking if all items are there"""
diff --git a/persistqueue/tests/test_sqlackqueue.py b/persistqueue/tests/test_sqlackqueue.py
index 465a675..2914eec 100644
--- a/persistqueue/tests/test_sqlackqueue.py
+++ b/persistqueue/tests/test_sqlackqueue.py
@@ -35,6 +35,16 @@ class SQLite3AckQueueTest(unittest.TestCase):
# assert with negative timeout
self.assertRaises(ValueError, q.get, block=True, timeout=-1.0)
+ def test_empty(self):
+ q = SQLiteAckQueue(self.path, auto_commit=self.auto_commit)
+ self.assertEqual(q.empty(), True)
+
+ q.put('first')
+ self.assertEqual(q.empty(), False)
+
+ q.get()
+ self.assertEqual(q.empty(), True)
+
def test_open_close_single(self):
"""Write 1 item, close, reopen checking if same item is there"""
diff --git a/persistqueue/tests/test_sqlqueue.py b/persistqueue/tests/test_sqlqueue.py
index a2fdaef..d71d661 100644
--- a/persistqueue/tests/test_sqlqueue.py
+++ b/persistqueue/tests/test_sqlqueue.py
@@ -36,6 +36,16 @@ class SQLite3QueueTest(unittest.TestCase):
self.assertRaises(ValueError, q.get, block=True, timeout=-1.0)
del q
+ def test_empty(self):
+ q = SQLiteQueue(self.path, auto_commit=self.auto_commit)
+ self.assertEqual(q.empty(), True)
+
+ q.put('first')
+ self.assertEqual(q.empty(), False)
+
+ q.get()
+ self.assertEqual(q.empty(), True)
+
def test_open_close_single(self):
"""Write 1 item, close, reopen checking if same item is there"""
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 3
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[extra]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"nose2",
"mock",
"flake8",
"eventlet",
"msgpack",
"coverage",
"cov_core",
"virtualenv",
"pytest"
],
"pre_install": null,
"python": "3.9",
"reqs_path": [
"extra-requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | cov-core==1.15.0
coverage==7.8.0
distlib==0.3.9
dnspython==2.7.0
eventlet==0.39.1
exceptiongroup==1.2.2
filelock==3.18.0
flake8==7.2.0
greenlet==3.1.1
iniconfig==2.1.0
mccabe==0.7.0
mock==5.2.0
msgpack==1.1.0
nose2==0.15.1
packaging==24.2
-e git+https://github.com/peter-wangxu/persist-queue.git@bff90abfae0011427de552bb0243d521d6fa79ce#egg=persist_queue
platformdirs==4.3.7
pluggy==1.5.0
pycodestyle==2.13.0
pyflakes==3.3.1
pytest==8.3.5
tomli==2.2.1
virtualenv==20.29.3
| name: persist-queue
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- cov-core==1.15.0
- coverage==7.8.0
- distlib==0.3.9
- dnspython==2.7.0
- eventlet==0.39.1
- exceptiongroup==1.2.2
- filelock==3.18.0
- flake8==7.2.0
- greenlet==3.1.1
- iniconfig==2.1.0
- mccabe==0.7.0
- mock==5.2.0
- msgpack==1.1.0
- nose2==0.15.1
- packaging==24.2
- platformdirs==4.3.7
- pluggy==1.5.0
- pycodestyle==2.13.0
- pyflakes==3.3.1
- pytest==8.3.5
- tomli==2.2.1
- virtualenv==20.29.3
prefix: /opt/conda/envs/persist-queue
| [
"persistqueue/tests/test_queue.py::PersistTest::test_empty",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_empty",
"persistqueue/tests/test_sqlackqueue.py::SQLite3QueueInMemory::test_empty",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_empty",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_empty",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueInMemory::test_empty"
] | [
"persistqueue/tests/test_queue.py::PersistTest::test_clear_tail_file",
"persistqueue/tests/test_queue.py::PersistTest::test_garbage_on_head",
"persistqueue/tests/test_queue.py::PersistTest::test_get_timeout",
"persistqueue/tests/test_queue.py::PersistTest::test_get_timeout_negative",
"persistqueue/tests/test_queue.py::PersistTest::test_multi_threaded",
"persistqueue/tests/test_queue.py::PersistTest::test_open_close_1000",
"persistqueue/tests/test_queue.py::PersistTest::test_open_close_single",
"persistqueue/tests/test_queue.py::PersistTest::test_partial_write",
"persistqueue/tests/test_queue.py::PersistTest::test_put_block_and_wait",
"persistqueue/tests/test_queue.py::PersistTest::test_put_maxsize_reached",
"persistqueue/tests/test_queue.py::PersistTest::test_put_nowait",
"persistqueue/tests/test_queue.py::PersistTest::test_put_timeout_negative",
"persistqueue/tests/test_queue.py::PersistTest::test_put_timeout_reached",
"persistqueue/tests/test_queue.py::PersistTest::test_random_read_write",
"persistqueue/tests/test_queue.py::PersistTest::test_task_done_too_many_times"
] | [
"persistqueue/tests/test_queue.py::PersistTest::test_protocol",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_ack_and_clear",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_ack_unack_ack_failed",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_ack_unknown_item",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_multi_threaded_multi_producer",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_multi_threaded_parallel",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_multiple_consumers",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_open_close_1000",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_open_close_single",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_protocol_1",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_protocol_2",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_put_0",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_raise_empty",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_random_read_write",
"persistqueue/tests/test_sqlackqueue.py::SQLite3AckQueueTest::test_resume_unack",
"persistqueue/tests/test_sqlackqueue.py::SQLite3QueueInMemory::test_ack_and_clear",
"persistqueue/tests/test_sqlackqueue.py::SQLite3QueueInMemory::test_ack_unack_ack_failed",
"persistqueue/tests/test_sqlackqueue.py::SQLite3QueueInMemory::test_ack_unknown_item",
"persistqueue/tests/test_sqlackqueue.py::SQLite3QueueInMemory::test_protocol_1",
"persistqueue/tests/test_sqlackqueue.py::SQLite3QueueInMemory::test_put_0",
"persistqueue/tests/test_sqlackqueue.py::SQLite3QueueInMemory::test_raise_empty",
"persistqueue/tests/test_sqlackqueue.py::SQLite3QueueInMemory::test_random_read_write",
"persistqueue/tests/test_sqlackqueue.py::FILOSQLite3AckQueueTest::test_open_close_1000",
"persistqueue/tests/test_sqlackqueue.py::SQLite3UniqueAckQueueTest::test_add_duplicate_item",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_json_serializer",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_multi_threaded_multi_producer",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_multi_threaded_parallel",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_multiple_consumers",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_open_close_1000",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_open_close_single",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_protocol_1",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_protocol_2",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_put_0",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_raise_empty",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_random_read_write",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueTest::test_task_done_with_restart",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_json_serializer",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_multi_threaded_multi_producer",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_multi_threaded_parallel",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_open_close_1000",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_open_close_single",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_protocol_1",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_protocol_2",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_put_0",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_raise_empty",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_random_read_write",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueNoAutoCommitTest::test_task_done_with_restart",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueInMemory::test_json_serializer",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueInMemory::test_protocol_1",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueInMemory::test_put_0",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueInMemory::test_raise_empty",
"persistqueue/tests/test_sqlqueue.py::SQLite3QueueInMemory::test_random_read_write",
"persistqueue/tests/test_sqlqueue.py::FILOSQLite3QueueTest::test_open_close_1000",
"persistqueue/tests/test_sqlqueue.py::FILOSQLite3QueueNoAutoCommitTest::test_open_close_1000",
"persistqueue/tests/test_sqlqueue.py::SQLite3UniqueQueueTest::test_add_duplicate_item",
"persistqueue/tests/test_sqlqueue.py::SQLite3UniqueQueueTest::test_multiple_consumers",
"persistqueue/tests/test_sqlqueue.py::SQLite3UniqueQueueTest::test_unique_dictionary_serialization_json",
"persistqueue/tests/test_sqlqueue.py::SQLite3UniqueQueueTest::test_unique_dictionary_serialization_msgpack",
"persistqueue/tests/test_sqlqueue.py::SQLite3UniqueQueueTest::test_unique_dictionary_serialization_pickle"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,609 | 383 | [
"persistqueue/queue.py",
"persistqueue/sqlackqueue.py",
"persistqueue/sqlqueue.py"
] |
|
qutech__qupulse-478 | ec96e04357223c6289f02a2dcbe0bee9852573c3 | 2019-10-18 13:20:38 | d83a74aad86f8e9a6c7ef90f458cd5dc4d7589bd | diff --git a/qupulse/_program/_loop.py b/qupulse/_program/_loop.py
index e0a4e6c9..46795839 100644
--- a/qupulse/_program/_loop.py
+++ b/qupulse/_program/_loop.py
@@ -493,17 +493,27 @@ def to_waveform(program: Loop) -> Waveform:
class _CompatibilityLevel(Enum):
compatible = 0
action_required = 1
- incompatible = 2
+ incompatible_too_short = 2
+ incompatible_fraction = 3
+ incompatible_quantum = 4
def _is_compatible(program: Loop, min_len: int, quantum: int, sample_rate: TimeType) -> _CompatibilityLevel:
+ """ check whether program loop is compatible with awg requirements
+ possible reasons for incompatibility:
+ program shorter than minimum length
+ program duration not an integer
+ program duration not a multiple of quantum """
program_duration_in_samples = program.duration * sample_rate
if program_duration_in_samples.denominator != 1:
- return _CompatibilityLevel.incompatible
+ return _CompatibilityLevel.incompatible_fraction
- if program_duration_in_samples < min_len or program_duration_in_samples % quantum > 0:
- return _CompatibilityLevel.incompatible
+ if program_duration_in_samples < min_len:
+ return _CompatibilityLevel.incompatible_too_short
+
+ if program_duration_in_samples % quantum > 0:
+ return _CompatibilityLevel.incompatible_quantum
if program.is_leaf():
waveform_duration_in_samples = program.body_duration * sample_rate
@@ -520,18 +530,19 @@ def _is_compatible(program: Loop, min_len: int, quantum: int, sample_rate: TimeT
def _make_compatible(program: Loop, min_len: int, quantum: int, sample_rate: TimeType) -> None:
-
if program.is_leaf():
program.waveform = to_waveform(program.copy_tree_structure())
program.repetition_count = 1
-
else:
- comp_levels = np.array([_is_compatible(cast(Loop, sub_program), min_len, quantum, sample_rate)
- for sub_program in program])
- incompatible = comp_levels == _CompatibilityLevel.incompatible
- if np.any(incompatible):
+ comp_levels = [_is_compatible(cast(Loop, sub_program), min_len, quantum, sample_rate)
+ for sub_program in program]
+ incompatible = any(comp_level in (_CompatibilityLevel.incompatible_fraction,
+ _CompatibilityLevel.incompatible_quantum,
+ _CompatibilityLevel.incompatible_too_short)
+ for comp_level in comp_levels)
+ if incompatible:
single_run = program.duration * sample_rate / program.repetition_count
- if is_integer(single_run / quantum) and single_run >= min_len:
+ if (single_run / quantum).denominator == 1 and single_run >= min_len:
new_repetition_count = program.repetition_count
program.repetition_count = 1
else:
@@ -547,12 +558,19 @@ def _make_compatible(program: Loop, min_len: int, quantum: int, sample_rate: Tim
def make_compatible(program: Loop, minimal_waveform_length: int, waveform_quantum: int, sample_rate: TimeType):
+ """ check program for compatibility to AWG requirements, make it compatible if necessary and possible"""
+
comp_level = _is_compatible(program,
min_len=minimal_waveform_length,
quantum=waveform_quantum,
sample_rate=sample_rate)
- if comp_level == _CompatibilityLevel.incompatible:
- raise ValueError('The program cannot be made compatible to restrictions')
+ if comp_level == _CompatibilityLevel.incompatible_fraction:
+ raise ValueError('The program duration in samples {} is not an integer'.format(program.duration * sample_rate))
+ if comp_level == _CompatibilityLevel.incompatible_too_short:
+ raise ValueError('The program is too short to be a valid waveform. \n program duration in samples: {} \n minimal length: {}'.format(program.duration * sample_rate, minimal_waveform_length))
+ if comp_level == _CompatibilityLevel.incompatible_quantum:
+ raise ValueError('The program duration in samples {} is not a multiple of quantum {}'.format(program.duration * sample_rate, waveform_quantum))
+
elif comp_level == _CompatibilityLevel.action_required:
_make_compatible(program,
min_len=minimal_waveform_length,
diff --git a/qupulse/hardware/awgs/tektronix.py b/qupulse/hardware/awgs/tektronix.py
index c99b6cdd..77a56d33 100644
--- a/qupulse/hardware/awgs/tektronix.py
+++ b/qupulse/hardware/awgs/tektronix.py
@@ -214,7 +214,7 @@ class TektronixProgram:
offsets=self._offsets)
def get_sequencing_elements(self) -> Sequence[tek_awg.SequenceEntry]:
- """The entries are either of type TekAwh.Waveform or integers which signal an idle waveform of this length"""
+ """The entries are either of type TekAwg.Waveform or integers which signal an idle waveform of this length"""
return self._sequencing_elements
def get_waveforms(self) -> Sequence[Union[tek_awg.Waveform, int]]:
| Improve make_compatible's handling of too short programs
Currently `qupulse._program._loop.make_compatible` throws an error if the program is too short to be a valid waveform. `ValueError: The program cannot be made compatible to restrictions`
1. From the error message this is not obvious to the end user. This needs to be improved.
2. We can easily add optional automatic padding functionality
| qutech/qupulse | diff --git a/tests/_program/loop_tests.py b/tests/_program/loop_tests.py
index 7662e1d8..d016bb2b 100644
--- a/tests/_program/loop_tests.py
+++ b/tests/_program/loop_tests.py
@@ -542,13 +542,14 @@ class ProgramWaveformCompatibilityTest(unittest.TestCase):
wf = DummyWaveform(duration=1.1)
self.assertEqual(_is_compatible(Loop(waveform=wf), min_len=1, quantum=1, sample_rate=time_from_float(1.)),
- _CompatibilityLevel.incompatible)
+ _CompatibilityLevel.incompatible_fraction)
self.assertEqual(_is_compatible(Loop(waveform=wf, repetition_count=10), min_len=20, quantum=1, sample_rate=time_from_float(1.)),
- _CompatibilityLevel.incompatible)
+ _CompatibilityLevel.incompatible_too_short)
self.assertEqual(_is_compatible(Loop(waveform=wf, repetition_count=10), min_len=10, quantum=3, sample_rate=time_from_float(1.)),
- _CompatibilityLevel.incompatible)
+ _CompatibilityLevel.incompatible_quantum)
+
def test_is_compatible_leaf(self):
self.assertEqual(_is_compatible(Loop(waveform=DummyWaveform(duration=1.1), repetition_count=10),
@@ -569,6 +570,29 @@ class ProgramWaveformCompatibilityTest(unittest.TestCase):
self.assertEqual(_is_compatible(program, min_len=1, quantum=1, sample_rate=time_from_float(1.)),
_CompatibilityLevel.action_required)
+ def test_make_compatible_repetition_count(self):
+ wf1 = DummyWaveform(duration=1.5)
+ wf2 = DummyWaveform(duration=2.0)
+
+ program = Loop(children=[Loop(waveform=wf1, repetition_count=2),
+ Loop(waveform=wf2)])
+ duration = program.duration
+ _make_compatible(program, min_len=1, quantum=1, sample_rate=time_from_float(1.))
+ self.assertEqual(program.duration, duration)
+
+ wf2 = DummyWaveform(duration=2.5)
+ program = Loop(children=[Loop(waveform=wf1, repetition_count=3),
+ Loop(waveform=wf2)])
+ duration = program.duration
+ make_compatible(program, minimal_waveform_length=1, waveform_quantum=1, sample_rate=time_from_float(1.))
+ self.assertEqual(program.duration, duration)
+
+ program = Loop(children=[Loop(waveform=wf1, repetition_count=3),
+ Loop(waveform=wf2)], repetition_count=3)
+ duration = program.duration
+ _make_compatible(program, min_len=1, quantum=3, sample_rate=time_from_float(1.))
+ self.assertEqual(program.duration, duration)
+
def test_make_compatible_partial_unroll(self):
wf1 = DummyWaveform(duration=1.5)
wf2 = DummyWaveform(duration=2.0)
@@ -630,8 +654,20 @@ class ProgramWaveformCompatibilityTest(unittest.TestCase):
priv_kwargs = dict(min_len=5, quantum=10, sample_rate=time_from_float(1.))
with mock.patch('qupulse._program._loop._is_compatible',
- return_value=_CompatibilityLevel.incompatible) as mocked:
- with self.assertRaisesRegex(ValueError, 'cannot be made compatible'):
+ return_value=_CompatibilityLevel.incompatible_too_short) as mocked:
+ with self.assertRaisesRegex(ValueError, 'too short'):
+ make_compatible(program, **pub_kwargs)
+ mocked.assert_called_once_with(program, **priv_kwargs)
+
+ with mock.patch('qupulse._program._loop._is_compatible',
+ return_value=_CompatibilityLevel.incompatible_fraction) as mocked:
+ with self.assertRaisesRegex(ValueError, 'not an integer'):
+ make_compatible(program, **pub_kwargs)
+ mocked.assert_called_once_with(program, **priv_kwargs)
+
+ with mock.patch('qupulse._program._loop._is_compatible',
+ return_value=_CompatibilityLevel.incompatible_quantum) as mocked:
+ with self.assertRaisesRegex(ValueError, 'not a multiple of quantum'):
make_compatible(program, **pub_kwargs)
mocked.assert_called_once_with(program, **priv_kwargs)
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 2
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip3 install .[plotting,zurich-instruments,tektronix,Faster-fractions]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest pytest-cov pytest-xdist pytest-mock pytest-asyncio"
],
"pre_install": [
"apt-get update",
"apt-get install -y libgmp-dev libmpfr-dev libmpc-dev"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
cached-property==1.5.2
certifi==2021.5.30
coverage==6.2
cycler==0.11.0
dataclasses==0.8
execnet==1.9.0
gmpy2==2.1.5
importlib-metadata==4.8.3
iniconfig==1.1.1
kiwisolver==1.3.1
matplotlib==3.3.4
mpmath==1.3.0
numpy==1.19.5
packaging==21.3
Pillow==8.4.0
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
pytest-asyncio==0.16.0
pytest-cov==4.0.0
pytest-mock==3.6.1
pytest-xdist==3.0.2
python-dateutil==2.9.0.post0
PyVISA==1.11.3
qupulse @ file:///qupulse
six==1.17.0
sympy==1.9
tek-awg==0.2.1
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: qupulse
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- cached-property==1.5.2
- coverage==6.2
- cycler==0.11.0
- dataclasses==0.8
- execnet==1.9.0
- gmpy2==2.1.5
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- kiwisolver==1.3.1
- matplotlib==3.3.4
- mpmath==1.3.0
- numpy==1.19.5
- packaging==21.3
- pillow==8.4.0
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-asyncio==0.16.0
- pytest-cov==4.0.0
- pytest-mock==3.6.1
- pytest-xdist==3.0.2
- python-dateutil==2.9.0.post0
- pyvisa==1.11.3
- qupulse==0.4
- six==1.17.0
- sympy==1.9
- tek-awg==0.2.1
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/qupulse
| [
"tests/_program/loop_tests.py::ProgramWaveformCompatibilityTest::test_is_compatible_incompatible",
"tests/_program/loop_tests.py::ProgramWaveformCompatibilityTest::test_make_compatible"
] | [] | [
"tests/_program/loop_tests.py::LoopTests::test_cleanup",
"tests/_program/loop_tests.py::LoopTests::test_cleanup_warnings",
"tests/_program/loop_tests.py::LoopTests::test_compare_key",
"tests/_program/loop_tests.py::LoopTests::test_depth",
"tests/_program/loop_tests.py::LoopTests::test_flatten_and_balance",
"tests/_program/loop_tests.py::LoopTests::test_flatten_and_balance_comparison_based",
"tests/_program/loop_tests.py::LoopTests::test_is_balanced",
"tests/_program/loop_tests.py::LoopTests::test_is_leaf",
"tests/_program/loop_tests.py::LoopTests::test_remove_empty_loops",
"tests/_program/loop_tests.py::LoopTests::test_repr",
"tests/_program/loop_tests.py::LoopTests::test_unroll",
"tests/_program/loop_tests.py::MultiChannelTests::test_empty_repj",
"tests/_program/loop_tests.py::MultiChannelTests::test_init",
"tests/_program/loop_tests.py::MultiChannelTests::test_init_from_loop",
"tests/_program/loop_tests.py::MultiChannelTests::test_via_repr",
"tests/_program/loop_tests.py::ProgramWaveformCompatibilityTest::test_is_compatible_leaf",
"tests/_program/loop_tests.py::ProgramWaveformCompatibilityTest::test_is_compatible_node",
"tests/_program/loop_tests.py::ProgramWaveformCompatibilityTest::test_make_compatible_complete_unroll",
"tests/_program/loop_tests.py::ProgramWaveformCompatibilityTest::test_make_compatible_partial_unroll",
"tests/_program/loop_tests.py::ProgramWaveformCompatibilityTest::test_make_compatible_repetition_count"
] | [] | null | 5,612 | 1,209 | [
"qupulse/_program/_loop.py",
"qupulse/hardware/awgs/tektronix.py"
] |
|
pre-commit__pre-commit-1179 | 4bd6529c0521955265bacdbb4d6ef7c2ceec8eba | 2019-10-19 19:37:36 | 4bd6529c0521955265bacdbb4d6ef7c2ceec8eba | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
index dd30c7e..0b1f7b7 100644
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -34,6 +34,12 @@ def filter_by_include_exclude(names, include, exclude):
class Classifier(object):
def __init__(self, filenames):
+ # on windows we normalize all filenames to use forward slashes
+ # this makes it easier to filter using the `files:` regex
+ # this also makes improperly quoted shell-based hooks work better
+ # see #1173
+ if os.altsep == '/' and os.sep == '\\':
+ filenames = (f.replace(os.sep, os.altsep) for f in filenames)
self.filenames = [f for f in filenames if os.path.lexists(f)]
self._types_cache = {}
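Extracted from the diff above, the normalization is easy to exercise on any platform if the separators are passed in explicitly (a test-oriented sketch; pre-commit itself reads them from `os.sep`/`os.altsep`):

```python
def normalize_filenames(filenames, sep, altsep):
    # On Windows (sep='\\', altsep='/'), rewrite every path to forward
    # slashes so `files:`/`exclude:` regexes and shell-quoted hooks see
    # one canonical form. On POSIX, leave backslashes alone -- they are
    # legal filename characters there.
    if altsep == '/' and sep == '\\':
        return [f.replace(sep, altsep) for f in filenames]
    return list(filenames)
```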
| Run hooks on files in specific dir, rather than using `--all-files`
I am able to successfully use `pre-commit run --all-files` to run hooks on all my files.
Now, I'm trying to run the hooks just on files in a particular directory. I think I'm not understanding the docs and I can't find an example to work from.
Here's what the docs say:
> `--files [FILES [FILES ...]]`: specific filenames to run hooks on.
I've tried the following variations:
`pre-commit run --files web/modules/custom`
`pre-commit run --files web/modules/custom/*`
`pre-commit run --files [web/modules/custom]`
`pre-commit run --files [web/modules/custom/*]`
`pre-commit run --files [FILES [web/modules/custom]`
`pre-commit run --files [FILES [web/modules/custom/*]`
I feel really dumb having to ask, but can someone please point me in the right direction? | pre-commit/pre-commit | diff --git a/tests/commands/run_test.py b/tests/commands/run_test.py
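As the quoted docs say, `--files` takes individual *filenames*, so a directory has to be expanded before it reaches pre-commit — typically with a shell glob or `git ls-files -- <dir>`. A minimal sketch of that expansion (the `web/modules/custom` path is the asker's; the resulting list mirrors `pre-commit run --files ...`):

```python
def files_run_command(directory, tracked_files):
    # Keep only files under the requested directory (what
    # `git ls-files -- <directory>` would return) and splice them into
    # the argument list that `--files` expects.
    prefix = directory.rstrip('/') + '/'
    selected = [f for f in tracked_files if f.startswith(prefix)]
    return ['pre-commit', 'run', '--files', *selected]
```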
index f6d5c93..4221134 100644
--- a/tests/commands/run_test.py
+++ b/tests/commands/run_test.py
@@ -7,6 +7,7 @@ import pipes
import subprocess
import sys
+import mock
import pytest
import pre_commit.constants as C
@@ -782,7 +783,7 @@ def test_files_running_subdir(repo_with_passing_hook, tempdir_factory):
'--files', 'foo.py',
tempdir_factory=tempdir_factory,
)
- assert 'subdir/foo.py'.replace('/', os.sep) in stdout
+ assert 'subdir/foo.py' in stdout
@pytest.mark.parametrize(
@@ -826,6 +827,23 @@ def test_classifier_removes_dne():
assert classifier.filenames == []
+def test_classifier_normalizes_filenames_on_windows_to_forward_slashes(tmpdir):
+ with tmpdir.as_cwd():
+ tmpdir.join('a/b/c').ensure()
+ with mock.patch.object(os, 'altsep', '/'):
+ with mock.patch.object(os, 'sep', '\\'):
+ classifier = Classifier((r'a\b\c',))
+ assert classifier.filenames == ['a/b/c']
+
+
+def test_classifier_does_not_normalize_backslashes_non_windows(tmpdir):
+ with mock.patch.object(os.path, 'lexists', return_value=True):
+ with mock.patch.object(os, 'altsep', None):
+ with mock.patch.object(os, 'sep', '/'):
+ classifier = Classifier((r'a/b\c',))
+ assert classifier.filenames == [r'a/b\c']
+
+
@pytest.fixture
def some_filenames():
return (
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 1
} | 1.18 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": null,
"pre_install": null,
"python": "3.9",
"reqs_path": [
"requirements-dev.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aspy.yaml==1.3.0
cfgv==3.4.0
coverage==7.8.0
distlib==0.3.9
exceptiongroup==1.2.2
filelock==3.18.0
identify==2.6.9
iniconfig==2.1.0
mock==5.2.0
nodeenv==1.9.1
packaging==24.2
platformdirs==4.3.7
pluggy==1.5.0
-e git+https://github.com/pre-commit/pre-commit.git@4bd6529c0521955265bacdbb4d6ef7c2ceec8eba#egg=pre_commit
pytest==8.3.5
pytest-env==1.1.5
PyYAML==6.0.2
six==1.17.0
toml==0.10.2
tomli==2.2.1
virtualenv==20.29.3
| name: pre-commit
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- aspy-yaml==1.3.0
- cfgv==3.4.0
- coverage==7.8.0
- distlib==0.3.9
- exceptiongroup==1.2.2
- filelock==3.18.0
- identify==2.6.9
- iniconfig==2.1.0
- mock==5.2.0
- nodeenv==1.9.1
- packaging==24.2
- platformdirs==4.3.7
- pluggy==1.5.0
- pytest==8.3.5
- pytest-env==1.1.5
- pyyaml==6.0.2
- six==1.17.0
- toml==0.10.2
- tomli==2.2.1
- virtualenv==20.29.3
prefix: /opt/conda/envs/pre-commit
| [
"tests/commands/run_test.py::test_classifier_normalizes_filenames_on_windows_to_forward_slashes"
] | [] | [
"tests/commands/run_test.py::test_run[options3-outputs3-1-True]",
"tests/commands/run_test.py::test_run[options4-outputs4-1-True]",
"tests/commands/run_test.py::test_run[options7-outputs7-0-False]",
"tests/commands/run_test.py::test_origin_source_error_msg_error[master-]",
"tests/commands/run_test.py::test_origin_source_error_msg_error[-master]",
"tests/commands/run_test.py::test_origin_source_both_ok",
"tests/commands/run_test.py::test_has_unmerged_paths",
"tests/commands/run_test.py::test_merge_conflict",
"tests/commands/run_test.py::test_merge_conflict_modified",
"tests/commands/run_test.py::test_compute_cols[hooks0-True-80]",
"tests/commands/run_test.py::test_compute_cols[hooks1-False-81]",
"tests/commands/run_test.py::test_compute_cols[hooks2-True-85]",
"tests/commands/run_test.py::test_compute_cols[hooks3-False-82]",
"tests/commands/run_test.py::test_get_skips[environ0-expected_output0]",
"tests/commands/run_test.py::test_get_skips[environ1-expected_output1]",
"tests/commands/run_test.py::test_get_skips[environ2-expected_output2]",
"tests/commands/run_test.py::test_get_skips[environ3-expected_output3]",
"tests/commands/run_test.py::test_get_skips[environ4-expected_output4]",
"tests/commands/run_test.py::test_get_skips[environ5-expected_output5]",
"tests/commands/run_test.py::test_get_skips[environ6-expected_output6]",
"tests/commands/run_test.py::test_skip_hook",
"tests/commands/run_test.py::test_skip_aliased_hook",
"tests/commands/run_test.py::test_hook_id_not_in_non_verbose_output",
"tests/commands/run_test.py::test_hook_id_in_verbose_output",
"tests/commands/run_test.py::test_non_ascii_hook_id",
"tests/commands/run_test.py::test_stdout_write_bug_py26",
"tests/commands/run_test.py::test_lots_of_files",
"tests/commands/run_test.py::test_stages",
"tests/commands/run_test.py::test_pcre_deprecation_warning",
"tests/commands/run_test.py::test_meta_hook_passes",
"tests/commands/run_test.py::test_error_with_unstaged_config",
"tests/commands/run_test.py::test_files_running_subdir",
"tests/commands/run_test.py::test_classifier_removes_dne",
"tests/commands/run_test.py::test_classifier_does_not_normalize_backslashes_non_windows",
"tests/commands/run_test.py::test_include_exclude_base_case",
"tests/commands/run_test.py::test_matches_broken_symlink",
"tests/commands/run_test.py::test_include_exclude_total_match",
"tests/commands/run_test.py::test_include_exclude_does_search_instead_of_match",
"tests/commands/run_test.py::test_include_exclude_exclude_removes_files",
"tests/commands/run_test.py::test_args_hook_only"
] | [] | MIT License | 5,619 | 210 | [
"pre_commit/commands/run.py"
] |
|
locustio__locust-1113 | 40f64e0594d04b883bd68fefa294a4f849d6879c | 2019-10-20 17:52:21 | ba265c5a27676546eb2c9ba8d2cbc975f54c31b7 | diff --git a/locust/runners.py b/locust/runners.py
index 268f7033..97df507f 100644
--- a/locust/runners.py
+++ b/locust/runners.py
@@ -68,6 +68,7 @@ class LocustRunner(object):
"""
bucket = []
weight_sum = sum((locust.weight for locust in self.locust_classes if locust.task_set))
+ residuals = {}
for locust in self.locust_classes:
if not locust.task_set:
warnings.warn("Notice: Found Locust class (%s) got no task_set. Skipping..." % locust.__name__)
@@ -82,6 +83,21 @@ class LocustRunner(object):
percent = locust.weight / float(weight_sum)
num_locusts = int(round(amount * percent))
bucket.extend([locust for x in xrange(0, num_locusts)])
+ # used to keep track of the amount of rounding was done if we need
+ # to add/remove some instances from bucket
+ residuals[locust] = amount * percent - round(amount * percent)
+ if len(bucket) < amount:
+ # We got too few locust classes in the bucket, so we need to create a few extra locusts,
+ # and we do this by iterating over each of the Locust classes - starting with the one
+ # where the residual from the rounding was the largest - and creating one of each until
+ # we get the correct amount
+ for locust in [l for l, r in sorted(residuals.items(), key=lambda x:x[1], reverse=True)][:amount-len(bucket)]:
+ bucket.append(locust)
+ elif len(bucket) > amount:
+ # We've got too many locusts due to rounding errors so we need to remove some
+ for locust in [l for l, r in sorted(residuals.items(), key=lambda x:x[1])][:len(bucket)-amount]:
+ bucket.remove(locust)
+
return bucket
def spawn_locusts(self, spawn_count=None, stop_timeout=None, wait=False):
| Tests fail to run when providing a small number of clients
Hello,
My setup in locustfile.py currently looks something like this:
```python
class VolumeTests(object):
host = 'http://example.com'
min_wait = 5000
max_wait = 15000
class ExampleTest(VolumeTests, HttpLocust):
task_set = MyTaskSet
```
I have 7 of those ExampleTests at the moment, all different types of tests.
In the command line, when I run `locust -c 3 --no-web`, the testing quits and I get the following message:
"Hatching and swarming 0 clients at the rate 1 clients/s..."
However, when I run `locust -c 4 --no-web`, changing the number of clients from 3 to 4, the tests run successfully.
After some experimenting, it seems like the number of clients you provide must be at least half of the number of tests (i.e. ExampleTest).
Is there something in the locust documentation that I am missing? Or is there something that I am doing wrong? Thanks. | locustio/locust | diff --git a/locust/test/test_runners.py b/locust/test/test_runners.py
index 8a51c75c..14ad50f9 100644
--- a/locust/test/test_runners.py
+++ b/locust/test/test_runners.py
@@ -9,10 +9,12 @@ from locust import events
from locust.core import Locust, TaskSet, task
from locust.exception import LocustError
from locust.rpc import Message
-from locust.runners import LocalLocustRunner, MasterLocustRunner, SlaveNode, STATE_INIT, STATE_HATCHING, STATE_RUNNING, STATE_MISSING
+from locust.runners import LocustRunner, LocalLocustRunner, MasterLocustRunner, SlaveNode, \
+ STATE_INIT, STATE_HATCHING, STATE_RUNNING, STATE_MISSING
from locust.stats import global_stats, RequestStats
from locust.test.testcases import LocustTestCase
+
def mocked_rpc_server():
class MockedRpcServer(object):
queue = Queue()
@@ -43,6 +45,7 @@ def mocked_rpc_server():
return MockedRpcServer
+
class mocked_options(object):
def __init__(self):
self.hatch_rate = 5
@@ -58,6 +61,57 @@ class mocked_options(object):
def reset_stats(self):
pass
+
+class TestLocustRunner(LocustTestCase):
+ def assert_locust_class_distribution(self, expected_distribution, classes):
+ # Construct a {LocustClass => count} dict from a list of locust classes
+ distribution = {}
+ for locust_class in classes:
+ if not locust_class in distribution:
+ distribution[locust_class] = 0
+ distribution[locust_class] += 1
+ expected_str = str({k.__name__:v for k,v in expected_distribution.items()})
+ actual_str = str({k.__name__:v for k,v in distribution.items()})
+ self.assertEqual(
+ expected_distribution,
+ distribution,
+ "Expected a locust class distribution of %s but found %s" % (
+ expected_str,
+ actual_str,
+ ),
+ )
+
+ def test_weight_locusts(self):
+ maxDiff = 2048
+ class BaseLocust(Locust):
+ class task_set(TaskSet): pass
+ class L1(BaseLocust):
+ weight = 101
+ class L2(BaseLocust):
+ weight = 99
+ class L3(BaseLocust):
+ weight = 100
+
+ runner = LocustRunner([L1, L2, L3], mocked_options())
+ self.assert_locust_class_distribution({L1:10, L2:9, L3:10}, runner.weight_locusts(29))
+ self.assert_locust_class_distribution({L1:10, L2:10, L3:10}, runner.weight_locusts(30))
+ self.assert_locust_class_distribution({L1:11, L2:10, L3:10}, runner.weight_locusts(31))
+
+ def test_weight_locusts_fewer_amount_than_locust_classes(self):
+ class BaseLocust(Locust):
+ class task_set(TaskSet): pass
+ class L1(BaseLocust):
+ weight = 101
+ class L2(BaseLocust):
+ weight = 99
+ class L3(BaseLocust):
+ weight = 100
+
+ runner = LocustRunner([L1, L2, L3], mocked_options())
+ self.assertEqual(1, len(runner.weight_locusts(1)))
+ self.assert_locust_class_distribution({L1:1}, runner.weight_locusts(1))
+
+
class TestMasterRunner(LocustTestCase):
def setUp(self):
global_stats.reset_all()
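The new `TestLocustRunner` tests above assert specific per-class counts for `weight_locusts(29/30/31)` with weights 101/99/100. Those exact distributions can be reproduced by largest-remainder apportionment; the sketch below is illustrative (the function name and approach are assumptions, not locust's actual `weight_locusts` implementation):

```python
def weight_allocate(weights, amount):
    """Split `amount` units across keys proportionally to their weights,
    using largest-remainder apportionment: floor every share, then hand
    the leftover units to the keys with the biggest fractional parts."""
    total = sum(weights.values())
    shares = {k: amount * w / total for k, w in weights.items()}
    counts = {k: int(s) for k, s in shares.items()}
    leftover = amount - sum(counts.values())
    by_fraction = sorted(shares, key=lambda k: shares[k] - counts[k], reverse=True)
    for k in by_fraction[:leftover]:
        counts[k] += 1
    return counts

print(weight_allocate({'L1': 101, 'L2': 99, 'L3': 100}, 29))
# → {'L1': 10, 'L2': 9, 'L3': 10}
```

With 30 units the shares divide evenly (10 each), and with 31 the largest weight (L1) absorbs the extra unit — matching the three assertions in `test_weight_locusts`.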
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 1
} | 0.12 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"codecov",
"flake8",
"mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
click==8.0.4
codecov==2.1.13
coverage==6.2
dataclasses==0.8
flake8==5.0.4
Flask==2.0.3
gevent==22.10.2
geventhttpclient-wheels==1.3.1.dev3
greenlet==2.0.2
idna==3.10
importlib-metadata==4.2.0
iniconfig==1.1.1
itsdangerous==2.0.1
Jinja2==3.0.3
-e git+https://github.com/locustio/locust.git@40f64e0594d04b883bd68fefa294a4f849d6879c#egg=locustio
MarkupSafe==2.0.1
mccabe==0.7.0
mock==5.2.0
msgpack-python==0.5.6
packaging==21.3
pluggy==1.0.0
py==1.11.0
pycodestyle==2.9.1
pyflakes==2.5.0
pyparsing==3.1.4
pytest==7.0.1
pyzmq==25.1.2
requests==2.27.1
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
Werkzeug==2.0.3
zipp==3.6.0
zope.event==4.6
zope.interface==5.5.2
| name: locust
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- click==8.0.4
- codecov==2.1.13
- coverage==6.2
- dataclasses==0.8
- flake8==5.0.4
- flask==2.0.3
- gevent==22.10.2
- geventhttpclient-wheels==1.3.1.dev3
- greenlet==2.0.2
- idna==3.10
- importlib-metadata==4.2.0
- iniconfig==1.1.1
- itsdangerous==2.0.1
- jinja2==3.0.3
- markupsafe==2.0.1
- mccabe==0.7.0
- mock==5.2.0
- msgpack-python==0.5.6
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pyparsing==3.1.4
- pytest==7.0.1
- pyzmq==25.1.2
- requests==2.27.1
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- werkzeug==2.0.3
- zipp==3.6.0
- zope-event==4.6
- zope-interface==5.5.2
prefix: /opt/conda/envs/locust
| [
"locust/test/test_runners.py::TestLocustRunner::test_weight_locusts",
"locust/test/test_runners.py::TestLocustRunner::test_weight_locusts_fewer_amount_than_locust_classes"
] | [] | [
"locust/test/test_runners.py::TestMasterRunner::test_exception_in_task",
"locust/test/test_runners.py::TestMasterRunner::test_exception_is_catched",
"locust/test/test_runners.py::TestMasterRunner::test_master_current_response_times",
"locust/test/test_runners.py::TestMasterRunner::test_master_marks_downed_slaves_as_missing",
"locust/test/test_runners.py::TestMasterRunner::test_master_total_stats",
"locust/test/test_runners.py::TestMasterRunner::test_sends_hatch_data_to_ready_running_hatching_slaves",
"locust/test/test_runners.py::TestMasterRunner::test_slave_connect",
"locust/test/test_runners.py::TestMasterRunner::test_slave_stats_report_median",
"locust/test/test_runners.py::TestMasterRunner::test_spawn_fewer_locusts_than_slaves",
"locust/test/test_runners.py::TestMasterRunner::test_spawn_uneven_locusts",
"locust/test/test_runners.py::TestMasterRunner::test_spawn_zero_locusts",
"locust/test/test_runners.py::TestMessageSerializing::test_message_serialize"
] | [] | MIT License | 5,624 | 485 | [
"locust/runners.py"
] |
|
asottile__pyupgrade-217 | 0599e7b22e0757f8c655b8d93e3585a1a884b160 | 2019-10-20 18:06:27 | 438c83b7e11ff7f3894c25f6c2bee4cb3275a5c7 | diff --git a/pyupgrade.py b/pyupgrade.py
index b451f58..161eeb9 100644
--- a/pyupgrade.py
+++ b/pyupgrade.py
@@ -1390,6 +1390,7 @@ class FindPy3Plus(ast.NodeVisitor):
isinstance(node.func, ast.Name) and
node.func.id == 'next' and
not _starargs(node) and
+ len(node.args) == 1 and
isinstance(node.args[0], ast.Call) and
self._is_six(
node.args[0].func,
| IndexError: list index out of range
Tested with pyupgrade-1.25.0.
Running the following command on the [more-itertools](https://github.com/erikrose/more-itertools) repository results in the following crash. Only occurs with the `--py3-plus` flag.
```
$ pyupgrade --py3-plus more_itertools/recipes.py
Traceback (most recent call last):
File "/home/jdufresne/.venv/more-itertools/bin/pyupgrade", line 10, in <module>
sys.exit(main())
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 2237, in main
ret |= _fix_file(filename, args)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 2199, in _fix_file
contents_text = _fix_py3_plus(contents_text)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1801, in _fix_py3_plus
visitor.visit(ast_obj)
File "/usr/lib64/python3.7/ast.py", line 262, in visit
return visitor(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1536, in generic_visit
super(FindPy3Plus, self).generic_visit(node)
File "/usr/lib64/python3.7/ast.py", line 270, in generic_visit
self.visit(item)
File "/usr/lib64/python3.7/ast.py", line 262, in visit
return visitor(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1312, in _visit_sync_func
self._visit_func(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1307, in _visit_func
self.generic_visit(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1536, in generic_visit
super(FindPy3Plus, self).generic_visit(node)
File "/usr/lib64/python3.7/ast.py", line 270, in generic_visit
self.visit(item)
File "/usr/lib64/python3.7/ast.py", line 262, in visit
return visitor(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1536, in generic_visit
super(FindPy3Plus, self).generic_visit(node)
File "/usr/lib64/python3.7/ast.py", line 270, in generic_visit
self.visit(item)
File "/usr/lib64/python3.7/ast.py", line 262, in visit
return visitor(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1353, in visit_Try
self.generic_visit(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1536, in generic_visit
super(FindPy3Plus, self).generic_visit(node)
File "/usr/lib64/python3.7/ast.py", line 270, in generic_visit
self.visit(item)
File "/usr/lib64/python3.7/ast.py", line 262, in visit
return visitor(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1532, in visit_For
self.generic_visit(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1536, in generic_visit
super(FindPy3Plus, self).generic_visit(node)
File "/usr/lib64/python3.7/ast.py", line 270, in generic_visit
self.visit(item)
File "/usr/lib64/python3.7/ast.py", line 262, in visit
return visitor(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1536, in generic_visit
super(FindPy3Plus, self).generic_visit(node)
File "/usr/lib64/python3.7/ast.py", line 272, in generic_visit
self.visit(value)
File "/usr/lib64/python3.7/ast.py", line 262, in visit
return visitor(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1536, in generic_visit
super(FindPy3Plus, self).generic_visit(node)
File "/usr/lib64/python3.7/ast.py", line 272, in generic_visit
self.visit(value)
File "/usr/lib64/python3.7/ast.py", line 262, in visit
return visitor(node)
File "/home/jdufresne/.venv/more-itertools/lib64/python3.7/site-packages/pyupgrade.py", line 1393, in visit_Call
isinstance(node.args[0], ast.Call) and
IndexError: list index out of range
``` | asottile/pyupgrade | diff --git a/tests/six_test.py b/tests/six_test.py
index ae53419..3afe83b 100644
--- a/tests/six_test.py
+++ b/tests/six_test.py
@@ -40,6 +40,8 @@ from pyupgrade import _fix_py3_plus
'(\n'
' six\n'
').text_type(u)\n',
+ # next is shadowed
+ 'next()',
),
)
def test_fix_six_noop(s):
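The patch's one-line fix — checking `len(node.args) == 1` before touching `node.args[0]` — guards against a bare `next()` call, which is exactly what the new `'next()'` noop test exercises. A minimal standalone reproduction of the guard (helper name is illustrative, not pyupgrade's internals):

```python
import ast

def six_next_calls(source):
    """Find `next(<some call>)`-style calls. Without the len(node.args)
    check, a bare `next()` (zero args) makes node.args[0] raise
    IndexError — the crash reported against more_itertools/recipes.py."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == 'next'
            and len(node.args) == 1          # the fix: check before indexing
            and isinstance(node.args[0], ast.Call)
        ):
            hits.append(node.lineno)
    return hits

print(six_next_calls("next()\nnext(six.iteritems(d))\n"))  # → [2]
```

The `next()` on line 1 is skipped safely because the length check short-circuits the `node.args[0]` access.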
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 1
} | 1.25 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
-e git+https://github.com/asottile/pyupgrade.git@0599e7b22e0757f8c655b8d93e3585a1a884b160#egg=pyupgrade
tokenize_rt==6.1.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: pyupgrade
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- tokenize-rt==6.1.0
prefix: /opt/conda/envs/pyupgrade
| [
"tests/six_test.py::test_fix_six_noop[next()]"
] | [] | [
"tests/six_test.py::test_fix_six_noop[x",
"tests/six_test.py::test_fix_six_noop[from",
"tests/six_test.py::test_fix_six_noop[@mydec\\nclass",
"tests/six_test.py::test_fix_six_noop[print(six.raise_from(exc,",
"tests/six_test.py::test_fix_six_noop[print(six.b(\"\\xa3\"))]",
"tests/six_test.py::test_fix_six_noop[print(six.b(",
"tests/six_test.py::test_fix_six_noop[class",
"tests/six_test.py::test_fix_six_noop[six.reraise(*err)]",
"tests/six_test.py::test_fix_six_noop[six.b(*a)]",
"tests/six_test.py::test_fix_six_noop[six.u(*a)]",
"tests/six_test.py::test_fix_six_noop[@six.add_metaclass(*a)\\nclass",
"tests/six_test.py::test_fix_six_noop[(\\n",
"tests/six_test.py::test_fix_six[isinstance(s,",
"tests/six_test.py::test_fix_six[weird",
"tests/six_test.py::test_fix_six[issubclass(tp,",
"tests/six_test.py::test_fix_six[STRING_TYPES",
"tests/six_test.py::test_fix_six[from",
"tests/six_test.py::test_fix_six[six.b(\"123\")-b\"123\"]",
"tests/six_test.py::test_fix_six[six.b(r\"123\")-br\"123\"]",
"tests/six_test.py::test_fix_six[six.b(\"\\\\x12\\\\xef\")-b\"\\\\x12\\\\xef\"]",
"tests/six_test.py::test_fix_six[six.byte2int(b\"f\")-b\"f\"[0]]",
"tests/six_test.py::test_fix_six[@six.python_2_unicode_compatible\\nclass",
"tests/six_test.py::test_fix_six[@six.python_2_unicode_compatible\\n@other_decorator\\nclass",
"tests/six_test.py::test_fix_six[six.get_unbound_method(meth)\\n-meth\\n]",
"tests/six_test.py::test_fix_six[six.indexbytes(bs,",
"tests/six_test.py::test_fix_six[six.assertCountEqual(\\n",
"tests/six_test.py::test_fix_six[six.raise_from(exc,",
"tests/six_test.py::test_fix_six[six.reraise(tp,",
"tests/six_test.py::test_fix_six[six.reraise(\\n",
"tests/six_test.py::test_fix_six[class",
"tests/six_test.py::test_fix_six[basic",
"tests/six_test.py::test_fix_six[add_metaclass,",
"tests/six_test.py::test_fix_six[six.itervalues]",
"tests/six_test.py::test_fix_six[six.itervalues",
"tests/six_test.py::test_fix_base_classes[import",
"tests/six_test.py::test_fix_base_classes[from",
"tests/six_test.py::test_fix_base_classes[class",
"tests/six_test.py::test_fix_base_classes_py3only[class",
"tests/six_test.py::test_fix_base_classes_py3only[from"
] | [] | MIT License | 5,625 | 139 | [
"pyupgrade.py"
] |
|
geopython__pygeoapi-282 | c8d0f1b5128d13306bf2db147faaea120a53ae16 | 2019-10-21 02:41:16 | c8d0f1b5128d13306bf2db147faaea120a53ae16 | diff --git a/pygeoapi/api.py b/pygeoapi/api.py
index 250e21e..1915472 100644
--- a/pygeoapi/api.py
+++ b/pygeoapi/api.py
@@ -997,7 +997,7 @@ def to_json(dict_):
:returns: JSON string representation
"""
- return json.dumps(dict_, default=json_serial)
+ return json.dumps(dict_)
def _render_j2_template(config, template, data):
diff --git a/pygeoapi/util.py b/pygeoapi/util.py
index fe37953..7d8b331 100644
--- a/pygeoapi/util.py
+++ b/pygeoapi/util.py
@@ -2,7 +2,7 @@
#
# Authors: Tom Kralidis <[email protected]>
#
-# Copyright (c) 2018 Tom Kralidis
+# Copyright (c) 2019 Tom Kralidis
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
@@ -32,27 +32,34 @@
from datetime import date, datetime, time
from decimal import Decimal
import logging
+import os
+import re
import yaml
LOGGER = logging.getLogger(__name__)
-def get_url(scheme, host, port, basepath):
+def get_typed_value(value):
"""
- Provides URL of instance
+ Derive true type from data value
- :returns: string of complete baseurl
- """
-
- url = '{}://{}'.format(scheme, host)
+ :param value: value
- if port not in [80, 443]:
- url = '{}:{}'.format(url, port)
+ :returns: value as a native Python data type
+ """
- url = '{}{}'.format(url, basepath)
+ try:
+ if '.' in value: # float?
+ value2 = float(value)
+ elif len(value) > 1 and value.startswith('0'):
+ value2 = value
+ else: # int?
+ value2 = int(value)
+ except ValueError: # string (default)?
+ value2 = value
- return url
+ return value2
def yaml_load(fh):
@@ -64,11 +71,23 @@ def yaml_load(fh):
:returns: `dict` representation of YAML
"""
- try:
- return yaml.load(fh, Loader=yaml.FullLoader)
- except AttributeError as err:
- LOGGER.warning('YAML loading error: {}'.format(err))
- return yaml.load(fh)
+ # support environment variables in config
+ # https://stackoverflow.com/a/55301129
+ path_matcher = re.compile(r'.*\$\{([^}^{]+)\}.*')
+
+ def path_constructor(loader, node):
+ env_var = path_matcher.match(node.value).group(1)
+ if env_var not in os.environ:
+ raise EnvironmentError('Undefined environment variable in config')
+ return get_typed_value(os.path.expandvars(node.value))
+
+ class EnvVarLoader(yaml.SafeLoader):
+ pass
+
+ EnvVarLoader.add_implicit_resolver('!path', path_matcher, None)
+ EnvVarLoader.add_constructor('!path', path_constructor)
+
+ return yaml.load(fh, Loader=EnvVarLoader)
def str2bool(value):
@@ -100,8 +119,7 @@ def json_serial(obj):
"""
if isinstance(obj, (datetime, date, time)):
- serial = obj.isoformat()
- return serial
+ return obj.isoformat()
elif isinstance(obj, Decimal):
return float(obj)
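The patch above wires `${VAR}` expansion into `yaml_load` through a custom `SafeLoader` subclass with an implicit resolver. A self-contained sketch of the same pattern (the `!env` tag name and the sample config are illustrative):

```python
import os
import re
import yaml

# Any plain scalar containing ${NAME} gets routed to our constructor.
_matcher = re.compile(r'.*\$\{([^}^{]+)\}.*')

def _env_constructor(loader, node):
    name = _matcher.match(node.value).group(1)
    if name not in os.environ:
        raise EnvironmentError('Undefined environment variable in config')
    return os.path.expandvars(node.value)

class EnvVarLoader(yaml.SafeLoader):
    pass

EnvVarLoader.add_implicit_resolver('!env', _matcher, None)
EnvVarLoader.add_constructor('!env', _env_constructor)

os.environ['PYGEOAPI_PORT'] = '5001'
cfg = yaml.load('bind:\n  port: ${PYGEOAPI_PORT}\n', Loader=EnvVarLoader)
print(cfg)  # → {'bind': {'port': '5001'}}
```

Note the expanded value comes back as a string here; the real patch additionally passes it through `get_typed_value` so `port` becomes the int `5001`, as the new `test_config_envvars` asserts.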
| implement env variables inside PYGEOAPI_CONFIG file
**Is your feature request related to a problem? Please describe.**
devop systems (docker, k8) use enviroment variable to set parameters like database connections of set passwords inside containers.
For example running pygeopai in openshift and we want to connect to a database that has a secret password. Currentely we would have to set the password on the yaml configuration file and this could be a security problem, while openshift can set the secret env inside the pod from a secure source
**Describe the solution you'd like**
Parse the PYGEOAPI_CONFIG yaml file with python module envyaml
(https://github.com/thesimj/envyaml). The current default module yaml should be replaced by envyaml as parser allowing for env variables to be injected into the yaml file
**Describe alternatives you've considered**
The dataprovider would get the env variables by it self, IMHO this is less elegant and we may need the described solutions for other configurations
**Additional context**
Add any other context or screenshots about the feature request here.

| geopython/pygeoapi | diff --git a/tests/pygeoapi-test-config-envvars.yml b/tests/pygeoapi-test-config-envvars.yml
new file mode 100644
index 0000000..717f94f
--- /dev/null
+++ b/tests/pygeoapi-test-config-envvars.yml
@@ -0,0 +1,120 @@
+# =================================================================
+#
+# Authors: Tom Kralidis <[email protected]>
+#
+# Copyright (c) 2019 Tom Kralidis
+#
+# Permission is hereby granted, free of charge, to any person
+# obtaining a copy of this software and associated documentation
+# files (the "Software"), to deal in the Software without
+# restriction, including without limitation the rights to use,
+# copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following
+# conditions:
+#
+# The above copyright notice and this permission notice shall be
+# included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+# OTHER DEALINGS IN THE SOFTWARE.
+#
+# =================================================================
+
+server:
+ bind:
+ host: 0.0.0.0
+ port: ${PYGEOAPI_PORT}
+ url: http://localhost:5000/
+ mimetype: application/json; charset=UTF-8
+ encoding: utf-8
+ language: en-US
+ cors: true
+ pretty_print: true
+ limit: 10
+ # templates: /path/to/templates
+ map:
+ url: https://maps.wikimedia.org/osm-intl/{z}/{x}/{y}.png
+ attribution: '<a href="https://wikimediafoundation.org/wiki/Maps_Terms_of_Use">Wikimedia maps</a> | Map data © <a href="https://openstreetmap.org/copyright">OpenStreetMap contributors</a>'
+
+logging:
+ level: ERROR
+ #logfile: /tmp/pygeoapi.log
+
+metadata:
+ identification:
+ title: pygeoapi default instance ${PYGEOAPI_TITLE}
+ description: pygeoapi provides an API to geospatial data
+ keywords:
+ - geospatial
+ - data
+ - api
+ keywords_type: theme
+ terms_of_service: None
+ url: http://example.org
+ license:
+ name: CC-BY 4.0 license
+ url: https://creativecommons.org/licenses/by/4.0/
+ provider:
+ name: Organization Name
+ url: https://pygeoapi.io
+ contact:
+ name: Lastname, Firstname
+ position: Position Title
+ address: Mailing Address
+ city: City
+ stateorprovince: Administrative Area
+ postalcode: Zip or Postal Code
+ country: Country
+ phone: +xx-xxx-xxx-xxxx
+ fax: +xx-xxx-xxx-xxxx
+ email: [email protected]
+ url: Contact URL
+ hours: Hours of Service
+ instructions: During hours of service. Off on weekends.
+ role: pointOfContact
+
+datasets:
+ obs:
+ title: Observations
+ description: My cool observations
+ keywords:
+ - observations
+ - monitoring
+ links:
+ - type: text/csv
+ rel: canonical
+ title: data
+ href: https://github.com/mapserver/mapserver/blob/branch-7-0/msautotest/wxs/data/obs.csv
+ hreflang: en-US
+ - type: text/csv
+ rel: alternate
+ title: data
+ href: https://raw.githubusercontent.com/mapserver/mapserver/branch-7-0/msautotest/wxs/data/obs.csv
+ hreflang: en-US
+ extents:
+ spatial:
+ bbox: [-180,-90,180,90]
+ crs: http://www.opengis.net/def/crs/OGC/1.3/CRS84
+ temporal:
+ begin: 2000-10-30T18:24:39Z
+ end: 2007-10-30T08:57:29Z
+ trs: http://www.opengis.net/def/uom/ISO-8601/0/Gregorian
+ provider:
+ name: CSV
+ data: tests/data/obs.csv
+ id_field: id
+ geometry:
+ x_field: long
+ y_field: lat
+
+processes:
+ hello-world:
+ processor:
+ name: HelloWorld
diff --git a/tests/test_config.py b/tests/test_config.py
new file mode 100644
index 0000000..7d2b455
--- /dev/null
+++ b/tests/test_config.py
@@ -0,0 +1,62 @@
+# =================================================================
+#
+# Authors: Tom Kralidis <[email protected]>
+#
+# Copyright (c) 2019 Tom Kralidis
+#
+# Permission is hereby granted, free of charge, to any person
+# obtaining a copy of this software and associated documentation
+# files (the "Software"), to deal in the Software without
+# restriction, including without limitation the rights to use,
+# copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following
+# conditions:
+#
+# The above copyright notice and this permission notice shall be
+# included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+# OTHER DEALINGS IN THE SOFTWARE.
+#
+# =================================================================
+
+import os
+
+import pytest
+
+from pygeoapi.util import yaml_load
+
+
+def get_test_file_path(filename):
+ """helper function to open test file safely"""
+
+ if os.path.isfile(filename):
+ return filename
+ else:
+ return 'tests/{}'.format(filename)
+
+
+def test_config_envvars():
+ os.environ['PYGEOAPI_PORT'] = '5001'
+ os.environ['PYGEOAPI_TITLE'] = 'my title'
+
+ with open(get_test_file_path('pygeoapi-test-config-envvars.yml')) as fh:
+ config = yaml_load(fh)
+
+ assert isinstance(config, dict)
+ assert config['server']['bind']['port'] == 5001
+ assert config['metadata']['identification']['title'] == \
+ 'pygeoapi default instance my title'
+
+ os.environ.pop('PYGEOAPI_PORT')
+
+ with pytest.raises(EnvironmentError):
+ with open(get_test_file_path('pygeoapi-test-config-envvars.yml')) as fh: # noqa
+ config = yaml_load(fh)
diff --git a/tests/test_util.py b/tests/test_util.py
new file mode 100644
index 0000000..f4b8aca
--- /dev/null
+++ b/tests/test_util.py
@@ -0,0 +1,98 @@
+# =================================================================
+#
+# Authors: Tom Kralidis <[email protected]>
+#
+# Copyright (c) 2019 Tom Kralidis
+#
+# Permission is hereby granted, free of charge, to any person
+# obtaining a copy of this software and associated documentation
+# files (the "Software"), to deal in the Software without
+# restriction, including without limitation the rights to use,
+# copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following
+# conditions:
+#
+# The above copyright notice and this permission notice shall be
+# included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+# OTHER DEALINGS IN THE SOFTWARE.
+#
+# =================================================================
+
+from datetime import datetime, date, time
+from decimal import Decimal
+import os
+
+import pytest
+
+from pygeoapi import util
+
+
+def get_test_file_path(filename):
+ """helper function to open test file safely"""
+
+ if os.path.isfile(filename):
+ return filename
+ else:
+ return 'tests/{}'.format(filename)
+
+
+def test_get_typed_value():
+ value = util.get_typed_value('2')
+ assert isinstance(value, int)
+
+ value = util.get_typed_value('1.2')
+ assert isinstance(value, float)
+
+ value = util.get_typed_value('1.c2')
+ assert isinstance(value, str)
+
+
+def test_yaml_load():
+ with open(get_test_file_path('pygeoapi-test-config.yml')) as fh:
+ d = util.yaml_load(fh)
+ assert isinstance(d, dict)
+ with pytest.raises(FileNotFoundError):
+ with open(get_test_file_path('404.yml')) as fh:
+ d = util.yaml_load(fh)
+
+
+def test_str2bool():
+ assert util.str2bool(False) is False
+ assert util.str2bool('0') is False
+ assert util.str2bool('no') is False
+ assert util.str2bool('yes') is True
+ assert util.str2bool('1') is True
+ assert util.str2bool(True) is True
+ assert util.str2bool('true') is True
+ assert util.str2bool('True') is True
+ assert util.str2bool('TRUE') is True
+ assert util.str2bool('tRuE') is True
+ assert util.str2bool('on') is True
+ assert util.str2bool('On') is True
+ assert util.str2bool('off') is False
+
+
+def test_json_serial():
+ d = datetime(1972, 10, 30)
+ assert util.json_serial(d) == '1972-10-30T00:00:00'
+
+ d = date(2010, 7, 31)
+ assert util.json_serial(d) == '2010-07-31'
+
+ d = time(11)
+ assert util.json_serial(d) == '11:00:00'
+
+ d = Decimal(1.0)
+ assert util.json_serial(d) == 1.0
+
+ with pytest.raises(TypeError):
+ util.json_serial('foo')
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_media",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 2
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt",
"requirements-dev.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
Babel==2.14.0
bleach==6.0.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
charset-normalizer==3.4.1
click==8.1.8
coverage==7.2.7
cryptography==44.0.2
docutils==0.17.1
exceptiongroup==1.2.2
flake8==5.0.4
Flask==2.2.5
Flask-Cors==5.0.0
idna==3.10
imagesize==1.4.1
importlib-metadata==4.2.0
iniconfig==2.0.0
itsdangerous==2.1.2
jaraco.classes==3.2.3
jeepney==0.9.0
Jinja2==3.1.6
keyring==23.9.3
markdown-it-py==2.2.0
MarkupSafe==2.1.5
mccabe==0.7.0
mdurl==0.1.2
more-itertools==9.1.0
ndg-httpsclient==0.5.1
packaging==24.0
pkginfo==1.10.0
pluggy==1.2.0
pyasn1==0.5.1
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
-e git+https://github.com/geopython/pygeoapi.git@c8d0f1b5128d13306bf2db147faaea120a53ae16#egg=pygeoapi
Pygments==2.17.2
pyOpenSSL==25.0.0
pytest==7.4.4
pytest-cov==4.1.0
pytest-env==1.0.1
python-dateutil==2.9.0.post0
pytz==2025.2
PyYAML==6.0.1
readme-renderer==37.3
requests==2.31.0
requests-toolbelt==1.0.0
rfc3986==2.0.0
rich==13.8.1
SecretStorage==3.3.3
six==1.17.0
snowballstemmer==2.2.0
Sphinx==4.3.2
sphinx-rtd-theme==1.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
tomli==2.0.1
twine==4.0.2
typing_extensions==4.7.1
unicodecsv==0.14.1
urllib3==2.0.7
webencodings==0.5.1
Werkzeug==2.2.3
zipp==3.15.0
| name: pygeoapi
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- babel==2.14.0
- bleach==6.0.0
- cffi==1.15.1
- charset-normalizer==3.4.1
- click==8.1.8
- coverage==7.2.7
- cryptography==44.0.2
- docutils==0.17.1
- exceptiongroup==1.2.2
- flake8==5.0.4
- flask==2.2.5
- flask-cors==5.0.0
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.2.0
- iniconfig==2.0.0
- itsdangerous==2.1.2
- jaraco-classes==3.2.3
- jeepney==0.9.0
- jinja2==3.1.6
- keyring==23.9.3
- markdown-it-py==2.2.0
- markupsafe==2.1.5
- mccabe==0.7.0
- mdurl==0.1.2
- more-itertools==9.1.0
- ndg-httpsclient==0.5.1
- packaging==24.0
- pkginfo==1.10.0
- pluggy==1.2.0
- pyasn1==0.5.1
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- pygments==2.17.2
- pyopenssl==25.0.0
- pytest==7.4.4
- pytest-cov==4.1.0
- pytest-env==1.0.1
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyyaml==6.0.1
- readme-renderer==37.3
- requests==2.31.0
- requests-toolbelt==1.0.0
- rfc3986==2.0.0
- rich==13.8.1
- secretstorage==3.3.3
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==4.3.2
- sphinx-rtd-theme==1.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jquery==4.1
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- tomli==2.0.1
- twine==4.0.2
- typing-extensions==4.7.1
- unicodecsv==0.14.1
- urllib3==2.0.7
- webencodings==0.5.1
- werkzeug==2.2.3
- zipp==3.15.0
prefix: /opt/conda/envs/pygeoapi
| [
"tests/test_config.py::test_config_envvars",
"tests/test_util.py::test_get_typed_value"
] | [] | [
"tests/test_util.py::test_yaml_load",
"tests/test_util.py::test_str2bool",
"tests/test_util.py::test_json_serial"
] | [] | MIT License | 5,629 | 862 | [
"pygeoapi/api.py",
"pygeoapi/util.py"
] |
|
DataDog__dd-trace-py-1106 | d25109514bf4488a5e7f3042a960f92122fc8f1e | 2019-10-21 17:16:24 | 63b0463482dfad78f7801ef784d709950ff3cea1 | JBKahn: Strange that the tests passed and it also exited with an error code.
brettlangdon: Yeah, there is something going on with our Django tests... but only in CircleCI it seems right now, can sort of ignore that failure :/
JBKahn: 😢 well it otherwise looks pretty good | diff --git a/ddtrace/internal/writer.py b/ddtrace/internal/writer.py
index 64dbe15dd..7ee814c52 100644
--- a/ddtrace/internal/writer.py
+++ b/ddtrace/internal/writer.py
@@ -1,7 +1,6 @@
# stdlib
import itertools
import random
-import os
import time
from .. import api
@@ -33,7 +32,7 @@ class AgentWriter(_worker.PeriodicWorkerThread):
super(AgentWriter, self).__init__(interval=self.QUEUE_PROCESSING_INTERVAL,
exit_timeout=shutdown_timeout,
name=self.__class__.__name__)
- self._reset_queue()
+ self._trace_queue = Q(maxsize=MAX_TRACES)
self._filters = filters
self._priority_sampler = priority_sampler
self._last_error_ts = 0
@@ -43,6 +42,23 @@ class AgentWriter(_worker.PeriodicWorkerThread):
self._stats_rate_counter = 0
self.start()
+ def recreate(self):
+ """ Create a new instance of :class:`AgentWriter` using the same settings from this instance
+
+ :rtype: :class:`AgentWriter`
+ :returns: A new :class:`AgentWriter` instance
+ """
+ return self.__class__(
+ hostname=self.api.hostname,
+ port=self.api.port,
+ uds_path=self.api.uds_path,
+ https=self.api.https,
+ shutdown_timeout=self.exit_timeout,
+ filters=self._filters,
+ priority_sampler=self._priority_sampler,
+ dogstatsd=self.dogstatsd,
+ )
+
def _send_stats(self):
"""Determine if we're sending stats or not.
@@ -62,18 +78,7 @@ class AgentWriter(_worker.PeriodicWorkerThread):
return False
- def _reset_queue(self):
- self._pid = os.getpid()
- self._trace_queue = Q(maxsize=MAX_TRACES)
-
def write(self, spans=None, services=None):
- # if this queue was created in a different process (i.e. this was
- # forked) reset everything so that we can safely work from it.
- pid = os.getpid()
- if self._pid != pid:
- log.debug('resetting queues. pids(old:%s new:%s)', self._pid, pid)
- self._reset_queue()
-
if spans:
self._trace_queue.put(spans)
diff --git a/ddtrace/tracer.py b/ddtrace/tracer.py
index c74f62dd9..0ffed1a6f 100644
--- a/ddtrace/tracer.py
+++ b/ddtrace/tracer.py
@@ -378,8 +378,7 @@ class Tracer(object):
context.add_span(span)
# check for new process if runtime metrics worker has already been started
- if self._runtime_worker:
- self._check_new_process()
+ self._check_new_process()
# update set of services handled by tracer
if service and service not in self._services:
@@ -427,6 +426,9 @@ class Tracer(object):
# and generated a new runtime id
self._update_dogstatsd_constant_tags()
+ # Re-create the background writer thread
+ self.writer = self.writer.recreate()
+
def trace(self, name, service=None, resource=None, span_type=None):
"""
Return a span that will trace an operation called `name`. The context that created
| Django API APM traces disappeared after upgrade from 0.29.0 to 0.30.0
In talking with @brettlangdon we couldn't find a quick solution, but downgrading the library version brought them back. Interestingly enough, even the custom traces we were using within the API disappeared but all the ones that exist outside of Django i.e. workers, are all fine on version 0.30.0 | DataDog/dd-trace-py | diff --git a/tests/test_tracer.py b/tests/test_tracer.py
index 04f72f1d4..ee9e9f6d6 100644
--- a/tests/test_tracer.py
+++ b/tests/test_tracer.py
@@ -1,7 +1,8 @@
"""
tests for Tracer and utilities.
"""
-
+import contextlib
+import multiprocessing
from os import getpid
import sys
import warnings
@@ -588,3 +589,63 @@ def test_tracer_dogstatsd_url():
with pytest.raises(ValueError) as e:
t = ddtrace.Tracer(dogstatsd_url='foo://foobar:12')
assert str(e) == 'Unknown url format for `foo://foobar:12`'
+
+
+def test_tracer_fork():
+ t = ddtrace.Tracer()
+ original_pid = t._pid
+ original_writer = t.writer
+
+ @contextlib.contextmanager
+ def capture_failures(errors):
+ try:
+ yield
+ except AssertionError as e:
+ errors.put(e)
+
+ def task(t, errors):
+ # Start a new span to trigger process checking
+ with t.trace('test', service='test') as span:
+
+ # Assert we recreated the writer and have a new queue
+ with capture_failures(errors):
+ assert t._pid != original_pid
+ assert t.writer != original_writer
+ assert t.writer._trace_queue != original_writer._trace_queue
+
+            # Stop the background worker so we don't accidentally flush the
+ # queue before we can assert on it
+ t.writer.stop()
+ t.writer.join()
+
+ # Assert the trace got written into the correct queue
+ assert original_writer._trace_queue.qsize() == 0
+ assert t.writer._trace_queue.qsize() == 1
+ assert [[span]] == list(t.writer._trace_queue.get())
+
+ # Assert tracer in a new process correctly recreates the writer
+ errors = multiprocessing.Queue()
+ p = multiprocessing.Process(target=task, args=(t, errors))
+ try:
+ p.start()
+ finally:
+ p.join(timeout=2)
+
+ while errors.qsize() > 0:
+ raise errors.get()
+
+ # Ensure writing into the tracer in this process still works as expected
+ with t.trace('test', service='test') as span:
+ assert t._pid == original_pid
+ assert t.writer == original_writer
+ assert t.writer._trace_queue == original_writer._trace_queue
+
+ # Stop the background worker so we don't accidentally flush the
+ # queue before we can assert on it
+ t.writer.stop()
+ t.writer.join()
+
+ # Assert the trace got written into the correct queue
+ assert original_writer._trace_queue.qsize() == 1
+ assert t.writer._trace_queue.qsize() == 1
+ assert [[span]] == list(t.writer._trace_queue.get())
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 0
},
"num_modified_files": 2
} | 0.30 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-mock",
"opentracing",
"mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | coverage==7.8.0
-e git+https://github.com/DataDog/dd-trace-py.git@d25109514bf4488a5e7f3042a960f92122fc8f1e#egg=ddtrace
exceptiongroup==1.2.2
iniconfig==2.1.0
mock==5.2.0
opentracing==2.4.0
packaging==24.2
pluggy==1.5.0
psutil==7.0.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-mock==3.14.0
tomli==2.2.1
| name: dd-trace-py
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.8.0
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- mock==5.2.0
- opentracing==2.4.0
- packaging==24.2
- pluggy==1.5.0
- psutil==7.0.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- tomli==2.2.1
prefix: /opt/conda/envs/dd-trace-py
| [
"tests/test_tracer.py::test_tracer_fork"
] | [] | [
"tests/test_tracer.py::TracerTestCase::test_adding_services",
"tests/test_tracer.py::TracerTestCase::test_configure_dogstatsd_host",
"tests/test_tracer.py::TracerTestCase::test_configure_dogstatsd_host_port",
"tests/test_tracer.py::TracerTestCase::test_configure_dogstatsd_url_host_port",
"tests/test_tracer.py::TracerTestCase::test_configure_dogstatsd_url_socket",
"tests/test_tracer.py::TracerTestCase::test_configure_runtime_worker",
"tests/test_tracer.py::TracerTestCase::test_default_provider_get",
"tests/test_tracer.py::TracerTestCase::test_default_provider_set",
"tests/test_tracer.py::TracerTestCase::test_default_provider_trace",
"tests/test_tracer.py::TracerTestCase::test_global_context",
"tests/test_tracer.py::TracerTestCase::test_only_root_span_runtime",
"tests/test_tracer.py::TracerTestCase::test_span_no_runtime_tags",
"tests/test_tracer.py::TracerTestCase::test_start_child_from_context",
"tests/test_tracer.py::TracerTestCase::test_start_child_span",
"tests/test_tracer.py::TracerTestCase::test_start_child_span_attributes",
"tests/test_tracer.py::TracerTestCase::test_start_span",
"tests/test_tracer.py::TracerTestCase::test_start_span_optional",
"tests/test_tracer.py::TracerTestCase::test_tracer",
"tests/test_tracer.py::TracerTestCase::test_tracer_current_root_span_missing_context",
"tests/test_tracer.py::TracerTestCase::test_tracer_current_span",
"tests/test_tracer.py::TracerTestCase::test_tracer_current_span_missing_context",
"tests/test_tracer.py::TracerTestCase::test_tracer_disabled",
"tests/test_tracer.py::TracerTestCase::test_tracer_disabled_mem_leak",
"tests/test_tracer.py::TracerTestCase::test_tracer_global_tags",
"tests/test_tracer.py::TracerTestCase::test_tracer_pid",
"tests/test_tracer.py::TracerTestCase::test_tracer_vars",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap_class",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap_default_name",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap_exception",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap_factory",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap_factory_nested",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap_multiple_calls",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap_span_nesting",
"tests/test_tracer.py::TracerTestCase::test_tracer_wrap_span_nesting_current_root_span",
"tests/test_tracer.py::test_installed_excepthook",
"tests/test_tracer.py::test_excepthook",
"tests/test_tracer.py::test_tracer_url",
"tests/test_tracer.py::test_tracer_dogstatsd_url"
] | [] | Apache 2.0 or BSD3 | 5,632 | 807 | [
"ddtrace/internal/writer.py",
"ddtrace/tracer.py"
] |
uccser__verto-398 | d748d104310ff8a38208f0acb34b3252070ac180 | 2019-10-23 00:51:48 | d748d104310ff8a38208f0acb34b3252070ac180 | diff --git a/verto/Verto.py b/verto/Verto.py
index a902765..b02acb3 100644
--- a/verto/Verto.py
+++ b/verto/Verto.py
@@ -31,7 +31,7 @@ class Verto(object):
to HTML.
'''
- def __init__(self, processors=DEFAULT_PROCESSORS, html_templates={}, extensions=[], custom_settings={}):
+ def __init__(self, processors=DEFAULT_PROCESSORS, html_templates={}, extensions=[], settings={}):
'''Creates a Verto object.
Args:
@@ -45,12 +45,12 @@ class Verto(object):
eg: {'image': '<img src={{ source }}>'}
extensions: A list of extra extensions to run on the
markdown package.
- custom_settings: A dictionary of settings to override default settings.
+ settings: A dictionary of settings to override default settings.
'''
self.processors = set(processors)
self.html_templates = dict(html_templates)
self.extensions = list(extensions)
- self.custom_settings = custom_settings
+ self.settings = settings
self.create_converter()
def create_converter(self):
@@ -59,7 +59,7 @@ class Verto(object):
processors=self.processors,
html_templates=self.html_templates,
extensions=self.extensions,
- custom_settings=self.custom_settings,
+ settings=self.settings,
)
all_extensions = self.extensions + [self.verto_extension]
self.converter = markdown.Markdown(extensions=all_extensions)
diff --git a/verto/VertoExtension.py b/verto/VertoExtension.py
index d15dada..f7f8abc 100644
--- a/verto/VertoExtension.py
+++ b/verto/VertoExtension.py
@@ -50,7 +50,7 @@ class VertoExtension(Extension):
the Verto converter.
'''
- def __init__(self, processors=[], html_templates={}, extensions=[], custom_settings={}, *args, **kwargs):
+ def __init__(self, processors=[], html_templates={}, extensions=[], settings={}, *args, **kwargs):
'''
Args:
processors: A set of processor names given as strings for which
@@ -62,12 +62,12 @@ class VertoExtension(Extension):
as values.
eg: {'image': '<img src={{ source }}>'}
extensions: A list of extra extensions for compatibility.
- custom_settings: A dictionary of user settings to override defaults.
+ settings: A dictionary of user settings to override defaults.
'''
super().__init__(*args, **kwargs)
self.jinja_templates = self.loadJinjaTemplates(html_templates)
self.processors = processors
- self.settings = self.get_settings(custom_settings)
+ self.settings = self.get_settings(settings)
self.processor_info = self.loadProcessorInfo()
self.title = None
self.heading_tree = None
| Settings parameter is actually 'custom_settings'
The documentation states that the parameter for custom settings is `settings`, but it is actually `custom_settings`. It would be best to change it to use `settings` to be in line with other parameters (we use `html_templates` instead of `custom_html_templates`). | uccser/verto | diff --git a/verto/tests/BlockquoteTest.py b/verto/tests/BlockquoteTest.py
index 6a03cbd..2cf4f2a 100644
--- a/verto/tests/BlockquoteTest.py
+++ b/verto/tests/BlockquoteTest.py
@@ -187,7 +187,7 @@ class BlockquoteTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'source_true.md')
@@ -211,7 +211,7 @@ class BlockquoteTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'source_true_missing_argument.md')
@@ -233,7 +233,7 @@ class BlockquoteTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'alignment_true.md')
@@ -257,7 +257,7 @@ class BlockquoteTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'alignment_true_missing_argument.md')
diff --git a/verto/tests/BoxedTextTest.py b/verto/tests/BoxedTextTest.py
index 5fb2556..f8b22c8 100644
--- a/verto/tests/BoxedTextTest.py
+++ b/verto/tests/BoxedTextTest.py
@@ -132,7 +132,7 @@ class BoxedTextTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'indented_required.md')
@@ -156,7 +156,7 @@ class BoxedTextTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'type_required.md')
@@ -181,7 +181,7 @@ class BoxedTextTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'indented_and_type_required.md')
@@ -205,7 +205,7 @@ class BoxedTextTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'indented_required_not_provided.md')
@@ -228,7 +228,7 @@ class BoxedTextTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'indented_and_type_required_type_not_provided.md')
diff --git a/verto/tests/ButtonLinkTest.py b/verto/tests/ButtonLinkTest.py
index 3b8302c..1fd1f6c 100644
--- a/verto/tests/ButtonLinkTest.py
+++ b/verto/tests/ButtonLinkTest.py
@@ -97,7 +97,7 @@ class ButtonLinkTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'link_false.md')
@@ -121,7 +121,7 @@ class ButtonLinkTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'text_false.md')
@@ -145,7 +145,7 @@ class ButtonLinkTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'file_true.md')
@@ -170,7 +170,7 @@ class ButtonLinkTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'text_false_file_true.md')
@@ -194,7 +194,7 @@ class ButtonLinkTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'file_true_not_provided.md')
diff --git a/verto/tests/ConfigurationTest.py b/verto/tests/ConfigurationTest.py
index 53e89f1..43092c4 100644
--- a/verto/tests/ConfigurationTest.py
+++ b/verto/tests/ConfigurationTest.py
@@ -376,7 +376,7 @@ class ConfigurationTest(BaseTest):
}
}
}
- verto = Verto(custom_settings=settings)
+ verto = Verto(settings=settings)
self.assertEqual(
verto.verto_extension.settings['processor_argument_overrides'],
dict(settings['processor_argument_overrides'])
@@ -395,7 +395,7 @@ class ConfigurationTest(BaseTest):
},
}
}
- verto = Verto(custom_settings=settings)
+ verto = Verto(settings=settings)
self.assertEqual(
verto.verto_extension.settings['processor_argument_overrides'],
dict(settings['processor_argument_overrides'])
@@ -419,7 +419,7 @@ class ConfigurationTest(BaseTest):
}
}
processors = {'image-tag', 'panel', 'comment'}
- verto = VertoExtension(processors=processors, custom_settings=settings)
+ verto = VertoExtension(processors=processors, settings=settings)
test_string = self.read_test_file(self.test_name, 'custom_argument_rules_multiple_tags_error.md')
self.assertRaises(ArgumentMissingError, lambda x: markdown.markdown(x, extensions=[verto]), test_string)
@@ -441,7 +441,7 @@ class ConfigurationTest(BaseTest):
self.assertRaises(
CustomArgumentRulesError,
- lambda: VertoExtension(processors=processors, custom_settings=settings)
+ lambda: VertoExtension(processors=processors, settings=settings)
)
def test_custom_argument_rules_incorrect_processor_argument_error(self):
@@ -461,5 +461,5 @@ class ConfigurationTest(BaseTest):
self.assertRaises(
CustomArgumentRulesError,
- lambda: VertoExtension(processors=processors, custom_settings=settings)
+ lambda: VertoExtension(processors=processors, settings=settings)
)
diff --git a/verto/tests/FrameTest.py b/verto/tests/FrameTest.py
index 561eb41..3ee70f1 100644
--- a/verto/tests/FrameTest.py
+++ b/verto/tests/FrameTest.py
@@ -60,7 +60,7 @@ class FrameTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'link_false.md')
diff --git a/verto/tests/GlossaryLinkTest.py b/verto/tests/GlossaryLinkTest.py
index 740228b..32285ed 100644
--- a/verto/tests/GlossaryLinkTest.py
+++ b/verto/tests/GlossaryLinkTest.py
@@ -207,7 +207,7 @@ class GlossaryLinkTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'reference_text_true_not_provided.md')
diff --git a/verto/tests/ImageContainerTest.py b/verto/tests/ImageContainerTest.py
index e61c784..b349bee 100644
--- a/verto/tests/ImageContainerTest.py
+++ b/verto/tests/ImageContainerTest.py
@@ -460,7 +460,7 @@ class ImageContainerTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'alt_false.md')
@@ -490,7 +490,7 @@ class ImageContainerTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'hover_true.md')
@@ -521,7 +521,7 @@ class ImageContainerTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'alt_false_source_true.md')
@@ -551,7 +551,7 @@ class ImageContainerTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'hover_true_not_provided.md')
diff --git a/verto/tests/ImageInlineTest.py b/verto/tests/ImageInlineTest.py
index 827b649..3aadf57 100644
--- a/verto/tests/ImageInlineTest.py
+++ b/verto/tests/ImageInlineTest.py
@@ -333,7 +333,7 @@ class ImageInlineTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'alt_false.md')
@@ -362,7 +362,7 @@ class ImageInlineTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
@@ -393,7 +393,7 @@ class ImageInlineTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'alt_false_source_true.md')
@@ -422,7 +422,7 @@ class ImageInlineTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'hover_true_not_provided.md')
diff --git a/verto/tests/ImageTagTest.py b/verto/tests/ImageTagTest.py
index ed0deb6..00cc9c3 100644
--- a/verto/tests/ImageTagTest.py
+++ b/verto/tests/ImageTagTest.py
@@ -401,7 +401,7 @@ class ImageTagTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'alt_false.md')
@@ -431,7 +431,7 @@ class ImageTagTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'hover_true.md')
@@ -462,7 +462,7 @@ class ImageTagTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'alt_false_source_true.md')
@@ -493,7 +493,7 @@ class ImageTagTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'hover_true_not_provided.md')
diff --git a/verto/tests/InteractiveContainerTest.py b/verto/tests/InteractiveContainerTest.py
index f601afa..78f69fd 100644
--- a/verto/tests/InteractiveContainerTest.py
+++ b/verto/tests/InteractiveContainerTest.py
@@ -327,7 +327,7 @@ class InteractiveContainerTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'parameters_true.md')
@@ -351,7 +351,7 @@ class InteractiveContainerTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'thumbnail_true.md')
@@ -375,7 +375,7 @@ class InteractiveContainerTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'text_true_not_provided.md')
@@ -398,7 +398,7 @@ class InteractiveContainerTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'parameters_and_thumbnail_true.md')
@@ -426,7 +426,7 @@ class InteractiveContainerTest(ProcessorTest):
'''Test the thumbnail for a whole page interactive is not required when overriden.'''
verto_extension_default_thumbnail_override = VertoExtension(
processors=[self.processor_name],
- custom_settings={
+ settings={
'add_default_interactive_thumbnails_to_required_files': False}
)
test_string = self.read_test_file(self.processor_name, 'whole_page_without_thumbnail_parameter.md')
@@ -452,7 +452,7 @@ class InteractiveContainerTest(ProcessorTest):
'''Test the custom thumbnail for a whole page interactive is not required when overriden.'''
verto_extension_custom_thumbnail_override = VertoExtension(
processors=[self.processor_name],
- custom_settings={
+ settings={
'add_custom_interactive_thumbnails_to_required_files': False}
)
test_string = self.read_test_file(self.processor_name, 'whole_page_with_thumbnail_parameter.md')
diff --git a/verto/tests/InteractiveTagTest.py b/verto/tests/InteractiveTagTest.py
index fc5e5b5..2fd2cbd 100644
--- a/verto/tests/InteractiveTagTest.py
+++ b/verto/tests/InteractiveTagTest.py
@@ -145,7 +145,7 @@ class InteractiveTagTest(ProcessorTest):
'''Test the thumbnail for a whole page interactive is not required when overriden.'''
verto_extension_default_thumbnail_override = VertoExtension(
processors=[self.processor_name],
- custom_settings={'add_default_interactive_thumbnails_to_required_files': False}
+ settings={'add_default_interactive_thumbnails_to_required_files': False}
)
test_string = self.read_test_file(self.processor_name, 'whole_page_without_thumbnail_parameter.md')
converted_test_string = markdown.markdown(test_string, extensions=[verto_extension_default_thumbnail_override])
@@ -170,7 +170,7 @@ class InteractiveTagTest(ProcessorTest):
'''Test the custom thumbnail for a whole page interactive is not required when overriden.'''
verto_extension_custom_thumbnail_override = VertoExtension(
processors=[self.processor_name],
- custom_settings={'add_custom_interactive_thumbnails_to_required_files': False}
+ settings={'add_custom_interactive_thumbnails_to_required_files': False}
)
test_string = self.read_test_file(self.processor_name, 'whole_page_with_thumbnail_parameter.md')
converted_test_string = markdown.markdown(test_string, extensions=[verto_extension_custom_thumbnail_override])
@@ -191,7 +191,7 @@ class InteractiveTagTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'parameters_true.md')
@@ -215,7 +215,7 @@ class InteractiveTagTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
@@ -241,7 +241,7 @@ class InteractiveTagTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'parameters_and_thumbnail_true.md')
diff --git a/verto/tests/PanelTest.py b/verto/tests/PanelTest.py
index 6463ee6..9ca7784 100644
--- a/verto/tests/PanelTest.py
+++ b/verto/tests/PanelTest.py
@@ -380,7 +380,7 @@ class PanelTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'type_false.md')
@@ -404,7 +404,7 @@ class PanelTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'subtitle_true.md')
@@ -428,7 +428,7 @@ class PanelTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'expanded_true.md')
@@ -452,7 +452,7 @@ class PanelTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'subtitle_true_not_provided.md')
diff --git a/verto/tests/VideoTest.py b/verto/tests/VideoTest.py
index 426c167..7354eb5 100644
--- a/verto/tests/VideoTest.py
+++ b/verto/tests/VideoTest.py
@@ -198,7 +198,7 @@ class VideoTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'url_false.md')
@@ -222,7 +222,7 @@ class VideoTest(ProcessorTest):
}
verto_extension_custom_rules = VertoExtension(
processors=[self.processor_name],
- custom_settings=settings
+ settings=settings
)
test_string = self.read_test_file(self.processor_name, 'title_true_not_provided.md')
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 2
} | 0.11 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip3 install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==2.10.1
Markdown==2.6.11
MarkupSafe==2.0.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
python-slugify==3.0.2
text-unidecode==1.2
tomli==1.2.3
typing_extensions==4.1.1
-e git+https://github.com/uccser/verto.git@d748d104310ff8a38208f0acb34b3252070ac180#egg=verto
zipp==3.6.0
| name: verto
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==2.10.1
- markdown==2.6.11
- markupsafe==2.0.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- python-slugify==3.0.2
- text-unidecode==1.2
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/verto
| [
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_custom_arguments_alignment_true",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_custom_arguments_alignment_true_missing_argument",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_custom_arguments_source_true",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_custom_arguments_source_true_missing_argument",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_custom_arguments_indented_and_type_required",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_custom_arguments_indented_and_type_required_type_not_provided",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_custom_arguments_indented_required",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_custom_arguments_indented_required_not_provided",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_custom_arguments_type_required",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_custom_arguments_file_true",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_custom_arguments_file_true_not_provided",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_custom_arguments_link_false",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_custom_arguments_text_false",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_custom_arguments_text_false_file_true",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_argument_rules_for_multiple_tags",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_argument_rules_for_multiple_tags_error",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_argument_rules_incorrect_processor_argument_error",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_argument_rules_incorrect_processor_error",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_arguments_rules_on_creation",
"verto/tests/FrameTest.py::FrameTest::test_custom_argument_rules_link_false",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_custom_arguments_reference_text_true_not_provided",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_custom_arguments_alt_false",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_custom_arguments_alt_false_source_true",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_custom_arguments_hover_true",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_custom_arguments_hover_true_not_provided",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_custom_arguments_alt_false",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_custom_arguments_alt_false_source_true",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_custom_arguments_hover_true",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_custom_arguments_hover_true_not_provided",
"verto/tests/ImageTagTest.py::ImageTagTest::test_custom_arguments_alt_false",
"verto/tests/ImageTagTest.py::ImageTagTest::test_custom_arguments_alt_false_source_true",
"verto/tests/ImageTagTest.py::ImageTagTest::test_custom_arguments_hover_true",
"verto/tests/ImageTagTest.py::ImageTagTest::test_custom_arguments_hover_true_not_provided",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_custom_arguments_parameters_and_thumbnail_true",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_custom_arguments_parameters_true",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_custom_arguments_text_true_not_provided",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_custom_arguments_thumbnail_true",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_custom_thumbnail_not_in_required_files_with_override",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_default_thumbnail_not_in_required_files_with_override",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_custom_arguments_parameters_and_thumbnail_true",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_custom_arguments_parameters_true",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_custom_arguments_thumbnail_true",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_custom_thumbnail_not_in_required_files_with_override",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_default_thumbnail_not_in_required_files_with_override",
"verto/tests/PanelTest.py::PanelTest::test_custom_arguments_expanded_true",
"verto/tests/PanelTest.py::PanelTest::test_custom_arguments_subtitle_true",
"verto/tests/PanelTest.py::PanelTest::test_custom_arguments_subtitle_true_not_provided",
"verto/tests/PanelTest.py::PanelTest::test_custom_arguments_type_false",
"verto/tests/VideoTest.py::VideoTest::test_custom_argument_rules_title_true_not_provided",
"verto/tests/VideoTest.py::VideoTest::test_url_false_custom_argument_rules"
] | [] | [
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_doc_example_basic",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_doc_example_override_html",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_footer",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_footer_false",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_footer_invalid_prefix",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_footer_missing_content",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_footer_no_content",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_footer_with_link",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_footer_with_markdown_formatting",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_footer_with_multiple_dash_prefix",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_missing_end_tag",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_missing_start_tag",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_multiple_blockquotes",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_no_footer",
"verto/tests/BlockquoteTest.py::BlockquoteTest::test_parses_blank",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_boxed_text_type",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_doc_example_basic",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_doc_example_override_html",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_indented_and_type_boxed_text",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_indented_boxed_text",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_indented_value_no",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_multiple_boxed_text",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_no_boxed_text",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_recursive_boxed_text",
"verto/tests/BoxedTextTest.py::BoxedTextTest::test_single_boxed_text",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_contains_button",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_contains_file_link_button",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_contains_missing_button",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_contains_multiple_buttons",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_doc_example_basic",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_doc_example_file",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_doc_example_override_html",
"verto/tests/ButtonLinkTest.py::ButtonLinkTest::test_no_button",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_processors_after_creation",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_processors_and_custom_templates_after_creation",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_processors_and_custom_templates_on_creation",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_processors_on_creation",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_templates_after_creation",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_custom_templates_on_creation",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_default_processors_on_creation",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_multiline_custom_templates",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_multiple_calls",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_multiple_calls_without_clearing",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_reset_templates_after_custom",
"verto/tests/ConfigurationTest.py::ConfigurationTest::test_unique_custom_processors",
"verto/tests/FrameTest.py::FrameTest::test_doc_example_basic",
"verto/tests/FrameTest.py::FrameTest::test_doc_example_override_html",
"verto/tests/FrameTest.py::FrameTest::test_example_no_link",
"verto/tests/FrameTest.py::FrameTest::test_example_single_quote_argument_error",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_custom_arguments_reference_text_true",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_doc_example_basic",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_doc_example_override_html",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_leading_and_trailing_inline_text",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_leading_inline_text",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_multiple_reference_text",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_multiple_terms",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_multiple_word_term",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_reference_text_given",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_single_word_term",
"verto/tests/GlossaryLinkTest.py::GlossaryLinkTest::test_trailing_inline_text",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_align_center",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_align_left",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_align_right",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_align_undefined_error",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_alt_hover_caption",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_caption_false",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_caption_true",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_caption_true_missing_end_tag",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_caption_true_not_provided",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_caption_true_not_provided_numbered_list",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_caption_true_numbered_list_missing_end_tag",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_contains_alt",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_contains_caption_link",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_contains_hover_text",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_contains_multiple_images_some_captions",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_contains_source",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_doc_example_2_override_html",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_doc_example_basic",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_doc_example_override_html",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_external_image",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_image_in_image_tag",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_image_in_numbered_list",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_image_invalid_width_value_1",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_image_invalid_width_value_2",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_image_width_value",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_image_width_value_external_image",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_image_width_value_no_units",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_missing_alt_parameter",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_multiple_images_captions_true",
"verto/tests/ImageContainerTest.py::ImageContainerTest::test_no_caption",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_argument_caption",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_argument_caption_link",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_argument_hover_text",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_argument_source_link",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_basic_usage",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_doc_example_override_html",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_external_image",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_image_invalid_width_value_1",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_image_invalid_width_value_2",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_image_width_value",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_image_width_value_external_image",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_image_width_value_no_units",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_internal_image",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_missing_argument_alt",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_multiple_internal_images",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_numbered_list",
"verto/tests/ImageInlineTest.py::ImageInlineTest::test_table_embed",
"verto/tests/ImageTagTest.py::ImageTagTest::test_align_center",
"verto/tests/ImageTagTest.py::ImageTagTest::test_align_left",
"verto/tests/ImageTagTest.py::ImageTagTest::test_align_right",
"verto/tests/ImageTagTest.py::ImageTagTest::test_align_undefined_error",
"verto/tests/ImageTagTest.py::ImageTagTest::test_alt_hover_caption_false",
"verto/tests/ImageTagTest.py::ImageTagTest::test_caption",
"verto/tests/ImageTagTest.py::ImageTagTest::test_caption_false",
"verto/tests/ImageTagTest.py::ImageTagTest::test_caption_link_error",
"verto/tests/ImageTagTest.py::ImageTagTest::test_contains_hover_text",
"verto/tests/ImageTagTest.py::ImageTagTest::test_contains_source",
"verto/tests/ImageTagTest.py::ImageTagTest::test_doc_example_2_override_html",
"verto/tests/ImageTagTest.py::ImageTagTest::test_doc_example_basic",
"verto/tests/ImageTagTest.py::ImageTagTest::test_doc_example_override_html",
"verto/tests/ImageTagTest.py::ImageTagTest::test_external_image",
"verto/tests/ImageTagTest.py::ImageTagTest::test_image_in_numbered_list",
"verto/tests/ImageTagTest.py::ImageTagTest::test_image_invalid_width_value_1",
"verto/tests/ImageTagTest.py::ImageTagTest::test_image_invalid_width_value_2",
"verto/tests/ImageTagTest.py::ImageTagTest::test_image_width_value",
"verto/tests/ImageTagTest.py::ImageTagTest::test_image_width_value_external_image",
"verto/tests/ImageTagTest.py::ImageTagTest::test_image_width_value_no_units",
"verto/tests/ImageTagTest.py::ImageTagTest::test_invalid_caption_parameter",
"verto/tests/ImageTagTest.py::ImageTagTest::test_missing_alt_parameter",
"verto/tests/ImageTagTest.py::ImageTagTest::test_multiple_images_captions_false",
"verto/tests/ImageTagTest.py::ImageTagTest::test_no_caption",
"verto/tests/ImageTagTest.py::ImageTagTest::test_source_hover_no_caption",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_contains_multiple_interactives_some_text",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_custom_thumbnail_in_required_files",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_default_thumbnail_in_required_files",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_doc_example_override_html",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_doc_example_whole_page",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_iframe",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_interactive_in_interactive_tag",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_invalid_type",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_missing_type",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_multiple_interactives",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_multiple_interactives_text_true",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_no_text",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_text_false",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_text_true_missing_end_tag",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_text_true_not_provided",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_whole_page_external_thumbnail",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_whole_page_parameters",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_whole_page_thumbnail",
"verto/tests/InteractiveContainerTest.py::InteractiveContainerTest::test_whole_page_thumbnail_parameters",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_custom_thumbnail_in_required_files",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_default_thumbnail_in_required_files",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_doc_example_iframe",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_doc_example_in_page",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_doc_example_override_html",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_iframe_parameters",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_in_page_missing_name",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_invalid_type",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_missing_type",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_multiple_interactives",
"verto/tests/InteractiveTagTest.py::InteractiveTagTest::test_whole_page_text",
"verto/tests/PanelTest.py::PanelTest::test_contains_inner_image",
"verto/tests/PanelTest.py::PanelTest::test_contains_inner_panel",
"verto/tests/PanelTest.py::PanelTest::test_contains_inner_panel_missing_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_contains_multiple_panels",
"verto/tests/PanelTest.py::PanelTest::test_doc_example_basic",
"verto/tests/PanelTest.py::PanelTest::test_doc_example_override_html",
"verto/tests/PanelTest.py::PanelTest::test_heading_incorrect_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_heading_invalid_subtitle_argument",
"verto/tests/PanelTest.py::PanelTest::test_heading_missing_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_heading_no_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_heading_subtitle_false",
"verto/tests/PanelTest.py::PanelTest::test_heading_subtitle_false_h2_heading_in_panel",
"verto/tests/PanelTest.py::PanelTest::test_heading_subtitle_no_whitespace",
"verto/tests/PanelTest.py::PanelTest::test_heading_with_punctuation",
"verto/tests/PanelTest.py::PanelTest::test_heading_with_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_heading_with_subtitle_h2_heading_in_panel",
"verto/tests/PanelTest.py::PanelTest::test_heading_with_subtitle_with_punctuation",
"verto/tests/PanelTest.py::PanelTest::test_incorrect_heading_no_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_incorrect_heading_with_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_missing_end_tag",
"verto/tests/PanelTest.py::PanelTest::test_missing_heading_missing_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_missing_heading_with_subtitle",
"verto/tests/PanelTest.py::PanelTest::test_missing_start_tag",
"verto/tests/PanelTest.py::PanelTest::test_missing_tag_inner",
"verto/tests/PanelTest.py::PanelTest::test_panel_block_missing_whitespace",
"verto/tests/PanelTest.py::PanelTest::test_panel_in_numbered_list",
"verto/tests/PanelTest.py::PanelTest::test_panel_only_in_numbered_list",
"verto/tests/PanelTest.py::PanelTest::test_parses_always_expanded_panel",
"verto/tests/PanelTest.py::PanelTest::test_parses_blank",
"verto/tests/PanelTest.py::PanelTest::test_parses_blank_lines_multiple_paragraphs",
"verto/tests/PanelTest.py::PanelTest::test_parses_expanded_panel",
"verto/tests/PanelTest.py::PanelTest::test_parses_no_blank_lines_single_paragraph",
"verto/tests/VideoTest.py::VideoTest::test_contains_multiple_videos",
"verto/tests/VideoTest.py::VideoTest::test_contains_no_video",
"verto/tests/VideoTest.py::VideoTest::test_contains_title",
"verto/tests/VideoTest.py::VideoTest::test_doc_example_basic",
"verto/tests/VideoTest.py::VideoTest::test_doc_example_override_html",
"verto/tests/VideoTest.py::VideoTest::test_missing_identifier",
"verto/tests/VideoTest.py::VideoTest::test_multiple_vimeo_links",
"verto/tests/VideoTest.py::VideoTest::test_multiple_youtube_links",
"verto/tests/VideoTest.py::VideoTest::test_unsupported_video_type",
"verto/tests/VideoTest.py::VideoTest::test_vimeo_link",
"verto/tests/VideoTest.py::VideoTest::test_vimeo_player_link",
"verto/tests/VideoTest.py::VideoTest::test_youtube_and_vimeo_links",
"verto/tests/VideoTest.py::VideoTest::test_youtube_be_link",
"verto/tests/VideoTest.py::VideoTest::test_youtube_embed_link",
"verto/tests/VideoTest.py::VideoTest::test_youtube_watch_link"
] | [] | MIT License | 5,638 | 664 | [
"verto/Verto.py",
"verto/VertoExtension.py"
] |
|
PythonCharmers__python-future-519 | a3e303fa5d3722da43c52229fbe12085e15ac26e | 2019-10-23 13:40:07 | 4657ad23de79a541ebcc7a06f1b9ad60172ad3c4 | diff --git a/src/libfuturize/fixes/__init__.py b/src/libfuturize/fixes/__init__.py
index 7de304d..0b56250 100644
--- a/src/libfuturize/fixes/__init__.py
+++ b/src/libfuturize/fixes/__init__.py
@@ -50,7 +50,7 @@ lib2to3_fix_names_stage2 = set([
'lib2to3.fixes.fix_getcwdu',
# 'lib2to3.fixes.fix_imports', # called by libfuturize.fixes.fix_future_standard_library
# 'lib2to3.fixes.fix_imports2', # we don't handle this yet (dbm)
- 'lib2to3.fixes.fix_input',
+ # 'lib2to3.fixes.fix_input', # Called conditionally by libfuturize.fixes.fix_input
'lib2to3.fixes.fix_itertools',
'lib2to3.fixes.fix_itertools_imports',
'lib2to3.fixes.fix_filter',
@@ -86,6 +86,7 @@ libfuturize_fix_names_stage2 = set([
'libfuturize.fixes.fix_future_builtins',
'libfuturize.fixes.fix_future_standard_library',
'libfuturize.fixes.fix_future_standard_library_urllib',
+ 'libfuturize.fixes.fix_input',
'libfuturize.fixes.fix_metaclass',
'libpasteurize.fixes.fix_newstyle',
'libfuturize.fixes.fix_object',
diff --git a/src/libfuturize/fixes/fix_input.py b/src/libfuturize/fixes/fix_input.py
new file mode 100644
index 0000000..8a43882
--- /dev/null
+++ b/src/libfuturize/fixes/fix_input.py
@@ -0,0 +1,32 @@
+"""
+Fixer for input.
+
+Does a check for `from builtins import input` before running the lib2to3 fixer.
+The fixer will not run when the input is already present.
+
+
+this:
+ a = input()
+becomes:
+ from builtins import input
+ a = eval(input())
+
+and this:
+ from builtins import input
+ a = input()
+becomes (no change):
+ from builtins import input
+ a = input()
+"""
+
+import lib2to3.fixes.fix_input
+from lib2to3.fixer_util import does_tree_import
+
+
+class FixInput(lib2to3.fixes.fix_input.FixInput):
+ def transform(self, node, results):
+
+ if does_tree_import('builtins', 'input', node):
+ return
+
+ return super(FixInput, self).transform(node, results)
| raw_input breaking with multiple futurize passes
Let's start with a python file (test.py) containing a single line:
`test = raw_input("Test: ")`
Pass 1: Running **futurize.exe test.py --both-stages -w** converts this to:
```
from builtins import input
test = input("Input: ")
```
Pass 2: Running **futurize.exe test.py --both-stages -w** again converts this to:
```
from builtins import input
test = eval(input("Input: "))
```
The second pass is incorrect. The presence of "from builtins import input" should indicate that the bare input has already been corrected.
Note that **--both-stages** can be replaced with **--fix lib2to3.fixes.fix_input --fix lib2to3.fixes.fix_raw_input --fix libfuturize.fixes.fix_future_builtins**.
| PythonCharmers/python-future | diff --git a/tests/test_future/test_futurize.py b/tests/test_future/test_futurize.py
index f220114..0d7c42d 100644
--- a/tests/test_future/test_futurize.py
+++ b/tests/test_future/test_futurize.py
@@ -436,6 +436,27 @@ class TestFuturizeSimple(CodeHandler):
"""
self.convert_check(before, after, ignore_imports=False, run=False)
+ def test_input_without_import(self):
+ before = """
+ a = input()
+ """
+ after = """
+ from builtins import input
+ a = eval(input())
+ """
+ self.convert_check(before, after, ignore_imports=False, run=False)
+
+ def test_input_with_import(self):
+ before = """
+ from builtins import input
+ a = input()
+ """
+ after = """
+ from builtins import input
+ a = input()
+ """
+ self.convert_check(before, after, ignore_imports=False, run=False)
+
def test_xrange(self):
"""
The ``from builtins import range`` line was being added to the
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_added_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 0.18 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"unittest2"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
exceptiongroup==1.2.2
-e git+https://github.com/PythonCharmers/python-future.git@a3e303fa5d3722da43c52229fbe12085e15ac26e#egg=future
importlib-metadata==6.7.0
iniconfig==2.0.0
linecache2==1.0.0
packaging==24.0
pluggy==1.2.0
pytest==7.4.4
six==1.17.0
tomli==2.0.1
traceback2==1.4.0
typing_extensions==4.7.1
unittest2==1.1.0
zipp==3.15.0
| name: python-future
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- argparse==1.4.0
- exceptiongroup==1.2.2
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- linecache2==1.0.0
- packaging==24.0
- pluggy==1.2.0
- pytest==7.4.4
- six==1.17.0
- tomli==2.0.1
- traceback2==1.4.0
- typing-extensions==4.7.1
- unittest2==1.1.0
- zipp==3.15.0
prefix: /opt/conda/envs/python-future
| [
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_input_with_import"
] | [] | [
"tests/test_future/test_futurize.py::TestLibFuturize::test_correct_exit_status",
"tests/test_future/test_futurize.py::TestLibFuturize::test_is_encoding_comment",
"tests/test_future/test_futurize.py::TestLibFuturize::test_is_shebang_comment",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_UserList",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_already_future_division",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_apply",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_cmp",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_division",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_encoding_comments_kept_at_top",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_exception_syntax",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_execfile",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_import_builtins",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_input_without_import",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_literal_prefixes_are_not_stripped",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_multiline_future_import",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_oldstyle_classes",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_oldstyle_classes_iterator",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_raw_input",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_shebang_blank_with_future_division_import",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_shebang_blank_with_print_import",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_shebang_comment",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_shebang_docstring",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_source_coding_utf8",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_super",
"tests/test_future/test_futurize.py::TestFuturizeSimple::test_xrange",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test___future___import_position",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_apply",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_basestring",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_basestring_issue_156",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_exceptions",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_issue_45",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_long_int_literals",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_next_1",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_octal_literals",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_oldstyle_classes",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_order_future_lines",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_print",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_print_already_function",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_range_necessary_list_calls",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_safe_division",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_safe_division_overloaded",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_safe_futurize_imports",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_stdlib_modules_not_changed",
"tests/test_future/test_futurize.py::TestFuturizeStage1::test_xrange",
"tests/test_future/test_futurize.py::TestFuturizeAllImports::test_all_imports"
] | [] | MIT License | 5,641 | 676 | [
"src/libfuturize/fixes/__init__.py"
] |
|
andreroggeri__pynubank-79 | 4e3c7ca641da2b557137c649911e222e752eea1c | 2019-10-27 17:50:13 | 0f3a907d943a77e21ad78af1fee651e40efa723d | andreroggeri: Nice catch ! Would you mind adding a new test to catch this scenario ?
coveralls:
[](https://coveralls.io/builds/26660815)
Coverage remained the same at 100.0% when pulling **e59c3d60fe03e142740a92fb3b3f2b97fc8c5928 on marcoaaguiar:master** into **4e3c7ca641da2b557137c649911e222e752eea1c on andreroggeri:master**.
marcoaaguiar: I don't have experience with pytest but created a few tests to cover the attributes of NuException. Could you check if they are correct? | diff --git a/pynubank/nubank.py b/pynubank/nubank.py
index d38a582..f7174db 100644
--- a/pynubank/nubank.py
+++ b/pynubank/nubank.py
@@ -18,9 +18,8 @@ PAYMENT_EVENT_TYPES = (
class NuException(Exception):
-
def __init__(self, status_code, response, url):
- super().__init__()
+ super().__init__(f'The request made failed with HTTP status code {status_code}')
self.url = url
self.status_code = status_code
self.response = response
@@ -81,8 +80,7 @@ class Nubank:
def _handle_response(self, response: Response) -> dict:
if response.status_code != 200:
- raise NuException(f'The request made failed with HTTP status code {response.status_code}',
- response.status_code, response.json())
+ raise NuException(response.status_code, response.json(), response.url)
return response.json()
| NuException does not show which status code I actually got
exception looks like this:
```
Traceback (most recent call last):
File "main.py", line 11, in <module>
nu.authenticate_with_qr_code(os.environ['NU_CPF'], os.environ['NU_PWD'], uuid)
File "/usr/local/lib/python3.8/site-packages/pynubank/nubank.py", line 96, in authenticate_with_qr_code
auth_data = self._password_auth(cpf, password)
File "/usr/local/lib/python3.8/site-packages/pynubank/nubank.py", line 79, in _password_auth
data = self._handle_response(response)
File "/usr/local/lib/python3.8/site-packages/pynubank/nubank.py", line 84, in _handle_response
raise NuException(f'The request made failed with HTTP status code {response.status_code}',
pynubank.nubank.NuException
make: *** [run] Error 1
```
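A minimal standalone sketch of the fix eventually applied: format the status code into the message passed to `Exception.__init__`, so `str(exc)` (and therefore the last traceback line) carries it. The class body here is trimmed to the exception alone; the surrounding client is omitted.

```python
# Sketch of the fix: put the status code in the exception message itself.
class NuException(Exception):
    def __init__(self, status_code, response, url):
        super().__init__(
            'The request made failed with HTTP status code {}'.format(status_code)
        )
        self.url = url
        self.status_code = status_code
        self.response = response


try:
    raise NuException(422, {'error': 'unprocessable'}, 'https://example.invalid/token')
except NuException as exc:
    print(exc)              # The request made failed with HTTP status code 422
    print(exc.status_code)  # 422
```

The attributes (`status_code`, `response`, `url`) stay available for programmatic handling while the message becomes self-describing.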
it's probably a 422, but I think it should show it here... | andreroggeri/pynubank | diff --git a/requirements-test.txt b/requirements-test.txt
index 39f5d78..a64d515 100644
--- a/requirements-test.txt
+++ b/requirements-test.txt
@@ -1,4 +1,4 @@
-r requirements.txt
-pytest==5.0.1
+pytest==5.2.2
pytest-cov==2.7.1
coveralls==1.8.1
diff --git a/tests/test_nubank_client.py b/tests/test_nubank_client.py
index ba1a5ba..83c6a46 100644
--- a/tests/test_nubank_client.py
+++ b/tests/test_nubank_client.py
@@ -684,7 +684,7 @@ def test_get_qr_code():
assert isinstance(qr, QRCode)
[email protected]("http_status", [
[email protected]('http_status', [
100, 101, 102, 103,
201, 202, 203, 204, 205, 206, 207, 208, 226,
300, 301, 302, 303, 304, 305, 306, 307, 308,
@@ -699,3 +699,43 @@ def test_nubank_request_handler_throws_exception_on_status_different_of_200(http
client = Nubank()
with pytest.raises(NuException):
client._handle_response(response)
+
+
[email protected](Nubank, '_update_proxy_urls', fake_update_proxy)
+def test_nubank_request_handler_throws_exception_with_status_code_attribute():
+ http_status = 400
+ response = create_fake_response({}, http_status)
+ client = Nubank()
+ with pytest.raises(NuException) as exception_info:
+ client._handle_response(response)
+
+ assert exception_info.value.status_code == http_status
+
+
[email protected](Nubank, '_update_proxy_urls', fake_update_proxy)
+def test_nubank_request_handler_throws_exception_status_code_in_the_exception_message():
+ http_status = 400
+ response = create_fake_response({}, http_status)
+ client = Nubank()
+ with pytest.raises(NuException, match=fr'.*{http_status}.*'):
+ client._handle_response(response)
+
+
[email protected](Nubank, '_update_proxy_urls', fake_update_proxy)
+def test_nubank_request_handler_throws_exception_with_url_attribute():
+ response = create_fake_response({}, 400)
+ client = Nubank()
+ with pytest.raises(NuException) as exception_info:
+ client._handle_response(response)
+
+ assert exception_info.value.url == response.url
+
+
[email protected](Nubank, '_update_proxy_urls', fake_update_proxy)
+def test_nubank_request_handler_throws_exception_with_response_attribute():
+ response = create_fake_response({}, 400)
+ client = Nubank()
+ with pytest.raises(NuException) as exception_info:
+ client._handle_response(response)
+
+ assert exception_info.value.response == response.json()
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 1
} | 1.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"coveralls"
],
"pre_install": null,
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi==2025.1.31
chardet==3.0.4
coverage==7.8.0
coveralls==4.0.1
docopt==0.6.2
exceptiongroup==1.2.2
idna==2.8
iniconfig==2.1.0
packaging==24.2
pluggy==1.5.0
-e git+https://github.com/andreroggeri/pynubank.git@4e3c7ca641da2b557137c649911e222e752eea1c#egg=pynubank
pytest==8.3.5
pytest-cov==6.0.0
qrcode==6.1
requests==2.22.0
six==1.17.0
tomli==2.2.1
urllib3==1.25.11
| name: pynubank
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- certifi==2025.1.31
- chardet==3.0.4
- coverage==7.8.0
- coveralls==4.0.1
- docopt==0.6.2
- exceptiongroup==1.2.2
- idna==2.8
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- pytest-cov==6.0.0
- qrcode==6.1
- requests==2.22.0
- six==1.17.0
- tomli==2.2.1
- urllib3==1.25.11
prefix: /opt/conda/envs/pynubank
| [
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_with_status_code_attribute",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_status_code_in_the_exception_message",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_with_url_attribute",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_with_response_attribute"
] | [] | [
"tests/test_nubank_client.py::test_authenticate_with_qr_code_succeeds",
"tests/test_nubank_client.py::test_authentication_with_qr_code_failure_raise_exception",
"tests/test_nubank_client.py::test_get_card_feed",
"tests/test_nubank_client.py::test_get_bills",
"tests/test_nubank_client.py::test_get_bill_details",
"tests/test_nubank_client.py::test_get_card_statements",
"tests/test_nubank_client.py::test_get_account_balance",
"tests/test_nubank_client.py::test_get_account_feed",
"tests/test_nubank_client.py::test_get_account_statements",
"tests/test_nubank_client.py::test_grapql_query_raises_exeption",
"tests/test_nubank_client.py::test_get_qr_code",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[100]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[101]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[102]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[103]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[201]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[202]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[203]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[204]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[205]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[206]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[207]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[208]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[226]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[300]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[301]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[302]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[303]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[304]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[305]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[306]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[307]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[308]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[400]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[401]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[402]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[403]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[404]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[405]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[406]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[407]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[408]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[409]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[410]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[411]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[412]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[413]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[414]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[415]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[416]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[417]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[418]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[420]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[421]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[422]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[423]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[424]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[426]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[428]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[429]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[431]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[440]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[444]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[449]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[450]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[451]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[495]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[496]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[497]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[498]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[499]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[500]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[501]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[502]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[503]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[504]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[505]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[506]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[507]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[508]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[509]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[510]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[511]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[520]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[521]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[522]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[523]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[524]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[525]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[526]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[527]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[530]",
"tests/test_nubank_client.py::test_nubank_request_handler_throws_exception_on_status_different_of_200[598]"
] | [] | MIT License | 5,672 | 246 | [
"pynubank/nubank.py"
] |
Erotemic__ubelt-64 | 89085f148ace79aa0dfdc17383423e3dfe138017 | 2019-10-27 22:57:44 | 89085f148ace79aa0dfdc17383423e3dfe138017 | diff --git a/ubelt/util_path.py b/ubelt/util_path.py
index c028ba2..5241209 100644
--- a/ubelt/util_path.py
+++ b/ubelt/util_path.py
@@ -291,9 +291,9 @@ def ensuredir(dpath, mode=0o1777, verbose=None, recreate=False):
>>> assert exists(dpath)
>>> os.rmdir(dpath)
"""
- if verbose is None: # nocover
+ if verbose is None:
verbose = 0
- if isinstance(dpath, (list, tuple)): # nocover
+ if isinstance(dpath, (list, tuple)):
dpath = join(*dpath)
if recreate:
@@ -301,15 +301,15 @@ def ensuredir(dpath, mode=0o1777, verbose=None, recreate=False):
ub.delete(dpath, verbose=verbose)
if not exists(dpath):
- if verbose: # nocover
- print('Ensuring new directory (%r)' % dpath)
+ if verbose:
+ print('Ensuring directory (creating {!r})'.format(dpath))
if sys.version_info.major == 2: # nocover
os.makedirs(normpath(dpath), mode=mode)
else:
os.makedirs(normpath(dpath), mode=mode, exist_ok=True)
else:
- if verbose: # nocover
- print('Ensuring existing directory (%r)' % dpath)
+ if verbose:
+ print('Ensuring directory (existing {!r})'.format(dpath))
return dpath
 | Request: ensuredir verbose mentions if it created a new directory
It would be nice if the verbose mode actually said something like
Ensuring directory ('...') status: existing
Ensuring directory ('...') status: creating
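The requested behaviour can be sketched roughly like this (signature simplified from the real `ub.ensuredir`, which also accepts `mode` and `recreate`; the message wording is illustrative):

```python
import os
import tempfile

def ensuredir(dpath, verbose=0):
    """Rough sketch: report whether the directory was created or already there."""
    if not os.path.exists(dpath):
        if verbose:
            print('Ensuring directory (creating {!r})'.format(dpath))
        os.makedirs(dpath, exist_ok=True)
    else:
        if verbose:
            print('Ensuring directory (existing {!r})'.format(dpath))
    return dpath

demo = os.path.join(tempfile.mkdtemp(), 'foo')
ensuredir(demo, verbose=1)  # prints the "creating" variant
ensuredir(demo, verbose=1)  # prints the "existing" variant
```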
Maybe have a second verbose level if you want to maintain the original way. | Erotemic/ubelt | diff --git a/tests/test_path.py b/tests/test_path.py
index 49ea0c8..fafadd2 100644
--- a/tests/test_path.py
+++ b/tests/test_path.py
@@ -55,8 +55,31 @@ def test_augpath_dpath():
def test_ensuredir_recreate():
- ub.ensuredir('foo', recreate=True)
- ub.ensuredir('foo/bar')
- assert exists('foo/bar')
- ub.ensuredir('foo', recreate=True)
- assert not exists('foo/bar')
+ base = ub.ensure_app_cache_dir('ubelt/tests')
+ folder = join(base, 'foo')
+ member = join(folder, 'bar')
+ ub.ensuredir(folder, recreate=True)
+ ub.ensuredir(member)
+ assert exists(member)
+ ub.ensuredir(folder, recreate=True)
+ assert not exists(member)
+
+
+def test_ensuredir_verbosity():
+ base = ub.ensure_app_cache_dir('ubelt/tests')
+
+ with ub.CaptureStdout() as cap:
+ ub.ensuredir(join(base, 'foo'), verbose=0)
+ assert cap.text == ''
+ # None defaults to verbose=0
+ with ub.CaptureStdout() as cap:
+ ub.ensuredir((base, 'foo'), verbose=None)
+ assert cap.text == ''
+
+ ub.delete(join(base, 'foo'))
+ with ub.CaptureStdout() as cap:
+ ub.ensuredir(join(base, 'foo'), verbose=1)
+ assert 'creating' in cap.text
+ with ub.CaptureStdout() as cap:
+ ub.ensuredir(join(base, 'foo'), verbose=1)
+ assert 'existing' in cap.text
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 1
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements/runtime.txt",
"requirements/tests.txt",
"requirements/optional.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
codecov==2.1.13
coverage==6.2
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
numpy==1.19.5
ordered-set==4.0.2
packaging==21.3
pluggy==1.0.0
progiter==2.0.0
py==1.11.0
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
requests==2.27.1
six==1.17.0
timerit==1.1.0
tomli==1.2.3
typing_extensions==4.1.1
-e git+https://github.com/Erotemic/ubelt.git@89085f148ace79aa0dfdc17383423e3dfe138017#egg=ubelt
urllib3==1.26.20
xdoctest==1.1.6
xxhash==3.2.0
zipp==3.6.0
| name: ubelt
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- codecov==2.1.13
- coverage==6.2
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- numpy==1.19.5
- ordered-set==4.0.2
- packaging==21.3
- pluggy==1.0.0
- progiter==2.0.0
- py==1.11.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- requests==2.27.1
- six==1.17.0
- timerit==1.1.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- xdoctest==1.1.6
- xxhash==3.2.0
- zipp==3.6.0
prefix: /opt/conda/envs/ubelt
| [
"tests/test_path.py::test_ensuredir_verbosity"
] | [] | [
"tests/test_path.py::test_tempdir",
"tests/test_path.py::test_augpath_identity",
"tests/test_path.py::test_augpath_dpath",
"tests/test_path.py::test_ensuredir_recreate"
] | [] | Apache License 2.0 | 5,677 | 383 | [
"ubelt/util_path.py"
] |
|
DataDog__dd-trace-py-1116 | 63b0463482dfad78f7801ef784d709950ff3cea1 | 2019-10-28 15:14:30 | 63b0463482dfad78f7801ef784d709950ff3cea1 | brettlangdon: Failing tests are segfaults with sqlalchemy, not related to any changes in this PR | diff --git a/ddtrace/internal/logger.py b/ddtrace/internal/logger.py
index f14037cc9..6b9930531 100644
--- a/ddtrace/internal/logger.py
+++ b/ddtrace/internal/logger.py
@@ -110,7 +110,8 @@ class DDLogger(logging.Logger):
if logging_bucket.bucket != current_bucket:
# Append count of skipped messages if we have skipped some since our last logging
if logging_bucket.skipped:
- record.msg = '{}, {} additional messages skipped'.format(record.msg, logging_bucket.skipped)
+ record.msg = '{}, %s additional messages skipped'.format(record.msg)
+ record.args = record.args + (logging_bucket.skipped, )
# Reset our bucket
self.buckets[key] = DDLogger.LoggingBucket(current_bucket, 0)
| DDLogger rewrites LogRecord.msg, which causes Sentry events duplication
Sentry uses `LogRecord.msg` to identify log events. LogRecord.msg is the log message template, to be formatted on demand.
When rewriting `msg`, one should not enrich it with arbitrary values, like `logging_bucket.skipped`.
The line
```
record.msg = '{}, {} additional messages skipped'.format(record.msg, logging_bucket.skipped)
```
should be something like:
```
record.msg = '{}, %s additional messages skipped'.format(record.msg)
record.args = record.args + (logging_bucket.skipped,)
```
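The difference can be reproduced with the stdlib `logging.LogRecord` alone: extending the `%s` *template* and deferring formatting keeps `record.msg` a stable grouping key for tools like Sentry, while `getMessage()` still renders the skipped count. The values here mirror the test added for this change; the record construction itself is illustrative.

```python
import logging

# Build a record the way DDLogger receives one: a %s template plus args.
record = logging.LogRecord(
    name='test.logger', level=logging.WARNING, pathname='f.py', lineno=1,
    msg='hello %s', args=(1,), exc_info=None,
)

skipped = 20
# Fix from this issue: extend the template and append to record.args,
# instead of baking the count into record.msg with str.format().
record.msg = '{}, %s additional messages skipped'.format(record.msg)
record.args = record.args + (skipped,)

print(record.msg)           # hello %s, %s additional messages skipped
print(record.getMessage())  # hello 1, 20 additional messages skipped
```

Because `record.msg` now varies only with the call site (not with the runtime count), every occurrence of the same log line hashes to the same Sentry event.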
Culprit:
https://github.com/DataDog/dd-trace-py/blob/914cbca4ba5ec53ff17cb67164cb51b7bcd91ac2/ddtrace/internal/logger.py#L113
Example of message duplication:

| DataDog/dd-trace-py | diff --git a/tests/internal/test_logger.py b/tests/internal/test_logger.py
index 2148da32d..206b7bb76 100644
--- a/tests/internal/test_logger.py
+++ b/tests/internal/test_logger.py
@@ -263,7 +263,9 @@ class DDLoggerTestCase(BaseTestCase):
log = get_logger('test.logger')
# Create log record to handle
- record = self._make_record(log)
+ original_msg = 'hello %s'
+ original_args = (1, )
+ record = self._make_record(log, msg=original_msg, args=(1, ))
# Create a bucket entry for this record
key = (record.name, record.levelno, record.pathname, record.lineno)
@@ -277,7 +279,9 @@ class DDLoggerTestCase(BaseTestCase):
# We passed to base Logger.handle
base_handle.assert_called_once_with(record)
- self.assertEqual(record.msg, 'test, 20 additional messages skipped')
+ self.assertEqual(record.msg, original_msg + ', %s additional messages skipped')
+ self.assertEqual(record.args, original_args + (20, ))
+ self.assertEqual(record.getMessage(), 'hello 1, 20 additional messages skipped')
def test_logger_handle_bucket_key(self):
"""
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_media"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 0.30 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-mock",
"opentracing",
"mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | coverage==7.8.0
-e git+https://github.com/DataDog/dd-trace-py.git@63b0463482dfad78f7801ef784d709950ff3cea1#egg=ddtrace
exceptiongroup==1.2.2
iniconfig==2.1.0
mock==5.2.0
opentracing==2.4.0
packaging==24.2
pluggy==1.5.0
psutil==7.0.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-mock==3.14.0
tomli==2.2.1
| name: dd-trace-py
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.8.0
- exceptiongroup==1.2.2
- iniconfig==2.1.0
- mock==5.2.0
- opentracing==2.4.0
- packaging==24.2
- pluggy==1.5.0
- psutil==7.0.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- tomli==2.2.1
prefix: /opt/conda/envs/dd-trace-py
| [
"tests/internal/test_logger.py::DDLoggerTestCase::test_logger_handle_bucket_skipped_msg"
] | [] | [
"tests/internal/test_logger.py::DDLoggerTestCase::test_get_logger",
"tests/internal/test_logger.py::DDLoggerTestCase::test_get_logger_parents",
"tests/internal/test_logger.py::DDLoggerTestCase::test_logger_handle_bucket",
"tests/internal/test_logger.py::DDLoggerTestCase::test_logger_handle_bucket_key",
"tests/internal/test_logger.py::DDLoggerTestCase::test_logger_handle_bucket_limited",
"tests/internal/test_logger.py::DDLoggerTestCase::test_logger_handle_no_limit",
"tests/internal/test_logger.py::DDLoggerTestCase::test_logger_init",
"tests/internal/test_logger.py::DDLoggerTestCase::test_logger_log"
] | [] | Apache 2.0 or BSD3 | 5,681 | 189 | [
"ddtrace/internal/logger.py"
] |
xen0l__aws-gate-117 | 3f8731054b1e7f6eb8e8b74b3163b450be850722 | 2019-10-29 00:32:01 | 3f8731054b1e7f6eb8e8b74b3163b450be850722 | codecov[bot]: # [Codecov](https://codecov.io/gh/xen0l/aws-gate/pull/117?src=pr&el=h1) Report
> Merging [#117](https://codecov.io/gh/xen0l/aws-gate/pull/117?src=pr&el=desc) into [master](https://codecov.io/gh/xen0l/aws-gate/commit/a51722a58f658f4e6a7994fe4302a5328981262b?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/xen0l/aws-gate/pull/117?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #117 +/- ##
==========================================
+ Coverage 99.16% 99.16% +<.01%
==========================================
Files 15 15
Lines 715 717 +2
==========================================
+ Hits 709 711 +2
Misses 6 6
```
| [Impacted Files](https://codecov.io/gh/xen0l/aws-gate/pull/117?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [aws\_gate/cli.py](https://codecov.io/gh/xen0l/aws-gate/pull/117/diff?src=pr&el=tree#diff-YXdzX2dhdGUvY2xpLnB5) | `96.93% <100%> (+0.06%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/xen0l/aws-gate/pull/117?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/xen0l/aws-gate/pull/117?src=pr&el=footer). Last update [a51722a...55bc12a](https://codecov.io/gh/xen0l/aws-gate/pull/117?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/aws_gate/cli.py b/aws_gate/cli.py
index 0a8fb33..001661a 100644
--- a/aws_gate/cli.py
+++ b/aws_gate/cli.py
@@ -120,17 +120,6 @@ def parse_arguments():
def main():
- # We want to provide default values in cases they are not configured
- # in ~/.aws/config or availabe a environment variables
- default_region = get_default_region()
- if default_region is None:
- default_region = AWS_DEFAULT_REGION
-
- # We try to obtain default profile from the environment or use 'default' to
- # save call to boto3. boto3 will also return'default':
- # https://github.com/boto/boto3/blob/develop/boto3/session.py#L93
- default_profile = os.environ.get("AWS_PROFILE") or AWS_DEFAULT_PROFILE
-
args = parse_arguments()
if not DEBUG:
@@ -158,6 +147,27 @@ def main():
except (ValidationError, ScannerError) as e:
raise ValueError("Invalid configuration provided: {}".format(e))
+ # We want to provide default values in cases they are not configured
+ # in ~/.aws/config or availabe a environment variables
+ default_region = get_default_region()
+ if default_region is None:
+ default_region = AWS_DEFAULT_REGION
+
+ # We try to obtain default profile from the environment or use 'default' to
+ # save a call to boto3. In the environment, we check if we are being called
+ # from aws-vault first or not. Then we return 'default' as boto3 will
+ # https://github.com/boto/boto3/blob/develop/boto3/session.py#L93
+ if "AWS_VAULT" in os.environ:
+ logger.debug(
+ "aws-vault usage detected, defaulting to the AWS profile from $AWS_VAULT"
+ )
+
+ default_profile = (
+ os.environ.get("AWS_VAULT")
+ or os.environ.get("AWS_PROFILE")
+ or AWS_DEFAULT_PROFILE
+ )
+
profile = _get_profile(args=args, config=config, default=default_profile)
region = _get_region(args=args, config=config, default=default_region)
| "Invalid profile provided: default" when using environment variables
I'm using environment variables to configure the AWS credentials for aws-gate and don't have a default profile in .aws/config because of that. (I'm using https://github.com/99designs/aws-vault (eg `aws-vault exec profile_name -- aws-gate list`), but any tool that sets environment variables will work the same).
This leads to the error "Invalid profile provided: default" when running aws-gate (even though valid credentials are available in the environment). I can work around this by adding a dummy default profile in the configuration file, but it would be nice if that wasn't needed.
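The lookup order the fix adopts can be sketched as follows — aws-vault's `$AWS_VAULT` first, then `$AWS_PROFILE`, then the library default. The function name and the injectable `environ` parameter are illustrative, not the real aws-gate API.

```python
import os

AWS_DEFAULT_PROFILE = 'default'  # library fallback

def resolve_default_profile(environ=None):
    """Sketch of the default-profile resolution order (aws-vault first)."""
    environ = os.environ if environ is None else environ
    return (
        environ.get('AWS_VAULT')       # set by `aws-vault exec <profile> -- ...`
        or environ.get('AWS_PROFILE')
        or AWS_DEFAULT_PROFILE
    )

print(resolve_default_profile({'AWS_VAULT': 'vault_profile'}))  # vault_profile
print(resolve_default_profile({}))                              # default
```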
| xen0l/aws-gate | diff --git a/tests/unit/test_cli.py b/tests/unit/test_cli.py
index b11883c..405f284 100644
--- a/tests/unit/test_cli.py
+++ b/tests/unit/test_cli.py
@@ -1,5 +1,6 @@
import argparse
import unittest
+import os
from unittest.mock import patch, MagicMock, create_autospec, call
from marshmallow import ValidationError
@@ -144,3 +145,22 @@ class TestCli(unittest.TestCase):
self.assertTrue(parser_mock.print_help.called)
self.assertTrue(exit_mock.called)
self.assertEqual(exit_mock.call_args, call(1))
+
+ def test_cli_default_profile_from_aws_vault(self):
+ with patch.dict(os.environ, {"AWS_VAULT": "vault_profile"}), patch(
+ "aws_gate.cli.parse_arguments", return_value=MagicMock(subcommand="list")
+ ), patch("aws_gate.decorators.is_existing_region", return_value=True), patch(
+ "aws_gate.decorators.is_existing_profile", return_value=True
+ ), patch(
+ "aws_gate.cli.list_instances"
+ ), patch(
+ "aws_gate.cli.logging.getLogger"
+ ) as logger_mock, patch(
+ "aws_gate.cli._get_profile"
+ ) as m:
+ main()
+
+ self.assertTrue(m.called)
+ self.assertEqual(m.call_args[1]["default"], "vault_profile")
+
+ self.assertTrue(logger_mock.called)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 1
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": null,
"python": "3.7",
"reqs_path": [
"requirements/requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/xen0l/aws-gate.git@3f8731054b1e7f6eb8e8b74b3163b450be850722#egg=aws_gate
boto3==1.10.50
botocore==1.13.50
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
chardet==3.0.4
coverage==7.2.7
cryptography==2.8
docutils==0.15.2
exceptiongroup==1.2.2
execnet==2.0.2
idna==2.8
importlib-metadata==6.7.0
iniconfig==2.0.0
jmespath==0.10.0
marshmallow==3.2.1
packaging==24.0
pluggy==1.2.0
pycparser==2.21
pytest==7.4.4
pytest-asyncio==0.21.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-xdist==3.5.0
python-dateutil==2.9.0.post0
PyYAML==5.1.2
requests==2.22.0
s3transfer==0.2.1
six==1.17.0
tomli==2.0.1
typing_extensions==4.7.1
unix-ar==0.2.1
urllib3==1.25.11
wrapt==1.11.2
zipp==3.15.0
| name: aws-gate
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- boto3==1.10.50
- botocore==1.13.50
- cffi==1.15.1
- chardet==3.0.4
- coverage==7.2.7
- cryptography==2.8
- docutils==0.15.2
- exceptiongroup==1.2.2
- execnet==2.0.2
- idna==2.8
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- jmespath==0.10.0
- marshmallow==3.2.1
- packaging==24.0
- pluggy==1.2.0
- pycparser==2.21
- pytest==7.4.4
- pytest-asyncio==0.21.2
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-xdist==3.5.0
- python-dateutil==2.9.0.post0
- pyyaml==5.1.2
- requests==2.22.0
- s3transfer==0.2.1
- six==1.17.0
- tomli==2.0.1
- typing-extensions==4.7.1
- unix-ar==0.2.1
- urllib3==1.25.11
- wrapt==1.11.2
- zipp==3.15.0
prefix: /opt/conda/envs/aws-gate
| [
"tests/unit/test_cli.py::TestCli::test_cli_default_profile_from_aws_vault"
] | [] | [
"tests/unit/test_cli.py::TestCli::test_cli_bootstrap",
"tests/unit/test_cli.py::TestCli::test_cli_invalid_config",
"tests/unit/test_cli.py::TestCli::test_cli_list",
"tests/unit/test_cli.py::TestCli::test_cli_param_error",
"tests/unit/test_cli.py::TestCli::test_cli_parse_arguments_unknown_subcommand",
"tests/unit/test_cli.py::TestCli::test_cli_session",
"tests/unit/test_cli.py::TestCli::test_cli_ssh_config",
"tests/unit/test_cli.py::TestCli::test_cli_ssh_proxy",
"tests/unit/test_cli.py::TestCli::test_get_profile_from_args",
"tests/unit/test_cli.py::TestCli::test_get_profile_from_config",
"tests/unit/test_cli.py::TestCli::test_get_profile_from_default",
"tests/unit/test_cli.py::TestCli::test_get_region_from_args",
"tests/unit/test_cli.py::TestCli::test_get_region_from_config",
"tests/unit/test_cli.py::TestCli::test_get_region_from_default"
] | [] | Modified BSD License | 5,687 | 525 | [
"aws_gate/cli.py"
] |
elastic__elasticsearch-py-1048 | fc78c992ef6ceca157f369186760c8c550fb3bc4 | 2019-10-29 15:47:45 | f8b8431411f314efa014585d6ef4cb8a63878f09 | HonzaKral: This is great, Could I just ask you to sign the [CLA](https://www.elastic.co/contributor-agreement/) so that we can merge it?
Thank you!
alexk307: Just signed! Thanks | diff --git a/elasticsearch/connection/base.py b/elasticsearch/connection/base.py
index 8bb93a4b..a3fbd780 100644
--- a/elasticsearch/connection/base.py
+++ b/elasticsearch/connection/base.py
@@ -65,7 +65,7 @@ class Connection(object):
raise TypeError(
"Unsupported equality check for %s and %s" % (self, other)
)
- return True
+ return self.__hash__() == other.__hash__()
def __hash__(self):
return id(self)
diff --git a/elasticsearch/connection_pool.py b/elasticsearch/connection_pool.py
index 05277632..d9bdcd29 100644
--- a/elasticsearch/connection_pool.py
+++ b/elasticsearch/connection_pool.py
@@ -150,6 +150,7 @@ class ConnectionPool(object):
try:
self.connections.remove(connection)
except ValueError:
+ logger.info("Attempted to remove %r, but it does not exist in the connection pool.", connection)
# connection not alive or another thread marked it already, ignore
return
else:
| mark_dead() in ConnectionPool always removes the first connection in the pool
`ConnectionPool`'s `mark_dead()` method is supposed to remove the dead `connection` from the `connections` list:
`self.connections.remove(connection)`
However the equality comparison implementation `__eq__` in the `Connection` class is incorrect. It always returns `True` as long as both objects are `Connection` instances:
```
def __eq__(self, other):
if not isinstance(other, Connection):
raise TypeError(
"Unsupported equality check for %s and %s" % (self, other)
)
return True
```
This defect manifests itself by removing good connections one at a time until the connection list is empty. A single dead connection can bring down the whole connection pool.
Proposed solution: removing `__eq__` from `Connection`. Thoughts? | elastic/elasticsearch-py | diff --git a/test_elasticsearch/test_connection_pool.py b/test_elasticsearch/test_connection_pool.py
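The failure mode is easy to reproduce with a minimal stand-in for `Connection` (this class is a simplified sketch, not the real elasticsearch implementation): because `list.remove()` locates its victim via `__eq__`, an always-`True` equality check makes removal hit the first element regardless of which connection was passed in.

```python
class Connection:
    """Minimal stand-in for elasticsearch's Connection, reproducing the bug."""

    def __eq__(self, other):
        if not isinstance(other, Connection):
            raise TypeError("Unsupported equality check")
        return True  # bug: every Connection compares equal to every other

    def __hash__(self):
        return id(self)


pool = [Connection(), Connection(), Connection()]
first = pool[0]

# list.remove() finds its victim via __eq__, so removing a connection
# that is not even in the pool silently drops the first healthy one.
pool.remove(Connection())

print(len(pool))  # 2 -- the first (healthy) connection was removed
```

The merged fix keeps `__eq__` but makes it compare identities (`self.__hash__() == other.__hash__()`), which restores the expected `remove()` behavior.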
index 923497f1..f5a5f2d8 100644
--- a/test_elasticsearch/test_connection_pool.py
+++ b/test_elasticsearch/test_connection_pool.py
@@ -5,6 +5,7 @@ from elasticsearch.connection_pool import (
RoundRobinSelector,
DummyConnectionPool,
)
+from elasticsearch.connection import Connection
from elasticsearch.exceptions import ImproperlyConfigured
from .test_cases import TestCase
@@ -72,6 +73,17 @@ class TestConnectionPool(TestCase):
[pool.get_connection(), pool.get_connection(), pool.get_connection()],
)
+ def test_new_connection_is_not_marked_dead(self):
+ # Create 10 connections
+ pool = ConnectionPool([(Connection(), {}) for _ in range(10)])
+
+ # Pass in a new connection that is not in the pool to mark as dead
+ new_connection = Connection()
+ pool.mark_dead(new_connection)
+
+ # Nothing should be marked dead
+ self.assertEquals(0, len(pool.dead_count))
+
def test_connection_is_forcibly_resurrected_when_no_live_ones_are_availible(self):
pool = ConnectionPool([(x, {}) for x in range(2)])
pool.dead_count[0] = 1
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 2
} | 7.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[develop]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest pytest-cov pytest-xdist pytest-mock pytest-asyncio",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc git curl"
],
"python": "3.6",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs==22.2.0
Babel==2.11.0
certifi==2021.5.30
charset-normalizer==2.0.12
coverage==6.2
docutils==0.18.1
-e git+https://github.com/elastic/elasticsearch-py.git@fc78c992ef6ceca157f369186760c8c550fb3bc4#egg=elasticsearch
execnet==1.9.0
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==3.0.3
MarkupSafe==2.0.1
mock==5.2.0
nose==1.3.7
nosexcover==1.0.11
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyaml==23.5.8
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytest-asyncio==0.16.0
pytest-cov==4.0.0
pytest-mock==3.6.1
pytest-xdist==3.0.2
pytz==2025.2
PyYAML==6.0.1
requests==2.27.1
six==1.17.0
snowballstemmer==2.2.0
Sphinx==1.6.7
sphinx-rtd-theme==1.2.1
sphinxcontrib-jquery==2.0.0
sphinxcontrib-serializinghtml==1.1.5
sphinxcontrib-websupport==1.2.4
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: elasticsearch-py
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- attrs==22.2.0
- babel==2.11.0
- charset-normalizer==2.0.12
- coverage==6.2
- docutils==0.18.1
- execnet==1.9.0
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==3.0.3
- markupsafe==2.0.1
- mock==5.2.0
- nose==1.3.7
- nosexcover==1.0.11
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyaml==23.5.8
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-asyncio==0.16.0
- pytest-cov==4.0.0
- pytest-mock==3.6.1
- pytest-xdist==3.0.2
- pytz==2025.2
- pyyaml==6.0.1
- requests==2.27.1
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==1.6.7
- sphinx-rtd-theme==1.2.1
- sphinxcontrib-jquery==2.0.0
- sphinxcontrib-serializinghtml==1.1.5
- sphinxcontrib-websupport==1.2.4
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/elasticsearch-py
| [
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_new_connection_is_not_marked_dead"
] | [] | [
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_already_failed_connection_has_longer_timeout",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_connection_is_forcibly_resurrected_when_no_live_ones_are_availible",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_connection_is_resurrected_after_its_timeout",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_connection_is_skipped_when_dead",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_dead_count_is_wiped_clean_for_connection_if_marked_live",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_dead_nodes_are_removed_from_active_connections",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_default_round_robin",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_disable_shuffling",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_dummy_cp_raises_exception_on_more_connections",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_force_resurrect_always_returns_a_connection",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_raises_exception_when_no_connections_defined",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_selectors_have_access_to_connection_opts",
"test_elasticsearch/test_connection_pool.py::TestConnectionPool::test_timeout_for_failed_connections_is_limitted"
] | [] | Apache License 2.0 | 5,693 | 260 | [
"elasticsearch/connection/base.py",
"elasticsearch/connection_pool.py"
] |
dwavesystems__dwave-cloud-client-335 | ee8aae92ac10be096ed1a34cec384afcc0c6e5a8 | 2019-10-30 14:18:51 | 0835a006e6a12bea18ff470d068ab0203be1d719 | diff --git a/dwave/cloud/client.py b/dwave/cloud/client.py
index b13cb8e..4055220 100644
--- a/dwave/cloud/client.py
+++ b/dwave/cloud/client.py
@@ -114,6 +114,9 @@ class Client(object):
connection_close (bool, default=False):
Force HTTP(S) connection close after each request.
+ headers (dict/str):
+ Additional HTTP headers.
+
Other Parameters:
Unrecognized keys (str):
All unrecognized keys are passed through to the appropriate client class constructor
@@ -183,7 +186,7 @@ class Client(object):
@classmethod
def from_config(cls, config_file=None, profile=None, client=None,
endpoint=None, token=None, solver=None, proxy=None,
- legacy_config_fallback=False, **kwargs):
+ headers=None, legacy_config_fallback=False, **kwargs):
"""Client factory method to instantiate a client instance from configuration.
Configuration values can be specified in multiple ways, ranked in the following
@@ -211,8 +214,9 @@ class Client(object):
file is not explicitly specified, detected on the system, or defined via
an environment variable.
- Environment variables: ``DWAVE_CONFIG_FILE``, ``DWAVE_PROFILE``, ``DWAVE_API_CLIENT``,
- ``DWAVE_API_ENDPOINT``, ``DWAVE_API_TOKEN``, ``DWAVE_API_SOLVER``, ``DWAVE_API_PROXY``.
+ Environment variables: ``DWAVE_CONFIG_FILE``, ``DWAVE_PROFILE``,
+ ``DWAVE_API_CLIENT``, ``DWAVE_API_ENDPOINT``, ``DWAVE_API_TOKEN``,
+ ``DWAVE_API_SOLVER``, ``DWAVE_API_PROXY``, ``DWAVE_API_HEADERS``.
Environment variables are described in :mod:`dwave.cloud.config`.
@@ -268,6 +272,10 @@ class Client(object):
username/password, port, scheme, etc. If undefined, client
uses the system-level proxy, if defined, or connects directly to the API.
+ headers (dict/str, default=None):
+ Newline-separated additional HTTP headers to include with each
+ API request, or a dictionary of (key, value) pairs.
+
legacy_config_fallback (bool, default=False):
If True and loading from a standard D-Wave Cloud Client configuration
file (``dwave.conf``) fails, tries loading a legacy configuration file (``~/.dwrc``).
@@ -292,7 +300,6 @@ class Client(object):
Config file parse failed.
Examples:
-
A variety of examples are given in :mod:`dwave.cloud.config`.
This example initializes :class:`~dwave.cloud.client.Client` from an
@@ -309,7 +316,8 @@ class Client(object):
# (`./dwave.conf`, `~/.config/dwave/dwave.conf`, etc)
config = load_config(
config_file=config_file, profile=profile, client=client,
- endpoint=endpoint, token=token, solver=solver, proxy=proxy)
+ endpoint=endpoint, token=token, solver=solver, proxy=proxy,
+ headers=headers)
logger.debug("Config loaded: %r", config)
# fallback to legacy `.dwrc` if key variables missing
@@ -319,8 +327,8 @@ class Client(object):
if not config.get('token'):
config = legacy_load_config(
- profile=profile, client=client,
- endpoint=endpoint, token=token, solver=solver, proxy=proxy)
+ profile=profile, client=client, endpoint=endpoint,
+ token=token, solver=solver, proxy=proxy, headers=headers)
logger.debug("Legacy config loaded: %r", config)
# manual override of other (client-custom) arguments
@@ -335,7 +343,7 @@ class Client(object):
def __init__(self, endpoint=None, token=None, solver=None, proxy=None,
permissive_ssl=False, request_timeout=60, polling_timeout=None,
- connection_close=False, **kwargs):
+ connection_close=False, headers=None, **kwargs):
"""To setup the connection a pipeline of queues/workers is constructed.
There are five interactions with the server the connection manages:
@@ -357,16 +365,18 @@ class Client(object):
logger.debug(
"Creating a client for (endpoint=%r, token=%r, solver=%r, proxy=%r, "
- "permissive_ssl=%r, request_timeout=%r, polling_timeout=%r, **kwargs=%r)",
- endpoint, token, solver, proxy, permissive_ssl, request_timeout, polling_timeout, kwargs
+ "permissive_ssl=%r, request_timeout=%r, polling_timeout=%r, "
+ "connection_close=%r, headers=%r, **kwargs=%r)",
+ endpoint, token, solver, proxy,
+ permissive_ssl, request_timeout, polling_timeout,
+ connection_close, headers, kwargs
)
+ # parse solver
if not solver:
solver_def = {}
-
elif isinstance(solver, collections.Mapping):
solver_def = solver
-
elif isinstance(solver, six.string_types):
# support features dict encoded as JSON in our config INI file
# TODO: push this decoding to the config module, once we switch to a
@@ -379,9 +389,27 @@ class Client(object):
# features dict (equality constraint on full solver name)
logger.debug("Invalid solver JSON, assuming string name: %r", solver)
solver_def = dict(name__eq=solver)
-
else:
raise ValueError("Expecting a features dictionary or a string name for 'solver'")
+ logger.debug("Parsed solver=%r", solver_def)
+
+ # parse headers
+ if not headers:
+ headers_dict = {}
+ elif isinstance(headers, collections.Mapping):
+ headers_dict = headers
+ elif isinstance(headers, six.string_types):
+ try:
+ # valid headers = "Field-1: value-1\nField-2: value-2"
+ headers_dict = {key.strip(): val.strip()
+ for key, val in [line.split(':')
+ for line in headers.strip().split('\n')]}
+ except Exception as e:
+ logger.debug("Invalid headers: %r", headers)
+ headers_dict = {}
+ else:
+ raise ValueError("HTTP headers expected in a dict, or a string")
+ logger.debug("Parsed headers=%r", headers_dict)
# Store connection/session parameters
self.endpoint = endpoint
@@ -392,6 +420,7 @@ class Client(object):
self.polling_timeout = parse_float(polling_timeout)
self.proxy = proxy
+ self.headers = headers_dict
self.permissive_ssl = permissive_ssl
self.connection_close = connection_close
@@ -456,6 +485,8 @@ class Client(object):
session = BaseUrlSession(base_url=endpoint)
session.mount('http://', TimeoutingHTTPAdapter(timeout=self.request_timeout))
session.mount('https://', TimeoutingHTTPAdapter(timeout=self.request_timeout))
+ if self.headers:
+ session.headers.update(self.headers)
session.headers.update({'X-Auth-Token': self.token,
'User-Agent': user_agent(__packagename__, __version__)})
session.proxies = {'http': self.proxy, 'https': self.proxy}
diff --git a/dwave/cloud/config.py b/dwave/cloud/config.py
index cf26496..7bdaf70 100644
--- a/dwave/cloud/config.py
+++ b/dwave/cloud/config.py
@@ -59,6 +59,8 @@ Environment variables:
``DWAVE_API_PROXY``: URL for proxy connections to D-Wave API.
+ ``DWAVE_API_HEADERS``: Optional additional HTTP headers.
+
Examples:
The following are typical examples of using :func:`~dwave.cloud.client.Client.from_config`
to create a configured client.
@@ -195,12 +197,14 @@ Examples:
{'client': u'sw',
'endpoint': u'https://url.of.some.dwavesystem.com/sapi',
'proxy': None,
+ 'headers': None,
'solver': 'EXAMPLE_2000Q_SYSTEM',
'token': u'ABC-123456789123456789123456789'}
>>> dc.config.load_config("./dwave_c.conf", profile='dw2000b', solver='Solver3') # doctest: +SKIP
{'client': u'qpu',
'endpoint': u'https://url.of.some.dwavesystem.com/sapi',
'proxy': None,
+ 'headers': None,
'solver': 'Solver3',
'token': u'DEF-987654321987654321987654321'}
@@ -540,18 +544,18 @@ def load_profile_from_files(filenames=None, profile=None):
The following example code loads profile values from parsing both these files,
by default loading the first profile encountered or an explicitly specified profile.
- >>> import dwave.cloud as dc
- >>> dc.config.load_profile_from_files(["./dwave_a.conf", "./dwave_b.conf"]) # doctest: +SKIP
- {'client': u'sw',
- 'endpoint': u'https://url.of.some.dwavesystem.com/sapi',
- 'solver': u'EXAMPLE_2000Q_SYSTEM_A',
- 'token': u'DEF-987654321987654321987654321'}
- >>> dc.config.load_profile_from_files(["./dwave_a.conf", "./dwave_b.conf"],
- ... profile='dw2000b') # doctest: +SKIP
- {'client': u'qpu',
- 'endpoint': u'https://url.of.some.other.dwavesystem.com/sapi',
- 'solver': u'EXAMPLE_2000Q_SYSTEM_B',
- 'token': u'ABC-123456789123456789123456789'}
+ >>> from dwave.cloud import config
+ >>> config.load_profile_from_files(["./dwave_a.conf", "./dwave_b.conf"]) # doctest: +SKIP
+ {'client': 'sw',
+ 'endpoint': 'https://url.of.some.dwavesystem.com/sapi',
+ 'solver': 'EXAMPLE_2000Q_SYSTEM_A',
+ 'token': 'DEF-987654321987654321987654321'}
+ >>> config.load_profile_from_files(["./dwave_a.conf", "./dwave_b.conf"],
+ ... profile='dw2000b') # doctest: +SKIP
+ {'client': 'qpu',
+ 'endpoint': 'https://url.of.some.other.dwavesystem.com/sapi',
+ 'solver': 'EXAMPLE_2000Q_SYSTEM_B',
+ 'token': 'ABC-123456789123456789123456789'}
"""
@@ -617,7 +621,8 @@ def get_default_config():
def load_config(config_file=None, profile=None, client=None,
- endpoint=None, token=None, solver=None, proxy=None):
+ endpoint=None, token=None, solver=None,
+ proxy=None, headers=None):
"""Load D-Wave Cloud Client configuration based on a configuration file.
Configuration values can be specified in multiple ways, ranked in the following
@@ -648,13 +653,13 @@ def load_config(config_file=None, profile=None, client=None,
file is not explicitly specified, detected on the system, or defined via
an environment variable.
- Environment variables: ``DWAVE_CONFIG_FILE``, ``DWAVE_PROFILE``, ``DWAVE_API_CLIENT``,
- ``DWAVE_API_ENDPOINT``, ``DWAVE_API_TOKEN``, ``DWAVE_API_SOLVER``, ``DWAVE_API_PROXY``.
+ Environment variables: ``DWAVE_CONFIG_FILE``, ``DWAVE_PROFILE``,
+ ``DWAVE_API_CLIENT``, ``DWAVE_API_ENDPOINT``, ``DWAVE_API_TOKEN``,
+ ``DWAVE_API_SOLVER``, ``DWAVE_API_PROXY``, ``DWAVE_API_HEADERS``.
Environment variables are described in :mod:`dwave.cloud.config`.
Args:
-
config_file (str/[str]/None/False/True, default=None):
Path to configuration file(s).
@@ -689,7 +694,7 @@ def load_config(config_file=None, profile=None, client=None,
token (str, default=None):
API authorization token.
- solver (str, default=None):
+ solver (dict/str, default=None):
:term:`solver` features, as a JSON-encoded dictionary of feature constraints,
the client should use. See :meth:`~dwave.cloud.client.Client.get_solvers` for
semantics of supported feature constraints.
@@ -705,13 +710,16 @@ def load_config(config_file=None, profile=None, client=None,
username/password, port, scheme, etc. If undefined, client
uses the system-level proxy, if defined, or connects directly to the API.
+ headers (dict/str, default=None):
+ Header lines to include in API calls, each line formatted as
+ ``Key: value``, or a parsed dictionary.
+
Returns:
dict:
- Mapping of configuration keys to values for the profile
- (section), as read from the configuration file and optionally overridden by
- environment values and specified keyword arguments.
- Always contains the `client`, `endpoint`, `token`, `solver`, and `proxy`
- keys.
+ Mapping of configuration keys to values for the profile (section),
+ as read from the configuration file and optionally overridden by
+ environment values and specified keyword arguments. Always contains
+ the `client`, `endpoint`, `token`, `solver`, and `proxy` keys.
Raises:
:exc:`ValueError`:
@@ -727,16 +735,17 @@ def load_config(config_file=None, profile=None, client=None,
This example loads the configuration from an auto-detected configuration file
in the home directory of a Windows system user.
- >>> import dwave.cloud as dc
- >>> dc.config.load_config()
- {'client': u'qpu',
- 'endpoint': u'https://url.of.some.dwavesystem.com/sapi',
+ >>> from dwave.cloud import config
+ >>> config.load_config()
+ {'client': 'qpu',
+ 'endpoint': 'https://url.of.some.dwavesystem.com/sapi',
'proxy': None,
- 'solver': u'EXAMPLE_2000Q_SYSTEM_A',
- 'token': u'DEF-987654321987654321987654321'}
+ 'solver': 'EXAMPLE_2000Q_SYSTEM_A',
+ 'token': 'DEF-987654321987654321987654321',
+ 'headers': None}
>>> See which configuration file was loaded
- >>> dc.config.get_configfile_paths()
- [u'C:\\Users\\jane\\AppData\\Local\\dwavesystem\\dwave\\dwave.conf']
+ >>> config.get_configfile_paths()
+ ['C:\\Users\\jane\\AppData\\Local\\dwavesystem\\dwave\\dwave.conf']
Additional examples are given in :mod:`dwave.cloud.config`.
@@ -774,6 +783,7 @@ def load_config(config_file=None, profile=None, client=None,
section['token'] = token or os.getenv("DWAVE_API_TOKEN", section.get('token'))
section['solver'] = solver or os.getenv("DWAVE_API_SOLVER", section.get('solver'))
section['proxy'] = proxy or os.getenv("DWAVE_API_PROXY", section.get('proxy'))
+ section['headers'] = headers or os.getenv("DWAVE_API_HEADERS", section.get('headers'))
return section
| Add support for custom headers
It's useful during development and testing to be able to pass in a custom HTTP header with every API call.
Add support for `headers` config parameter (file, constructor) and `DWAVE_API_HEADERS` environment variable. | dwavesystems/dwave-cloud-client | diff --git a/tests/test_client.py b/tests/test_client.py
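The string form of `headers` accepted by the patch is newline-separated `Key: value` lines. A simplified sketch of the parsing added to `Client.__init__` (it differs from the merged code in one detail: it uses `split(':', 1)` so values containing colons survive, whereas the patch splits on every colon):

```python
def parse_headers(headers):
    """Parse newline-separated "Key: value" lines into a dict.

    Simplified sketch of the parsing added to
    dwave.cloud.client.Client.__init__; returns {} for empty input.
    """
    if not headers:
        return {}
    return {key.strip(): val.strip()
            for key, val in (line.split(':', 1)
                             for line in headers.strip().split('\n'))}

print(parse_headers("key-1:value-1\n  key-2: value-2"))
# {'key-1': 'value-1', 'key-2': 'value-2'}
```

The resulting dict is merged into the requests session via `session.headers.update(...)`, which is exactly what the `test_headers_from_config` and `test_headers_from_kwargs` tests below assert.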
index 009e167..1b5145c 100644
--- a/tests/test_client.py
+++ b/tests/test_client.py
@@ -230,6 +230,35 @@ class ClientFactory(unittest.TestCase):
with dwave.cloud.Client.from_config(solver='solver') as client:
self.assertEqual(client.default_solver, {"name__eq": "solver"})
+ def test_headers_from_config(self):
+ headers_dict = {"key-1": "value-1", "key-2": "value-2"}
+ headers_str = """ key-1:value-1
+ key-2: value-2
+ """
+ conf = dict(token='token', headers=headers_str)
+
+ with mock.patch("dwave.cloud.client.load_config", lambda **kwargs: conf):
+ with dwave.cloud.Client.from_config() as client:
+ self.assertDictEqual(client.headers, headers_dict)
+
+ def test_headers_from_kwargs(self):
+ headers_dict = {"key-1": "value-1", "key-2": "value-2"}
+ headers_str = "key-2:value-2\nkey-1:value-1"
+ conf = dict(token='token')
+
+ def load_config(**kwargs):
+ return merge(kwargs, conf, op=lambda a, b: a or b)
+
+ # headers as dict
+ with mock.patch("dwave.cloud.client.load_config", load_config):
+ with dwave.cloud.Client.from_config(headers=headers_dict) as client:
+ self.assertDictEqual(client.headers, headers_dict)
+
+ # headers as str
+ with mock.patch("dwave.cloud.client.load_config", load_config):
+ with dwave.cloud.Client.from_config(headers=headers_str) as client:
+ self.assertDictEqual(client.headers, headers_dict)
+
class FeatureBasedSolverSelection(unittest.TestCase):
"""Test Client.get_solvers(**filters)."""
diff --git a/tests/test_config.py b/tests/test_config.py
index 20e4914..2b34bea 100644
--- a/tests/test_config.py
+++ b/tests/test_config.py
@@ -47,6 +47,8 @@ class TestConfig(unittest.TestCase):
endpoint = https://url.to.alpha/api
proxy = http://user:[email protected]:8080/
token = alpha-token
+ headers = key-1:value-1
+ key-2: value-2
"""
def parse_config_string(self, text):
@@ -127,6 +129,8 @@ class TestConfig(unittest.TestCase):
self.assertEqual(config['endpoint'], "https://url.to.alpha/api")
# default values are inherited
self.assertEqual(config['client'], "qpu")
+ # multi-line values are read
+ self.assertEqual(config['headers'], "key-1:value-1\nkey-2: value-2")
def _load_config_from_files(self, asked, provided=None, data=None):
self.assertEqual(asked, provided)
@@ -210,6 +214,7 @@ class TestConfig(unittest.TestCase):
self.assertEqual(load_config(client='manual')['client'], 'manual')
self.assertEqual(load_config(solver='manual')['solver'], 'manual')
self.assertEqual(load_config(proxy='manual')['proxy'], 'manual')
+ self.assertEqual(load_config(headers='headers')['headers'], 'headers')
def test_config_load__profile_arg_nonexisting(self):
"""load_config should fail if the profile specified in kwargs or env in
@@ -332,3 +337,25 @@ class TestConfig(unittest.TestCase):
m.assert_has_calls([mock.call('file1', 'r'), mock.call('file2', 'r')])
self.assertEqual(section['endpoint'], 'alpha')
self.assertEqual(section['solver'], 'DW_2000Q_2')
+
+ def test_config_load_env_override(self):
+ with mock.patch("dwave.cloud.config.load_config_from_files",
+ partial(self._load_config_from_files, data=u"", provided=['myfile'])):
+
+ with mock.patch.dict(os.environ, {'DWAVE_API_CLIENT': 'test'}):
+ self.assertEqual(load_config(config_file='myfile')['client'], 'test')
+
+ with mock.patch.dict(os.environ, {'DWAVE_API_ENDPOINT': 'test'}):
+ self.assertEqual(load_config(config_file='myfile')['endpoint'], 'test')
+
+ with mock.patch.dict(os.environ, {'DWAVE_API_TOKEN': 'test'}):
+ self.assertEqual(load_config(config_file='myfile')['token'], 'test')
+
+ with mock.patch.dict(os.environ, {'DWAVE_API_SOLVER': 'test'}):
+ self.assertEqual(load_config(config_file='myfile')['solver'], 'test')
+
+ with mock.patch.dict(os.environ, {'DWAVE_API_PROXY': 'test'}):
+ self.assertEqual(load_config(config_file='myfile')['proxy'], 'test')
+
+ with mock.patch.dict(os.environ, {'DWAVE_API_HEADERS': 'test'}):
+ self.assertEqual(load_config(config_file='myfile')['headers'], 'test')
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 2
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
click==8.0.4
coverage==6.2
dimod==0.10.10
-e git+https://github.com/dwavesystems/dwave-cloud-client.git@ee8aae92ac10be096ed1a34cec384afcc0c6e5a8#egg=dwave_cloud_client
homebase==1.0.1
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
mock==5.2.0
numpy==1.19.5
packaging==21.3
plucky==0.4.3
pluggy==1.0.0
py==1.11.0
pyparsing==2.4.7
PySocks==1.7.1
pytest==7.0.1
pytest-cov==4.0.0
python-dateutil==2.9.0.post0
requests==2.27.1
requests-mock==1.12.1
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: dwave-cloud-client
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- click==8.0.4
- coverage==6.2
- dimod==0.10.10
- homebase==1.0.1
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- mock==5.2.0
- numpy==1.19.5
- packaging==21.3
- plucky==0.4.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==2.4.7
- pysocks==1.7.1
- pytest==7.0.1
- pytest-cov==4.0.0
- python-dateutil==2.9.0.post0
- requests==2.27.1
- requests-mock==1.12.1
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/dwave-cloud-client
| [
"tests/test_client.py::ClientFactory::test_headers_from_config",
"tests/test_client.py::ClientFactory::test_headers_from_kwargs",
"tests/test_config.py::TestConfig::test_config_load_configfile_env_profile_env_key_arg",
"tests/test_config.py::TestConfig::test_config_load_env_override"
] | [] | [
"tests/test_client.py::ClientFactory::test_client_type",
"tests/test_client.py::ClientFactory::test_custom_kwargs",
"tests/test_client.py::ClientFactory::test_custom_kwargs_overrides_config",
"tests/test_client.py::ClientFactory::test_default",
"tests/test_client.py::ClientFactory::test_legacy_config_load_fallback",
"tests/test_client.py::ClientFactory::test_solver_features_from_config",
"tests/test_client.py::ClientFactory::test_solver_features_kwargs_override_config",
"tests/test_client.py::ClientFactory::test_solver_name_from_config",
"tests/test_client.py::ClientFactory::test_solver_name_overrides_config_features",
"tests/test_client.py::FeatureBasedSolverSelection::test_anneal_schedule",
"tests/test_client.py::FeatureBasedSolverSelection::test_availability_combo",
"tests/test_client.py::FeatureBasedSolverSelection::test_default",
"tests/test_client.py::FeatureBasedSolverSelection::test_lower_noise_derived_property",
"tests/test_client.py::FeatureBasedSolverSelection::test_membership_ops",
"tests/test_client.py::FeatureBasedSolverSelection::test_name",
"tests/test_client.py::FeatureBasedSolverSelection::test_nested_properties_intermediate_key_lookup",
"tests/test_client.py::FeatureBasedSolverSelection::test_nested_properties_leaf_lookup",
"tests/test_client.py::FeatureBasedSolverSelection::test_num_qubits",
"tests/test_client.py::FeatureBasedSolverSelection::test_online",
"tests/test_client.py::FeatureBasedSolverSelection::test_order_by_callable",
"tests/test_client.py::FeatureBasedSolverSelection::test_order_by_edgecases",
"tests/test_client.py::FeatureBasedSolverSelection::test_order_by_string",
"tests/test_client.py::FeatureBasedSolverSelection::test_parameter_availability_check",
"tests/test_client.py::FeatureBasedSolverSelection::test_property_availability_check",
"tests/test_client.py::FeatureBasedSolverSelection::test_qpu_software",
"tests/test_client.py::FeatureBasedSolverSelection::test_range_boolean_combo",
"tests/test_client.py::FeatureBasedSolverSelection::test_range_ops",
"tests/test_client.py::FeatureBasedSolverSelection::test_regex",
"tests/test_client.py::FeatureBasedSolverSelection::test_relational_ops",
"tests/test_client.py::FeatureBasedSolverSelection::test_set_ops",
"tests/test_client.py::FeatureBasedSolverSelection::test_solvers_deprecation",
"tests/test_config.py::TestConfig::test_config_file_detection_cwd",
"tests/test_config.py::TestConfig::test_config_file_detection_nonexisting",
"tests/test_config.py::TestConfig::test_config_file_detection_system",
"tests/test_config.py::TestConfig::test_config_file_detection_user",
"tests/test_config.py::TestConfig::test_config_load__profile_arg_nonexisting",
"tests/test_config.py::TestConfig::test_config_load__profile_first_section",
"tests/test_config.py::TestConfig::test_config_load__profile_from_defaults",
"tests/test_config.py::TestConfig::test_config_load_configfile_arg",
"tests/test_config.py::TestConfig::test_config_load_configfile_arg_profile_default",
"tests/test_config.py::TestConfig::test_config_load_configfile_arg_profile_default_nonexisting",
"tests/test_config.py::TestConfig::test_config_load_configfile_detect",
"tests/test_config.py::TestConfig::test_config_load_configfile_detect_profile_env",
"tests/test_config.py::TestConfig::test_config_load_configfile_env",
"tests/test_config.py::TestConfig::test_config_load_configfile_env_profile_env",
"tests/test_config.py::TestConfig::test_config_load_force_autodetection",
"tests/test_config.py::TestConfig::test_config_load_from_file",
"tests/test_config.py::TestConfig::test_config_load_from_file__invalid_format__duplicate_sections",
"tests/test_config.py::TestConfig::test_config_load_multiple_autodetected_configfiles",
"tests/test_config.py::TestConfig::test_config_load_multiple_explicit_configfiles",
"tests/test_config.py::TestConfig::test_config_load_skip_configfiles",
"tests/test_config.py::TestConfig::test_invalid_filename_given",
"tests/test_config.py::TestConfig::test_no_config_detected"
] | [] | Apache License 2.0 | 5,700 | 3,695 | [
"dwave/cloud/client.py",
"dwave/cloud/config.py"
] |
|
mvondracek__PA193_mnemonic_Slytherin-64 | a23ee44909ca5f930478e0d7ead02b4bae6a0715 | 2019-10-31 05:20:54 | a23ee44909ca5f930478e0d7ead02b4bae6a0715 | diff --git a/pa193mnemonicslytherin/mnemoniccli.py b/pa193mnemonicslytherin/mnemoniccli.py
index 4d27521..5774fa6 100644
--- a/pa193mnemonicslytherin/mnemoniccli.py
+++ b/pa193mnemonicslytherin/mnemoniccli.py
@@ -19,6 +19,7 @@ from binascii import unhexlify, hexlify, Error
from enum import Enum, unique
from pprint import saferepr
from typing import Sequence
+from unicodedata import normalize
from pa193mnemonicslytherin import Entropy, Mnemonic, Seed
from pa193mnemonicslytherin import generate, recover, verify
@@ -79,6 +80,10 @@ class Config(object):
def write_mode(self):
return 'wb' if self is Config.Format.BINARY else 'w'
+ @property
+ def encoding(self):
+ return 'utf-8' if self is Config.Format.TEXT_HEXADECIMAL else None
+
PROGRAM_NAME = 'mnemoniccli'
PROGRAM_DESCRIPTION = 'BIP39 Mnemonic Phrase Generator and Verifier'
LOGGING_LEVELS_DICT = {'debug': logging.DEBUG,
@@ -122,6 +127,12 @@ class Config(object):
if len(password) > MAX_SEED_PASSWORD_LENGTH:
raise argparse.ArgumentTypeError("password is longer than {} characters".format(
MAX_SEED_PASSWORD_LENGTH))
+ try:
+ # to raise UnicodeError for invalid UTF-8
+ password.encode('utf-8')
+ password = normalize('NFKD', password)
+ except UnicodeError as e:
+ raise argparse.ArgumentTypeError("password is not valid UTF-8: {}".format(e)) from e
return password
parser = argparse.ArgumentParser(
prog=cls.PROGRAM_NAME,
@@ -158,7 +169,7 @@ class Config(object):
help='select input and output format (default: %(default)s)'
)
parser.add_argument('-p', '--password',
- help='password for protection of seed',
+ help='password for protection of seed (UTF-8)',
default='',
type=valid_password
)
@@ -238,9 +249,8 @@ def cli_entry_point(argv=sys.argv):
def action_generate(config: Config) -> ExitCode:
- # TODO Check file size before reading?
try:
- with open(config.entropy_filepath, config.format.read_mode) as file:
+ with open(config.entropy_filepath, config.format.read_mode, encoding=config.format.encoding) as file:
entropy = file.read() # type: typing.Union[bytes, str]
except FileNotFoundError as e:
logger.critical(str(e))
@@ -263,16 +273,11 @@ def action_generate(config: Config) -> ExitCode:
logger.critical(str(e))
print(str(e), file=sys.stderr)
return ExitCode.EX_DATAERR
- try:
- mnemonic, seed = generate(entropy, config.password)
- except UnicodeError as e:
- logger.critical(str(e))
- print(str(e), file=sys.stderr)
- return ExitCode.EX_DATAERR
- with open(config.mnemonic_filepath, 'w') as file:
+ mnemonic, seed = generate(entropy, config.password)
+ with open(config.mnemonic_filepath, 'w', encoding='utf-8') as file:
file.write(mnemonic)
logger.info('Mnemonic written to {}.'.format(config.mnemonic_filepath))
- with open(config.seed_filepath, config.format.write_mode) as file:
+ with open(config.seed_filepath, config.format.write_mode, encoding=config.format.encoding) as file:
if config.format is Config.Format.TEXT_HEXADECIMAL:
seed = str(hexlify(seed), 'ascii')
file.write(seed)
@@ -282,7 +287,6 @@ def action_generate(config: Config) -> ExitCode:
def action_recover(config: Config) -> ExitCode:
- # TODO Check file size before reading?
try:
with open(config.mnemonic_filepath, 'r', encoding='utf-8') as file:
mnemonic = file.read() # type: str
@@ -300,18 +304,13 @@ def action_recover(config: Config) -> ExitCode:
logger.critical(str(e))
print(str(e), file=sys.stderr)
return ExitCode.EX_DATAERR
- try:
- entropy, seed = recover(mnemonic, config.password)
- except UnicodeError as e:
- logger.critical(str(e))
- print(str(e), file=sys.stderr)
- return ExitCode.EX_DATAERR
- with open(config.entropy_filepath, config.format.write_mode) as file:
+ entropy, seed = recover(mnemonic, config.password)
+ with open(config.entropy_filepath, config.format.write_mode, encoding=config.format.encoding) as file:
if config.format is Config.Format.TEXT_HEXADECIMAL:
entropy = str(hexlify(entropy), 'ascii')
file.write(entropy)
logger.info('Entropy written to {}.'.format(config.entropy_filepath))
- with open(config.seed_filepath, config.format.write_mode) as file:
+ with open(config.seed_filepath, config.format.write_mode, encoding=config.format.encoding) as file:
if config.format is Config.Format.TEXT_HEXADECIMAL:
seed = str(hexlify(seed), 'ascii')
file.write(seed)
@@ -321,7 +320,6 @@ def action_recover(config: Config) -> ExitCode:
def action_verify(config: Config) -> ExitCode:
- # TODO Check file size before reading?
try:
with open(config.mnemonic_filepath, 'r', encoding='utf-8') as file:
mnemonic = file.read() # type: str
@@ -339,9 +337,8 @@ def action_verify(config: Config) -> ExitCode:
logger.critical(str(e))
print(str(e), file=sys.stderr)
return ExitCode.EX_DATAERR
- # TODO Check file size before reading?
try:
- with open(config.seed_filepath, config.format.read_mode) as file:
+ with open(config.seed_filepath, config.format.read_mode, encoding=config.format.encoding) as file:
seed = file.read() # type: typing.Union[bytes, str]
except FileNotFoundError as e:
logger.critical(str(e))
@@ -364,12 +361,7 @@ def action_verify(config: Config) -> ExitCode:
logger.critical(str(e))
print(str(e), file=sys.stderr)
return ExitCode.EX_DATAERR
- try:
- match = verify(mnemonic, seed, config.password)
- except UnicodeError as e:
- logger.critical(str(e))
- print(str(e), file=sys.stderr)
- return ExitCode.EX_DATAERR
+ match = verify(mnemonic, seed, config.password)
if not match:
msg = 'Seeds do not match.'
logger.info(msg)
@@ -382,7 +374,6 @@ def action_verify(config: Config) -> ExitCode:
def main(argv) -> ExitCode:
- # TODO check for errors related to file IO
logging.captureWarnings(True)
warnings.simplefilter('always', ResourceWarning)
@@ -397,7 +388,7 @@ def main(argv) -> ExitCode:
logging.disable(logging.CRITICAL)
logger.debug('Config parsed from args.')
- # region # TODO What types of errors/exceptions can happen here?
+ # region #
exitcode = ExitCode.EX_OK
if config.generate:
exitcode = action_generate(config)
| Report invalid UTF-8 sequence in input mnemonic as `EX_DATAERR = 65`
~~~
'utf-8' codec can't decode byte 0xca in position 151: invalid continuation byte
'utf-8' codec can't decode byte 0xa5 in position 85: invalid start byte
'utf-8' codec can't decode byte 0x96 in position 89: invalid start byte
'utf-8' codec can't decode byte 0xc2 in position 39: invalid continuation byte
'utf-8' codec can't decode byte 0xdb in position 111: invalid continuation byte
'utf-8' codec can't decode byte 0xc6 in position 352: invalid continuation byte
'utf-8' codec can't decode byte 0xf3 in position 56: invalid continuation byte
'utf-8' codec can't decode byte 0xc0 in position 58: invalid start byte
'utf-8' codec can't decode byte 0xd0 in position 56: invalid continuation byte
'utf-8' codec can't decode byte 0x84 in position 123: invalid start byte
'utf-8' codec can't decode byte 0xb5 in position 140: invalid start byte
'utf-8' codec can't decode byte 0xfe in position 138: invalid start byte
'utf-8' codec can't decode byte 0x88 in position 237: invalid start byte
'utf-8' codec can't decode byte 0xc0 in position 175: invalid start byte
'utf-8' codec can't decode byte 0xfb in position 132: invalid start byte
'utf-8' codec can't decode byte 0xed in position 71: invalid continuation byte
'utf-8' codec can't decode byte 0xf7 in position 53: invalid start byte
'utf-8' codec can't decode byte 0xfe in position 197: invalid start byte
'utf-8' codec can't decode bytes in position 2-4: invalid continuation byte
'utf-8' codec can't decode byte 0xbe in position 66: invalid start byte
'utf-8' codec can't decode byte 0xe4 in position 7: invalid continuation byte
~~~
Is currently reported as `UNKNOWN_FAILURE = 1`. Would be better to report it as `EX_DATAERR = 65` from `mnemoniccli.ExitCode.EX_DATAERR`.
Already mentioned in https://github.com/mvondracek/PA193_mnemonic_Slytherin/issues/12#issuecomment-543391802.
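The requested behavior can be sketched roughly like this (a minimal illustration, not the project's actual implementation — `read_mnemonic` and its signature are made up for this sketch; the values `UNKNOWN_FAILURE = 1` and `EX_DATAERR = 65` are the ones named above, the latter matching `EX_DATAERR` from BSD `sysexits.h`): catch `UnicodeError` where the input file is decoded and map it to `EX_DATAERR` instead of letting it escape as an unhandled exception:

```python
import sys
from enum import IntEnum
from typing import Optional, Tuple


class ExitCode(IntEnum):
    EX_OK = 0
    UNKNOWN_FAILURE = 1  # what an uncaught exception currently yields
    EX_DATAERR = 65      # "input data was incorrect", per sysexits.h


def read_mnemonic(path: str) -> Tuple[Optional[str], ExitCode]:
    """Hypothetical helper: read a mnemonic file as UTF-8 text.

    Returns (mnemonic, EX_OK) on success, or (None, EX_DATAERR) when
    the file contains an invalid UTF-8 byte sequence.
    """
    try:
        with open(path, 'r', encoding='utf-8') as f:
            return f.read(), ExitCode.EX_OK
    except UnicodeError as e:
        # UnicodeDecodeError (a UnicodeError subclass) is raised during
        # read() for bytes such as 0xff that are not valid UTF-8.
        print(str(e), file=sys.stderr)
        return None, ExitCode.EX_DATAERR
```

The same pattern would apply to the seed and entropy files read in text mode, and to a bundled dictionary file that fails UTF-8 decoding.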
Errors with UTF-8 apply to dictionary, too. (https://github.com/mvondracek/PA193_mnemonic_Slytherin/pull/21#discussion_r336257567) | mvondracek/PA193_mnemonic_Slytherin | diff --git a/pa193mnemonicslytherin/test_mnemonic.py b/pa193mnemonicslytherin/test_mnemonic.py
index b48ff70..bfbe8e8 100644
--- a/pa193mnemonicslytherin/test_mnemonic.py
+++ b/pa193mnemonicslytherin/test_mnemonic.py
@@ -182,6 +182,8 @@ VALID_SEED_TREZOR = Seed(unhexlify(TREZOR_TEST_VECTORS['english'][0][2]))
VALID_SEED_HEX_TREZOR = TREZOR_TEST_VECTORS['english'][0][2]
VALID_PASSWORD_TREZOR = TREZOR_PASSWORD
+INVALID_MNEMONIC_PHRASE_INVALID_UTF8 = "mn3\n\udcd6 " * 12
+INVALID_PASSWORD_INVALID_UTF8 = "icpa\u202e\U000e0ec1\udcaassword1"
def extract_checksum(mnemonic_phrase: str, dictionary_name: str = ENGLISH_DICTIONARY_NAME) -> int:
"""Extract checksum based on words from the mnemonic phrase and given dictionary.
@@ -300,11 +302,9 @@ class TestInternalTestHelpers(TestCase):
r'current dictionary'):
extract_checksum(test_input)
- def test_extract_checksum_non_utf8_mnemonic(self):
- words = ["mn3\n\udcd6"] * 12
- mnemonic_non_utf8 = ' '.join(words)
+ def test_extract_checksum_invalid_mnemonic_invalid_utf8(self):
with self.assertRaises(UnicodeError):
- extract_checksum(mnemonic_non_utf8)
+ extract_checksum(INVALID_MNEMONIC_PHRASE_INVALID_UTF8)
def test_extract_checksum_invalid_dictionary_name(self):
for test_input in Test_DictionaryAccess.INVALID_DICTIONARY_NAMES:
@@ -341,7 +341,6 @@ class TestPublicFunctions(TestCase):
"""Tests for public functions of mnemonic module.
Tests for `generate`, `recover`, and `verify` are similar, but we want to
test each function separately.
- TODO add testing for incorrect inputs
"""
TESTING_TYPES = [
None,
@@ -373,10 +372,9 @@ class TestPublicFunctions(TestCase):
with self.assertRaises(ValueError):
generate(VALID_ENTROPY_TREZOR, password)
- def test_generate_invalid_encoding(self):
- password_non_utf8 = "icpa\u202e\U000e0ec1\udcaassword1"
+ def test_generate_invalid_password_invalid_utf8(self):
with self.assertRaises(UnicodeError):
- generate(VALID_ENTROPY_TREZOR, password_non_utf8)
+ generate(VALID_ENTROPY_TREZOR, INVALID_PASSWORD_INVALID_UTF8)
def test_recover(self):
for test_vector in TREZOR_TEST_VECTORS['english']:
@@ -399,10 +397,9 @@ class TestPublicFunctions(TestCase):
with self.assertRaises(ValueError):
recover(VALID_MNEMONIC_TREZOR, password)
- def test_recover_invalid_encoding(self):
- password_non_utf8 = "icpa\u202e\U000e0ec1\udcaassword1"
+ def test_recover_invalid_password_invalid_utf8(self):
with self.assertRaises(UnicodeError):
- recover(VALID_MNEMONIC_TREZOR, password_non_utf8)
+ recover(VALID_MNEMONIC_TREZOR, INVALID_PASSWORD_INVALID_UTF8)
def test_verify(self):
for test_vector in TREZOR_TEST_VECTORS['english']:
@@ -427,13 +424,11 @@ class TestPublicFunctions(TestCase):
with self.assertRaises(ValueError):
verify(VALID_MNEMONIC_TREZOR, VALID_SEED_TREZOR, password)
- def test_verify_invalid_encoding(self):
- non_utf8 = "icpa\u202e\U000e0ec1\udcaassword1"
+ def test_verify_invalid_password_invalid_utf8(self):
with self.assertRaises(UnicodeError):
- verify(VALID_MNEMONIC_TREZOR, VALID_SEED_TREZOR, non_utf8)
+ verify(VALID_MNEMONIC_TREZOR, VALID_SEED_TREZOR, INVALID_PASSWORD_INVALID_UTF8)
-# TODO add more tests (different from Trezor vector)
class TestMnemonic(TestCase):
"""Tests Mnemonic"""
@@ -492,12 +487,10 @@ class TestMnemonic(TestCase):
with self.assertRaises(ValueError):
Mnemonic('a' * 1024 * 1024) # 1 MB
- def test___init___invalid_encoding(self):
+ def test___init___invalid_utf8(self):
"""Input string isn't UTF-8 encoded."""
- words = ["mn3\n\udcd6"] * 12
- mnemonic_non_utf8 = ' '.join(words)
with self.assertRaises(UnicodeError):
- Mnemonic(mnemonic_non_utf8)
+ Mnemonic(INVALID_MNEMONIC_PHRASE_INVALID_UTF8)
def test_to_seed(self):
for test_vector in TREZOR_TEST_VECTORS['english']:
@@ -518,10 +511,9 @@ class TestMnemonic(TestCase):
with self.assertRaises(ValueError):
VALID_MNEMONIC_TREZOR.to_seed(password)
- def test_to_seed_password_invalid_encoding(self):
- password_non_utf8 = "icpa\u202e\U000e0ec1\udcaassword1"
+ def test_to_seed_invalid_password_invalid_utf8(self):
with self.assertRaises(UnicodeError):
- VALID_MNEMONIC_TREZOR.to_seed(password_non_utf8)
+ VALID_MNEMONIC_TREZOR.to_seed(INVALID_PASSWORD_INVALID_UTF8)
def test_to_entropy(self):
for test_vector in TREZOR_TEST_VECTORS['english']:
@@ -696,4 +688,4 @@ class Test_DictionaryAccess(TestCase):
b'\xff\xff\xff\n' * self.VALID_DICTIONARY_LINE_COUNT)
with patch.object(pkg_resources, 'resource_stream', return_value=dictionary_mock):
with self.assertRaises(UnicodeError):
- _DictionaryAccess()
\ No newline at end of file
+ _DictionaryAccess()
diff --git a/pa193mnemonicslytherin/test_mnemoniccli.py b/pa193mnemonicslytherin/test_mnemoniccli.py
index 1dd8432..68fa81b 100644
--- a/pa193mnemonicslytherin/test_mnemoniccli.py
+++ b/pa193mnemonicslytherin/test_mnemoniccli.py
@@ -21,7 +21,7 @@ from typing import Optional, List, Tuple, Union
from pa193mnemonicslytherin.mnemonic import MAX_SEED_PASSWORD_LENGTH, SEED_LEN
from pa193mnemonicslytherin.mnemoniccli import ExitCode, cli_entry_point, Config
from pa193mnemonicslytherin.test_mnemonic import TREZOR_TEST_VECTORS, TREZOR_PASSWORD, \
- VALID_SEED_HEX_TREZOR, VALID_MNEMONIC_PHRASE_TREZOR
+ VALID_SEED_HEX_TREZOR, VALID_MNEMONIC_PHRASE_TREZOR, INVALID_PASSWORD_INVALID_UTF8
def get_invalid_entropies() -> List[Tuple[Union[str, bytes], Config.Format, Optional[str]]]:
@@ -45,16 +45,17 @@ def get_invalid_entropies() -> List[Tuple[Union[str, bytes], Config.Format, Opti
invalid_entropies.append((str(hexlify(entropy_byte * entropy_bytes_length), 'ascii'),
Config.Format.TEXT_HEXADECIMAL, None))
# endregion
- # TODO invalid characters in hex string
+
# Cases: non-Unicode, non-ascii, non-hex, odd-length
# odd-length should be covered by the previous region?
# for non-UTF-8 be a separate test function
entropy_non_ascii = "1122334455667ЫЫ899ЩЩBBCCDDEEFF00"
entropy_non_hex = "Z" * 16
- invalid_entropies.extend([(entropy_non_ascii,
- Config.Format.TEXT_HEXADECIMAL, None),
- (entropy_non_hex,
- Config.Format.TEXT_HEXADECIMAL, None)])
+ entropy_odd_length = "aaF"
+ invalid_entropies.extend([(entropy_non_ascii, Config.Format.TEXT_HEXADECIMAL, None),
+ (entropy_non_hex, Config.Format.TEXT_HEXADECIMAL, None),
+ (entropy_odd_length, Config.Format.TEXT_HEXADECIMAL, None),
+ ])
return invalid_entropies
@@ -64,18 +65,15 @@ def get_invalid_mnemonics() -> List[Tuple[str, Optional[str]]]:
and second is optional error message to be checked on
program's stderr.
"""
- invalid_mnemonics = [('this is invalid mnemonic', None)] # TODO gather invalid mnemonics from tests
- # TODO invalid UTF-8 sequences
+ invalid_mnemonics = [('this is invalid mnemonic', None)]
return invalid_mnemonics
-def get_non_unicode_mnemonics() -> List[Tuple[bytes, Optional[str]]]:
+def get_invalid_mnemonics_invalid_utf8() -> List[Tuple[bytes, Optional[str]]]:
"""
:return: non-Unicode mnemonics as byte string, which should be written to file in a binary mode
"""
- words = [b"\xff\xff\xff\xff"] * 12
- mnemonic = b'\x20'.join(words)
- non_unicode_mnemonics = [(mnemonic, None)]
+ non_unicode_mnemonics = [(b"\xff\xff\xff\xff " * 12, None)]
return non_unicode_mnemonics
@@ -94,29 +92,17 @@ def get_invalid_seeds() -> List[Tuple[Union[str, bytes], Config.Format, Optional
invalid_seeds.append((str(hexlify(seed_byte * seed_bytes_length), 'ascii'),
Config.Format.TEXT_HEXADECIMAL, None))
# endregion
- # TODO invalid characters in hex string
- # Cases: non-Unicode, non-ASCII, non-hex
- # non-Unicode test be in a separate function
+ # Cases: non-ASCII, non-hex
seed_non_ascii = "Ы" * SEED_LEN
seed_non_hex = "Z" * SEED_LEN
+ seed_odd_length = "aaF"
invalid_seeds.extend([(seed_non_ascii, Config.Format.TEXT_HEXADECIMAL, None),
- (seed_non_hex, Config.Format.TEXT_HEXADECIMAL, None)])
+ (seed_non_hex, Config.Format.TEXT_HEXADECIMAL, None),
+ (seed_odd_length, Config.Format.TEXT_HEXADECIMAL, None),
+ ])
return invalid_seeds
-def get_invalid_passwords() -> List[Tuple[str, Optional[str]]]:
- """
- :return: List of invalid examples as tuples, where first is invalid input,
- and second is optional error message to be checked on
- program's stderr.
- """
- psw_non_utf8 = "icpa\u202e\U000e0ec1\udcaassword1"
- invalid_passwords = [(psw_non_utf8, None)]
- # Cases: invalid UTF-8 sequences, too long passwords
- # TODO: long passwords raise argparse.ArgumentTypeError, need separate test
- return invalid_passwords
-
-
class TestMain(unittest.TestCase):
"""Integration tests for CLI tool."""
TIMEOUT = 10 # seconds until we terminate the program
@@ -137,14 +123,14 @@ class TestMain(unittest.TestCase):
:param stderr_check: String with which `stderr` should be compared, `None` if `stderr` should not be empty.
"""
cli = self.execute_cli(args)
- if stdout_check is not None:
- self.assertEqual(stdout_check, cli.stdout)
- else:
- self.assertNotEqual('', cli.stdout)
if stderr_check is not None:
self.assertEqual(stderr_check, cli.stderr)
else:
self.assertNotEqual('', cli.stderr)
+ if stdout_check is not None:
+ self.assertEqual(stdout_check, cli.stdout)
+ else:
+ self.assertNotEqual('', cli.stdout)
self.assertEqual(exitcode.value, cli.returncode)
def assert_program_entry_point(self, args: List[str], exitcode: ExitCode,
@@ -202,6 +188,7 @@ class TestMain(unittest.TestCase):
self.assert_program_error([self.SCRIPT, '-s'], ExitCode.ARGUMENTS)
self.assert_program_error([self.SCRIPT, '-p'], ExitCode.ARGUMENTS)
self.assert_program_error([self.SCRIPT, '-p', 'a' * (MAX_SEED_PASSWORD_LENGTH + 1)], ExitCode.ARGUMENTS)
+ self.assert_program_error([self.SCRIPT, '-p', INVALID_PASSWORD_INVALID_UTF8], ExitCode.ARGUMENTS)
self.assert_program_error([self.SCRIPT, '-v', '-m'], ExitCode.ARGUMENTS)
self.assert_program_error([self.SCRIPT, '-v', '-s'], ExitCode.ARGUMENTS)
self.assert_program_error([self.SCRIPT, '-g', '-r', '-v'], ExitCode.ARGUMENTS)
@@ -253,7 +240,7 @@ class TestMain(unittest.TestCase):
[self.SCRIPT, '-v', '-m', non_existing_filepath, '-s', non_existing_filepath],
ExitCode.EX_NOINPUT)
- with open(os.path.join(tmpdir, '__this_file_exists__.txt'), 'w') as f:
+ with open(os.path.join(tmpdir, '__this_file_exists__.txt'), 'w', encoding='utf-8') as f:
f.write('foo bar')
self.assert_program_error([self.SCRIPT, '-v', '-m', non_existing_filepath, '-s', f.name],
ExitCode.EX_NOINPUT)
@@ -266,7 +253,7 @@ class TestMain(unittest.TestCase):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
entropy_path = os.path.join(tmpdir, '__entropy__')
- with open(entropy_path, io_format.write_mode) as entropy_file:
+ with open(entropy_path, io_format.write_mode, encoding=io_format.encoding) as entropy_file:
entropy_file.write(
unhexlify(test_vector[0]) if io_format is Config.Format.BINARY else test_vector[0])
self.assert_program([self.SCRIPT, '-g', '--format', io_format.value, '-p', TREZOR_PASSWORD,
@@ -276,12 +263,12 @@ class TestMain(unittest.TestCase):
stdout_check='[DONE] Generate, mnemonic in {}, seed in {}.\n'.format(
mnemonic_path, seed_path),
stderr_check='')
- with open(seed_path, io_format.read_mode) as seed_file:
+ with open(seed_path, io_format.read_mode, encoding=io_format.encoding) as seed_file:
content = seed_file.read()
if io_format is Config.Format.BINARY:
seed = str(hexlify(content), 'ascii')
self.assertEqual(test_vector[2], seed)
- with open(mnemonic_path, 'r') as mnemonic_file:
+ with open(mnemonic_path, 'r', encoding='utf-8') as mnemonic_file:
self.assertEqual(test_vector[1], mnemonic_file.read())
def test_generate_invalid_entropy(self):
@@ -291,7 +278,7 @@ class TestMain(unittest.TestCase):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
entropy_path = os.path.join(tmpdir, '__entropy__')
- with open(entropy_path, io_format.write_mode) as entropy_file:
+ with open(entropy_path, io_format.write_mode, encoding=io_format.encoding) as entropy_file:
entropy_file.write(entropy)
self.assert_program([self.SCRIPT, '-g', '--format', io_format.value,
'-e', entropy_path,
@@ -300,11 +287,8 @@ class TestMain(unittest.TestCase):
stdout_check='',
stderr_check=stderr)
- @unittest.skip("cannot generate non-UTF-8 entropy file")
- def test_generate_non_unicode_entropy(self):
+ def test_generate_invalid_entropy_invalid_utf8(self):
entropy = b'\x86' * 16
- # currently not working: exception occurs after hexlify
- # it should occur just after reading attempt
with TemporaryDirectory() as tmpdir:
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
@@ -320,26 +304,6 @@ class TestMain(unittest.TestCase):
stdout_check='',
stderr_check=None)
- def test_generate_invalid_password(self):
- entropy = TREZOR_TEST_VECTORS['english'][0][0]
- with TemporaryDirectory() as tmpdir:
- for password, stderr in get_invalid_passwords():
- with self.subTest(password=password):
- seed_path = os.path.join(tmpdir, '__seed__')
- mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- entropy_path = os.path.join(tmpdir, '__entropy__')
- with open(entropy_path, 'w') as entropy_file:
- entropy_file.write(entropy)
- self.assert_program([self.SCRIPT, '-g',
- '-e', entropy_path,
- '-m', mnemonic_path,
- '-s', seed_path,
- '--format', Config.Format.TEXT_HEXADECIMAL.value,
- '-p', password],
- ExitCode.EX_DATAERR,
- stdout_check='',
- stderr_check=stderr)
-
def test_recover(self):
with TemporaryDirectory() as tmpdir:
for test_vector in TREZOR_TEST_VECTORS['english']:
@@ -348,7 +312,7 @@ class TestMain(unittest.TestCase):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
entropy_path = os.path.join(tmpdir, '__entropy__')
- with open(mnemonic_path, 'w') as mnemonic_file:
+ with open(mnemonic_path, 'w', encoding='utf-8') as mnemonic_file:
mnemonic_file.write(test_vector[1])
self.assert_program([self.SCRIPT, '-r', '--format', io_format.value, '-p', TREZOR_PASSWORD,
'-e', entropy_path,
@@ -357,12 +321,12 @@ class TestMain(unittest.TestCase):
stdout_check='[DONE] Recover, entropy in {}, seed in {}.\n'.format(
entropy_path, seed_path),
stderr_check='')
- with open(seed_path, io_format.read_mode) as seed_file:
+ with open(seed_path, io_format.read_mode, encoding=io_format.encoding) as seed_file:
content = seed_file.read()
if io_format is Config.Format.BINARY:
seed = str(hexlify(content), 'ascii')
self.assertEqual(test_vector[2], seed)
- with open(entropy_path, io_format.read_mode) as entropy_file:
+ with open(entropy_path, io_format.read_mode, encoding=io_format.encoding) as entropy_file:
content = entropy_file.read()
if io_format is Config.Format.BINARY:
entropy = str(hexlify(content), 'ascii')
@@ -375,7 +339,7 @@ class TestMain(unittest.TestCase):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
entropy_path = os.path.join(tmpdir, '__entropy__')
- with open(mnemonic_path, 'w') as mnemonic_file:
+ with open(mnemonic_path, 'w', encoding='utf-8') as mnemonic_file:
mnemonic_file.write(mnemonic)
self.assert_program([self.SCRIPT, '-r',
'-e', entropy_path,
@@ -384,16 +348,16 @@ class TestMain(unittest.TestCase):
stdout_check='',
stderr_check=stderr)
- def test_recover_non_unicode_mnemonic(self):
+ def test_recover_invalid_mnemonic_invalid_utf8(self):
with TemporaryDirectory() as tmpdir:
- for mnemonic, stderr in get_non_unicode_mnemonics():
+ for mnemonic, stderr in get_invalid_mnemonics_invalid_utf8():
with self.subTest(mnemonic=mnemonic):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
entropy_path = os.path.join(tmpdir, '__entropy__')
with open(mnemonic_path, 'wb') as mnemonic_file:
mnemonic_file.write(mnemonic)
- self.assert_program([self.SCRIPT, '-r',
+ self.assert_program([self.SCRIPT, '-r', '--format', Config.Format.TEXT_HEXADECIMAL.value,
'-e', entropy_path,
'-m', mnemonic_path,
'-s', seed_path],
@@ -401,25 +365,6 @@ class TestMain(unittest.TestCase):
stdout_check='',
stderr_check=stderr)
- def test_recover_non_unicode_password(self):
- mnemonic = TREZOR_TEST_VECTORS['english'][0][1]
- with TemporaryDirectory() as tmpdir:
- for password, stderr in get_invalid_passwords():
- with self.subTest(password=password):
- seed_path = os.path.join(tmpdir, '__seed__')
- mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- entropy_path = os.path.join(tmpdir, '__entropy__')
- with open(mnemonic_path, 'w') as mnemonic_file:
- mnemonic_file.write(mnemonic)
- self.assert_program([self.SCRIPT, '-r',
- '-e', entropy_path,
- '-m', mnemonic_path,
- '-s', seed_path,
- '-p', password],
- ExitCode.EX_DATAERR,
- stdout_check='',
- stderr_check=stderr)
-
def test_verify(self):
with TemporaryDirectory() as tmpdir:
for test_vector in TREZOR_TEST_VECTORS['english']:
@@ -427,9 +372,9 @@ class TestMain(unittest.TestCase):
with self.subTest(mnemonic=test_vector[1], seed=test_vector[2]):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- with open(mnemonic_path, 'w') as mnemonic_file:
+ with open(mnemonic_path, 'w', encoding='utf-8') as mnemonic_file:
mnemonic_file.write(test_vector[1])
- with open(seed_path, io_format.write_mode) as seed_file:
+ with open(seed_path, io_format.write_mode, encoding=io_format.encoding) as seed_file:
seed_file.write(
unhexlify(test_vector[2]) if io_format is Config.Format.BINARY else test_vector[2])
self.assert_program([self.SCRIPT, '-v', '--format', io_format.value, '-p', TREZOR_PASSWORD,
@@ -450,9 +395,9 @@ class TestMain(unittest.TestCase):
with self.subTest(mnemonic=mnemonic, seed=seed, io_format=io_format):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- with open(mnemonic_path, 'w') as mnemonic_file:
+ with open(mnemonic_path, 'w', encoding='utf-8') as mnemonic_file:
mnemonic_file.write(mnemonic)
- with open(seed_path, io_format.write_mode) as seed_file:
+ with open(seed_path, io_format.write_mode, encoding=io_format.encoding) as seed_file:
seed_file.write(seed)
self.assert_program([self.SCRIPT, '-v', '--format', io_format.value,
'-m', mnemonic_path,
@@ -471,9 +416,9 @@ class TestMain(unittest.TestCase):
with self.subTest(mnemonic=mnemonic, seed=seed, io_format=io_format):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- with open(mnemonic_path, 'w') as mnemonic_file:
+ with open(mnemonic_path, 'w', encoding='utf-8') as mnemonic_file:
mnemonic_file.write(mnemonic)
- with open(seed_path, io_format.write_mode) as seed_file:
+ with open(seed_path, io_format.write_mode, encoding=io_format.encoding) as seed_file:
seed_file.write(seed)
self.assert_program([self.SCRIPT, '-v', '--format', io_format.value,
'-m', mnemonic_path,
@@ -481,20 +426,20 @@ class TestMain(unittest.TestCase):
stdout_check='',
stderr_check=stderr)
- def test_verify_non_unicode_mnemonic(self):
+ def test_verify_invalid_mnemonic_invalid_utf8(self):
valid_seeds = [
(unhexlify(VALID_SEED_HEX_TREZOR), Config.Format.BINARY),
(VALID_SEED_HEX_TREZOR, Config.Format.TEXT_HEXADECIMAL),
]
with TemporaryDirectory() as tmpdir:
for seed, io_format in valid_seeds:
- for mnemonic, stderr in get_non_unicode_mnemonics():
+ for mnemonic, stderr in get_invalid_mnemonics_invalid_utf8():
with self.subTest(mnemonic=mnemonic, seed=seed, io_format=io_format):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
with open(mnemonic_path, 'wb') as mnemonic_file:
mnemonic_file.write(mnemonic)
- with open(seed_path, io_format.write_mode) as seed_file:
+ with open(seed_path, io_format.write_mode, encoding=io_format.encoding) as seed_file:
seed_file.write(seed)
self.assert_program([self.SCRIPT, '-v', '--format', io_format.value,
'-m', mnemonic_path,
@@ -508,9 +453,9 @@ class TestMain(unittest.TestCase):
with self.subTest(mnemonic=VALID_MNEMONIC_PHRASE_TREZOR, seed=seed, io_format=io_format):
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- with open(mnemonic_path, 'w') as mnemonic_file:
+ with open(mnemonic_path, 'w', encoding='utf-8') as mnemonic_file:
mnemonic_file.write(VALID_MNEMONIC_PHRASE_TREZOR)
- with open(seed_path, io_format.write_mode) as seed_file:
+ with open(seed_path, io_format.write_mode, encoding=io_format.encoding) as seed_file:
seed_file.write(seed)
self.assert_program([self.SCRIPT, '-v', '--format', io_format.value,
'-m', mnemonic_path,
@@ -518,15 +463,12 @@ class TestMain(unittest.TestCase):
stdout_check='',
stderr_check=stderr)
- @unittest.skip("cannot generate non-UTF-8 seed file")
- def test_verify_non_unicode_seed(self):
- # currently not working: exception occurs after hexlify
- # it should occur just after reading attempt
+ def test_verify_invalid_seed_invalid_utf8(self):
seed = b'\xff' * SEED_LEN
with TemporaryDirectory() as tmpdir:
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- with open(mnemonic_path, 'w') as mnemonic_file:
+ with open(mnemonic_path, 'w', encoding='utf-8') as mnemonic_file:
mnemonic_file.write(VALID_MNEMONIC_PHRASE_TREZOR)
with open(seed_path, 'wb') as seed_file:
seed_file.write(seed)
@@ -537,34 +479,12 @@ class TestMain(unittest.TestCase):
stdout_check='',
stderr_check=None)
- def test_verify_non_unicode_password(self):
- valid_seeds = [
- (unhexlify(VALID_SEED_HEX_TREZOR), Config.Format.BINARY),
- (VALID_SEED_HEX_TREZOR, Config.Format.TEXT_HEXADECIMAL),
- ]
- with TemporaryDirectory() as tmpdir:
- for seed, io_format in valid_seeds:
- for password, stderr in get_invalid_passwords():
- with self.subTest(seed=seed, io_format=io_format, password=password):
- seed_path = os.path.join(tmpdir, '__seed__')
- mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- with open(mnemonic_path, 'w') as mnemonic_file:
- mnemonic_file.write(VALID_MNEMONIC_PHRASE_TREZOR)
- with open(seed_path, io_format.write_mode) as seed_file:
- seed_file.write(seed)
- self.assert_program([self.SCRIPT, '-v', '--format', io_format.value,
- '-m', mnemonic_path,
- '-s', seed_path,
- '-p', password], ExitCode.EX_DATAERR,
- stdout_check='',
- stderr_check=stderr)
-
def test_verify_missing_seed_file(self):
with TemporaryDirectory() as tmpdir:
non_existing_filepath = os.path.join(tmpdir, '__this_file_does_not_exist__')
for io_format in Config.Format:
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- with open(mnemonic_path, 'w') as mnemonic_file:
+ with open(mnemonic_path, 'w', encoding='utf-8') as mnemonic_file:
mnemonic_file.write(VALID_MNEMONIC_PHRASE_TREZOR)
self.assert_program([self.SCRIPT, '-v', '--format', io_format.value,
'-m', mnemonic_path,
@@ -581,7 +501,7 @@ class TestMain(unittest.TestCase):
for seed, io_format in valid_seeds:
seed_path = os.path.join(tmpdir, '__seed__')
mnemonic_path = os.path.join(tmpdir, '__mnemonic__')
- with open(seed_path, io_format.write_mode) as seed_file:
+ with open(seed_path, io_format.write_mode, encoding=io_format.encoding) as seed_file:
seed_file.write(seed)
self.assert_program([self.SCRIPT, '-v', '--format', io_format.value,
'-m', mnemonic_path,
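The recurring change in the test diff above is passing an explicit `encoding=` argument to `open()`. A minimal standalone sketch (hypothetical helper, not part of the project) of why that matters: it decouples the file round-trip from the platform's locale-dependent default encoding.

```python
import os
import tempfile

def roundtrip(text, encoding="utf-8"):
    # Write and re-read a file with the same explicit encoding;
    # without `encoding=`, open() falls back to the locale default,
    # which differs between platforms (e.g. cp1252 on Windows).
    path = os.path.join(tempfile.mkdtemp(), "mnemonic.txt")
    with open(path, "w", encoding=encoding) as f:
        f.write(text)
    with open(path, "r", encoding=encoding) as f:
        return f.read()
```

This is the same pattern the patched tests apply when writing mnemonic and seed files.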
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///croot/attrs_1668696182826/work
certifi @ file:///croot/certifi_1671487769961/work/certifi
coverage==7.2.7
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1648562407465/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
-e git+https://github.com/mvondracek/PA193_mnemonic_Slytherin.git@a23ee44909ca5f930478e0d7ead02b4bae6a0715#egg=pa193mnemonicslytherin
packaging @ file:///croot/packaging_1671697413597/work
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pytest==7.1.2
pytest-cov==4.1.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
zipp @ file:///croot/zipp_1672387121353/work
| name: PA193_mnemonic_Slytherin
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib-metadata=4.11.3=py37h06a4308_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.2.7
- pytest-cov==4.1.0
prefix: /opt/conda/envs/PA193_mnemonic_Slytherin
| [
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_generate",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_generate_invalid_entropy",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_recover",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_verify",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_verify_invalid_mnemonic",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_verify_invalid_mnemonic_invalid_utf8",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_verify_invalid_seed",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_verify_missing_mnemonic_file",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_verify_seeds_do_not_match"
] | [] | [
"pa193mnemonicslytherin/test_mnemonic.py::TestInternalTestHelpers::test_extract_checksum",
"pa193mnemonicslytherin/test_mnemonic.py::TestInternalTestHelpers::test_extract_checksum_invalid_dictionary_lines",
"pa193mnemonicslytherin/test_mnemonic.py::TestInternalTestHelpers::test_extract_checksum_invalid_dictionary_long_word",
"pa193mnemonicslytherin/test_mnemonic.py::TestInternalTestHelpers::test_extract_checksum_invalid_dictionary_name",
"pa193mnemonicslytherin/test_mnemonic.py::TestInternalTestHelpers::test_extract_checksum_invalid_dictionary_name_unsupported",
"pa193mnemonicslytherin/test_mnemonic.py::TestInternalTestHelpers::test_extract_checksum_invalid_dictionary_words_on_line",
"pa193mnemonicslytherin/test_mnemonic.py::TestInternalTestHelpers::test_extract_checksum_invalid_mnemonic",
"pa193mnemonicslytherin/test_mnemonic.py::TestInternalTestHelpers::test_extract_checksum_invalid_mnemonic_invalid_utf8",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_generate",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_generate_invalid_arguments",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_generate_invalid_password_invalid_utf8",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_generate_invalid_password_too_long",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_recover",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_recover_invalid_arguments",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_recover_invalid_password_invalid_utf8",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_recover_invalid_password_too_long",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_verify",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_verify_invalid_arguments",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_verify_invalid_password_invalid_utf8",
"pa193mnemonicslytherin/test_mnemonic.py::TestPublicFunctions::test_verify_invalid_password_too_long",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test___init__",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test___init___invalid_argument",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test___init___invalid_utf8",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test___init___too_long_str",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test_to_entropy",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test_to_entropy_deep_copy",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test_to_seed",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test_to_seed_invalid_password",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test_to_seed_invalid_password_invalid_utf8",
"pa193mnemonicslytherin/test_mnemonic.py::TestMnemonic::test_to_seed_invalid_password_too_long",
"pa193mnemonicslytherin/test_mnemonic.py::TestSeed::test___eq__",
"pa193mnemonicslytherin/test_mnemonic.py::TestSeed::test___init__",
"pa193mnemonicslytherin/test_mnemonic.py::TestSeed::test___init___invalid_argument",
"pa193mnemonicslytherin/test_mnemonic.py::TestSeed::test__ne__",
"pa193mnemonicslytherin/test_mnemonic.py::TestEntropy::test___init__",
"pa193mnemonicslytherin/test_mnemonic.py::TestEntropy::test___init___invalid_argument",
"pa193mnemonicslytherin/test_mnemonic.py::TestEntropy::test_checksum",
"pa193mnemonicslytherin/test_mnemonic.py::TestEntropy::test_to_mnemonic",
"pa193mnemonicslytherin/test_mnemonic.py::TestEntropy::test_to_mnemonic_deep_copy",
"pa193mnemonicslytherin/test_mnemonic.py::Test_DictionaryAccess::test___init___invalid_dictionary_lines",
"pa193mnemonicslytherin/test_mnemonic.py::Test_DictionaryAccess::test___init___invalid_dictionary_long_word",
"pa193mnemonicslytherin/test_mnemonic.py::Test_DictionaryAccess::test___init___invalid_dictionary_name",
"pa193mnemonicslytherin/test_mnemonic.py::Test_DictionaryAccess::test___init___invalid_dictionary_name_unsupported",
"pa193mnemonicslytherin/test_mnemonic.py::Test_DictionaryAccess::test___init___invalid_dictionary_non_utf8",
"pa193mnemonicslytherin/test_mnemonic.py::Test_DictionaryAccess::test___init___invalid_dictionary_words_on_line",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_arguments_EX_NOINPUT",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_arguments_error",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_arguments_ok_terminated",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_generate_invalid_entropy_invalid_utf8",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_recover_invalid_mnemonic",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_recover_invalid_mnemonic_invalid_utf8",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_verify_invalid_seed_invalid_utf8",
"pa193mnemonicslytherin/test_mnemoniccli.py::TestMain::test_verify_missing_seed_file"
] | [] | null | 5,708 | 1,718 | [
"pa193mnemonicslytherin/mnemoniccli.py"
] |
|
iterative__dvc-2705 | d932405ee148767c5dbbbc394d6cd414270bf8f0 | 2019-10-31 16:58:58 | 143cbdc828bbb90e764d657e8506c17564708a20 | diff --git a/dvc/output/local.py b/dvc/output/local.py
index 777ac3756..cbd9c9503 100644
--- a/dvc/output/local.py
+++ b/dvc/output/local.py
@@ -26,7 +26,7 @@ class OutputLOCAL(OutputBase):
# so we should expect both posix and windows style paths.
# PathInfo accepts both, i.e. / works everywhere, \ only on win.
#
- # FIXME: if we have Windows path containig / or posix one with \
+ # FIXME: if we have Windows path containing / or posix one with \
# then we have #2059 bug and can't really handle that.
p = self.REMOTE.path_cls(path)
if not p.is_absolute():
diff --git a/dvc/repo/get_url.py b/dvc/repo/get_url.py
index ff191e149..8ba187865 100644
--- a/dvc/repo/get_url.py
+++ b/dvc/repo/get_url.py
@@ -12,7 +12,8 @@ def get_url(url, out=None):
if os.path.exists(url):
url = os.path.abspath(url)
- out = os.path.abspath(out)
+
+ out = os.path.abspath(out)
dep, = dependency.loads_from(None, [url])
out, = output.loads_from(None, [out], use_cache=False)
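The one-line move in the hunk above can be illustrated standalone. This is a hypothetical sketch of the control flow, not dvc's actual code: before the fix, `out` was only normalized inside the `os.path.exists(url)` branch, so remote URLs (e.g. `s3://...`) skipped it entirely.

```python
import os

def resolve_paths(url, out):
    # Only local sources pass the existence check and get absolutized...
    if os.path.exists(url):
        url = os.path.abspath(url)
    # ...but the destination must be absolutized unconditionally,
    # which is exactly what the patch changes.
    out = os.path.abspath(out)
    return url, out
```

With a remote URL, the first branch is skipped but `out` is still resolved to an absolute path.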
| Problems when downloading file from s3
DVC version 0.66.1, pip, windows
```
C:\Users\Павел\PycharmProjects\musket_core>dvc get-url s3://mybucket/data.txt ./data.txt
ERROR: unexpected error - 'NoneType' object has no attribute 'wdir'
```
Regards,
Pavel | iterative/dvc | diff --git a/tests/func/test_get_url.py b/tests/func/test_get_url.py
index eae2b556e..fab7a4db6 100644
--- a/tests/func/test_get_url.py
+++ b/tests/func/test_get_url.py
@@ -1,13 +1,19 @@
from __future__ import unicode_literals
import os
+import boto3
import filecmp
import pytest
+from moto import mock_s3
+
+from dvc.remote import RemoteS3
from dvc.repo import Repo
from dvc.utils import makedirs
+from tests.func.test_data_cloud import get_aws_url
+
def test_get_file(repo_dir):
src = repo_dir.FOO
@@ -32,3 +38,27 @@ def test_get_url_to_dir(dname, repo_dir):
assert os.path.isdir(dname)
assert filecmp.cmp(repo_dir.DATA, dst, shallow=False)
+
+
+@mock_s3
[email protected]("dst", [".", "./from"])
+def test_get_url_from_non_local_path_to_dir_and_file(repo_dir, dst):
+ file_name = "from"
+ file_content = "data"
+ base_info = RemoteS3.path_cls(get_aws_url())
+ from_info = base_info / file_name
+
+ s3 = boto3.client("s3")
+ s3.create_bucket(Bucket=from_info.bucket)
+ s3.put_object(
+ Bucket=from_info.bucket, Key=from_info.path, Body=file_content
+ )
+
+ Repo.get_url(from_info.url, dst)
+
+ result_path = os.path.join(dst, file_name) if os.path.isdir(dst) else dst
+
+ assert os.path.exists(result_path)
+ assert os.path.isfile(result_path)
+ with open(result_path, "r") as fd:
+ assert fd.read() == file_content
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 2
},
"num_modified_files": 2
} | 0.66 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-timeout",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"flaky",
"mock",
"xmltodict",
"awscli",
"google-compute-engine",
"Pygments",
"collective.checkdocs",
"flake8",
"psutil",
"flake8-docstrings",
"pydocstyle",
"jaraco.windows",
"mock-ssh-server",
"moto",
"rangehttpserver"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aliyun-python-sdk-core==2.16.0
aliyun-python-sdk-core-v3==2.13.33
aliyun-python-sdk-kms==2.16.5
appdirs==1.4.4
asciimatics==1.14.0
atpublic==3.1.2
attrs @ file:///croot/attrs_1668696182826/work
autocommand==2.2.2
awscli==1.31.13
azure-common==1.1.28
azure-storage-blob==2.1.0
azure-storage-common==2.1.0
bcrypt==4.2.1
boto==2.49.0
boto3==1.9.115
botocore==1.12.253
cachetools==4.2.4
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
charset-normalizer==3.4.1
collective.checkdocs==0.2
colorama==0.4.4
configobj==5.0.9
configparser==5.3.0
coverage==7.2.7
crcmod==1.7
cryptography==44.0.2
decorator==5.1.1
distro==1.9.0
docutils==0.15.2
-e git+https://github.com/iterative/dvc.git@d932405ee148767c5dbbbc394d6cd414270bf8f0#egg=dvc
execnet==2.0.2
flake8==5.0.4
flake8-docstrings==1.7.0
flaky==3.8.1
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
flufl.lock==7.1.1
funcy==2.0
future==1.0.0
gitdb==4.0.12
gitdb2==4.0.2
GitPython==3.1.44
google-api-core==2.10.2
google-auth==1.35.0
google-cloud-core==1.7.3
google-cloud-storage==1.19.0
google-compute-engine==2.8.13
google-crc32c==1.5.0
google-resumable-media==2.7.2
googleapis-common-protos==1.69.2
grandalf==0.6
humanize==4.6.0
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.12.0
inflect==6.0.5
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
jaraco.classes==3.2.3
jaraco.collections==4.2.0
jaraco.context==4.3.0
jaraco.functools==3.7.0
jaraco.structures==2.1.0
jaraco.text==3.11.1
jaraco.ui==2.3.0
jaraco.windows==5.7.0
Jinja2==3.1.6
jmespath==0.10.0
jsonpath-ng==1.7.0
MarkupSafe==2.1.5
mccabe==0.7.0
mock==5.2.0
mock-ssh-server==0.9.1
more-itertools==9.1.0
moto==4.2.14
nanotime==0.5.2
networkx==2.3
numpy==1.21.6
oss2==2.6.1
packaging @ file:///croot/packaging_1671697413597/work
paramiko==3.5.1
path==16.6.0
pathspec==0.11.2
Pillow==9.5.0
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
ply==3.11
protobuf==4.24.4
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyarrow==0.14.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycodestyle==2.9.1
pycparser==2.21
pycryptodome==3.22.0
pydantic==1.10.21
pydocstyle==6.3.0
pyfiglet==0.8.post1
pyflakes==2.5.0
Pygments==2.17.2
PyNaCl==1.5.0
pyparsing==3.1.4
pytest==7.1.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-timeout==2.3.1
pytest-xdist==3.5.0
python-dateutil==2.9.0.post0
PyYAML==6.0.1
rangehttpserver==1.4.0
requests==2.31.0
responses==0.23.3
rsa==4.7.2
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.8
s3transfer==0.2.1
schema==0.7.7
shortuuid==1.0.13
six==1.17.0
smmap==5.0.2
snowballstemmer==2.2.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
tqdm==4.67.1
treelib==1.7.1
types-PyYAML==6.0.12.12
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
urllib3==1.25.11
wcwidth==0.2.13
Werkzeug==2.2.3
xmltodict==0.14.2
zipp @ file:///croot/zipp_1672387121353/work
| name: dvc
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- aliyun-python-sdk-core==2.16.0
- aliyun-python-sdk-core-v3==2.13.33
- aliyun-python-sdk-kms==2.16.5
- appdirs==1.4.4
- asciimatics==1.14.0
- atpublic==3.1.2
- autocommand==2.2.2
- awscli==1.31.13
- azure-common==1.1.28
- azure-storage-blob==2.1.0
- azure-storage-common==2.1.0
- bcrypt==4.2.1
- boto==2.49.0
- boto3==1.9.115
- botocore==1.12.253
- cachetools==4.2.4
- cffi==1.15.1
- charset-normalizer==3.4.1
- collective-checkdocs==0.2
- colorama==0.4.4
- configobj==5.0.9
- configparser==5.3.0
- coverage==7.2.7
- crcmod==1.7
- cryptography==44.0.2
- decorator==5.1.1
- distro==1.9.0
- docutils==0.15.2
- dvc==0.66.2+d93240
- execnet==2.0.2
- flake8==5.0.4
- flake8-docstrings==1.7.0
- flaky==3.8.1
- flufl-lock==7.1.1
- funcy==2.0
- future==1.0.0
- gitdb==4.0.12
- gitdb2==4.0.2
- gitpython==3.1.44
- google-api-core==2.10.2
- google-auth==1.35.0
- google-cloud-core==1.7.3
- google-cloud-storage==1.19.0
- google-compute-engine==2.8.13
- google-crc32c==1.5.0
- google-resumable-media==2.7.2
- googleapis-common-protos==1.69.2
- grandalf==0.6
- humanize==4.6.0
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.12.0
- inflect==6.0.5
- jaraco-classes==3.2.3
- jaraco-collections==4.2.0
- jaraco-context==4.3.0
- jaraco-functools==3.7.0
- jaraco-structures==2.1.0
- jaraco-text==3.11.1
- jaraco-ui==2.3.0
- jaraco-windows==5.7.0
- jinja2==3.1.6
- jmespath==0.10.0
- jsonpath-ng==1.7.0
- markupsafe==2.1.5
- mccabe==0.7.0
- mock==5.2.0
- mock-ssh-server==0.9.1
- more-itertools==9.1.0
- moto==4.2.14
- nanotime==0.5.2
- networkx==2.3
- numpy==1.21.6
- oss2==2.6.1
- paramiko==3.5.1
- path==16.6.0
- pathspec==0.11.2
- pillow==9.5.0
- ply==3.11
- protobuf==4.24.4
- psutil==7.0.0
- pyarrow==0.14.0
- pyasn1==0.5.1
- pyasn1-modules==0.3.0
- pycodestyle==2.9.1
- pycparser==2.21
- pycryptodome==3.22.0
- pydantic==1.10.21
- pydocstyle==6.3.0
- pyfiglet==0.8.post1
- pyflakes==2.5.0
- pygments==2.17.2
- pynacl==1.5.0
- pyparsing==3.1.4
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-timeout==2.3.1
- pytest-xdist==3.5.0
- python-dateutil==2.9.0.post0
- pyyaml==6.0.1
- rangehttpserver==1.4.0
- requests==2.31.0
- responses==0.23.3
- rsa==4.7.2
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.8
- s3transfer==0.2.1
- schema==0.7.7
- shortuuid==1.0.13
- six==1.17.0
- smmap==5.0.2
- snowballstemmer==2.2.0
- tqdm==4.67.1
- treelib==1.7.1
- types-pyyaml==6.0.12.12
- urllib3==1.25.11
- wcwidth==0.2.13
- werkzeug==2.2.3
- xmltodict==0.14.2
prefix: /opt/conda/envs/dvc
| [
"tests/func/test_get_url.py::test_get_url_from_non_local_path_to_dir_and_file[.]",
"tests/func/test_get_url.py::test_get_url_from_non_local_path_to_dir_and_file[./from]"
] | [] | [
"tests/func/test_get_url.py::test_get_file",
"tests/func/test_get_url.py::test_get_url_to_dir[.]",
"tests/func/test_get_url.py::test_get_url_to_dir[dir]",
"tests/func/test_get_url.py::test_get_url_to_dir[dir/subdir]"
] | [] | Apache License 2.0 | 5,710 | 330 | [
"dvc/output/local.py",
"dvc/repo/get_url.py"
] |
|
dwavesystems__dwave-cloud-client-338 | a983749f8ce208eb45c4f121bfff6a2b2e31ee63 | 2019-10-31 17:09:44 | 0835a006e6a12bea18ff470d068ab0203be1d719 | diff --git a/dwave/cloud/cli.py b/dwave/cloud/cli.py
index f11bdb2..af6ecc7 100644
--- a/dwave/cloud/cli.py
+++ b/dwave/cloud/cli.py
@@ -234,11 +234,18 @@ def _ping(config_file, profile, solver_def, request_timeout, polling_timeout, ou
except Exception as e:
raise CLIError("Unexpected error while fetching solver: {!r}".format(e), 5)
+ if hasattr(solver, 'nodes'):
+ # structured solver: use the first existing node
+ problem = ({min(solver.nodes): 0}, {})
+ else:
+ # unstructured solver doesn't constrain problem graph
+ problem = ({0: 1}, {})
+
t1 = timer()
output("Using solver: {solver_id}", solver_id=solver.id)
try:
- future = solver.sample_ising({0: 1}, {})
+ future = solver.sample_ising(*problem)
timing = future.timing
except RequestTimeout:
raise CLIError("API connection timed out.", 8)
diff --git a/dwave/cloud/utils.py b/dwave/cloud/utils.py
index 0608f67..802684a 100644
--- a/dwave/cloud/utils.py
+++ b/dwave/cloud/utils.py
@@ -486,10 +486,13 @@ class tictoc(object):
self.dt = perf_counter() - self.tick
-def parse_loglevel(level_name):
+def parse_loglevel(level_name, default=logging.NOTSET):
"""Resolve numeric and symbolic log level names to numeric levels."""
- level_name = (level_name or '').strip().lower()
+ try:
+ level_name = str(level_name or '').strip().lower()
+ except:
+ return default
# note: make sure `TRACE` level is added to `logging` before calling this
known_levels = {
@@ -507,7 +510,7 @@ def parse_loglevel(level_name):
try:
level = int(level_name)
except ValueError:
- level = known_levels.get(level_name, logging.NOTSET)
+ level = known_levels.get(level_name, default)
return level
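The hardened `parse_loglevel` above can be reproduced as a self-contained sketch (the custom `TRACE` level is omitted here, so this is a simplification, not the library function): any unparseable input now falls back to `default` instead of raising.

```python
import logging

def parse_loglevel(level_name, default=logging.NOTSET):
    # Coerce anything (None, ints, dicts, strings) to a lowercase
    # string; bail out to the default if even that fails.
    try:
        level_name = str(level_name or '').strip().lower()
    except Exception:
        return default
    known_levels = {'debug': logging.DEBUG, 'info': logging.INFO,
                    'warning': logging.WARNING, 'error': logging.ERROR,
                    'critical': logging.CRITICAL}
    # Numeric levels pass through; symbolic names are looked up,
    # unknown names resolve to `default`.
    try:
        return int(level_name)
    except ValueError:
        return known_levels.get(level_name, default)
```

This matches the behavior exercised by the new `test_parse_loglevel_*` tests: numeric, symbolic, and invalid inputs are all accepted.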
| Ping should respect solver's graph
Currently, a fixed Ising problem `({0: 1}, {})` is always submitted, obviously ignoring the actual solver's graph.
Note that `sample --random-problem` respects the graph structure. | dwavesystems/dwave-cloud-client | diff --git a/tests/test_cli.py b/tests/test_cli.py
index ec342e5..0e9c136 100644
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -180,6 +180,9 @@ class TestCli(unittest.TestCase):
profile = 'profile'
with mock.patch('dwave.cloud.cli.Client') as m:
+ # mock returned solver
+ client = m.from_config.return_value
+ client.get_solver.return_value.nodes = [5, 7, 3]
runner = CliRunner()
with runner.isolated_filesystem():
@@ -196,12 +199,11 @@ class TestCli(unittest.TestCase):
request_timeout=0.5, polling_timeout=30)
# get solver called?
- c = m.from_config.return_value
- c.get_solver.assert_called_with()
+ client.get_solver.assert_called_with()
# sampling method called on solver?
- s = c.get_solver.return_value
- s.sample_ising.assert_called_with({0: 1}, {})
+ solver = client.get_solver.return_value
+ solver.sample_ising.assert_called_with({3: 0}, {})
self.assertEqual(result.exit_code, 0)
diff --git a/tests/test_mock_solver_loading.py b/tests/test_mock_solver_loading.py
index 158462b..953c7cd 100644
--- a/tests/test_mock_solver_loading.py
+++ b/tests/test_mock_solver_loading.py
@@ -47,8 +47,8 @@ def structured_solver_data(id_, incomplete=False):
obj = {
"properties": {
"supported_problem_types": ["qubo", "ising"],
- "qubits": [0, 1, 2],
- "couplers": [[0, 1], [0, 2], [1, 2]],
+ "qubits": [1, 2, 3],
+ "couplers": [[1, 2], [1, 3], [2, 3]],
"num_qubits": 3,
"parameters": {"num_reads": "Number of samples to return."}
},
diff --git a/tests/test_utils.py b/tests/test_utils.py
index a3cecf5..8a2f7d1 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import logging
import unittest
from collections import OrderedDict
from itertools import count
@@ -20,7 +21,7 @@ from datetime import datetime
from dwave.cloud.utils import (
uniform_iterator, uniform_get, strip_head, strip_tail,
active_qubits, generate_random_ising_problem,
- default_text_input, utcnow, cached, retried)
+ default_text_input, utcnow, cached, retried, parse_loglevel)
from dwave.cloud.testing import mock
@@ -118,6 +119,25 @@ class TestSimpleUtils(unittest.TestCase):
unaware = t.replace(tzinfo=None)
self.assertLess((now - unaware).total_seconds(), 1.0)
+ def test_parse_loglevel_invalid(self):
+ """Parsing invalid log levels returns NOTSET."""
+ notset = logging.NOTSET
+
+ self.assertEqual(parse_loglevel(''), notset)
+ self.assertEqual(parse_loglevel(' '), notset)
+ self.assertEqual(parse_loglevel(None), notset)
+ self.assertEqual(parse_loglevel(notset), notset)
+ self.assertEqual(parse_loglevel('nonexisting'), notset)
+ self.assertEqual(parse_loglevel({'a': 1}), notset)
+ self.assertIsNone(parse_loglevel('nonexisting', default=None))
+
+ def test_parse_loglevel_numeric_and_symbolic(self):
+ self.assertEqual(parse_loglevel('info'), logging.INFO)
+ self.assertEqual(parse_loglevel('INFO'), logging.INFO)
+ self.assertEqual(parse_loglevel(logging.INFO), logging.INFO)
+ self.assertEqual(parse_loglevel(str(logging.INFO)), logging.INFO)
+ self.assertEqual(parse_loglevel(' %d ' % logging.INFO), logging.INFO)
+
class TestCachedDecorator(unittest.TestCase):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 2
} | 0.6 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[bqm]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"mock",
"requests_mock",
"coverage",
"coveralls",
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
charset-normalizer==2.0.12
click==8.0.4
coverage==6.2
coveralls==3.3.1
dimod==0.10.10
docopt==0.6.2
-e git+https://github.com/dwavesystems/dwave-cloud-client.git@a983749f8ce208eb45c4f121bfff6a2b2e31ee63#egg=dwave_cloud_client
homebase==1.0.1
idna==3.10
importlib-metadata==4.8.3
iniconfig==1.1.1
mock==5.2.0
numpy==1.19.5
packaging==21.3
plucky==0.4.3
pluggy==1.0.0
py==1.11.0
pyparsing==2.4.7
PySocks==1.7.1
pytest==7.0.1
python-dateutil==2.9.0.post0
requests==2.27.1
requests-mock==1.12.1
six==1.17.0
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: dwave-cloud-client
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- charset-normalizer==2.0.12
- click==8.0.4
- coverage==6.2
- coveralls==3.3.1
- dimod==0.10.10
- docopt==0.6.2
- homebase==1.0.1
- idna==3.10
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- mock==5.2.0
- numpy==1.19.5
- packaging==21.3
- plucky==0.4.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==2.4.7
- pysocks==1.7.1
- pytest==7.0.1
- python-dateutil==2.9.0.post0
- requests==2.27.1
- requests-mock==1.12.1
- six==1.17.0
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/dwave-cloud-client
| [
"tests/test_cli.py::TestCli::test_ping",
"tests/test_utils.py::TestSimpleUtils::test_parse_loglevel_invalid",
"tests/test_utils.py::TestSimpleUtils::test_parse_loglevel_numeric_and_symbolic"
] | [] | [
"tests/test_cli.py::TestCli::test_config_create",
"tests/test_cli.py::TestCli::test_config_ls",
"tests/test_cli.py::TestCli::test_configure_inspect",
"tests/test_cli.py::TestCli::test_sample",
"tests/test_cli.py::TestCli::test_solvers",
"tests/test_cli.py::TestCli::test_upload",
"tests/test_mock_solver_loading.py::MockConnectivityTests::test_bad_token",
"tests/test_mock_solver_loading.py::MockConnectivityTests::test_bad_url",
"tests/test_mock_solver_loading.py::MockConnectivityTests::test_good_connection",
"tests/test_mock_solver_loading.py::MockSolverLoading::test_load_all_solvers",
"tests/test_mock_solver_loading.py::MockSolverLoading::test_load_missing_solver",
"tests/test_mock_solver_loading.py::MockSolverLoading::test_load_solver",
"tests/test_mock_solver_loading.py::MockSolverLoading::test_load_solver_broken_response",
"tests/test_mock_solver_loading.py::MockSolverLoading::test_load_solver_missing_data",
"tests/test_mock_solver_loading.py::MockSolverLoading::test_solver_feature_properties",
"tests/test_mock_solver_loading.py::MockSolverLoading::test_solver_filtering_in_client",
"tests/test_mock_solver_loading.py::MockLegacyConfiguration::test_env_args_set",
"tests/test_mock_solver_loading.py::MockLegacyConfiguration::test_env_with_file_set",
"tests/test_mock_solver_loading.py::MockLegacyConfiguration::test_explicit_only",
"tests/test_mock_solver_loading.py::MockLegacyConfiguration::test_explicit_with_file",
"tests/test_mock_solver_loading.py::MockLegacyConfiguration::test_file_read_error",
"tests/test_mock_solver_loading.py::MockLegacyConfiguration::test_nonexisting_file",
"tests/test_mock_solver_loading.py::MockLegacyConfiguration::test_only_file",
"tests/test_mock_solver_loading.py::MockLegacyConfiguration::test_only_file_key",
"tests/test_mock_solver_loading.py::MockConfiguration::test_custom_options",
"tests/test_utils.py::TestSimpleUtils::test_active_qubits_dict",
"tests/test_utils.py::TestSimpleUtils::test_active_qubits_list",
"tests/test_utils.py::TestSimpleUtils::test_default_text_input",
"tests/test_utils.py::TestSimpleUtils::test_generate_random_ising_problem",
"tests/test_utils.py::TestSimpleUtils::test_generate_random_ising_problem_default_solver_ranges",
"tests/test_utils.py::TestSimpleUtils::test_generate_random_ising_problem_with_user_constrained_ranges",
"tests/test_utils.py::TestSimpleUtils::test_strip_head",
"tests/test_utils.py::TestSimpleUtils::test_strip_tail",
"tests/test_utils.py::TestSimpleUtils::test_uniform_get",
"tests/test_utils.py::TestSimpleUtils::test_uniform_iterator",
"tests/test_utils.py::TestSimpleUtils::test_utcnow",
"tests/test_utils.py::TestCachedDecorator::test_args_collision",
"tests/test_utils.py::TestCachedDecorator::test_args_hashing",
"tests/test_utils.py::TestCachedDecorator::test_default_maxage",
"tests/test_utils.py::TestCachedDecorator::test_exceptions",
"tests/test_utils.py::TestCachedDecorator::test_expiry",
"tests/test_utils.py::TestRetriedDecorator::test_backoff_constant",
"tests/test_utils.py::TestRetriedDecorator::test_backoff_func",
"tests/test_utils.py::TestRetriedDecorator::test_backoff_seq",
"tests/test_utils.py::TestRetriedDecorator::test_decorator",
"tests/test_utils.py::TestRetriedDecorator::test_exc_raised",
"tests/test_utils.py::TestRetriedDecorator::test_func_called",
"tests/test_utils.py::TestRetriedDecorator::test_func_called_only_until_succeeds"
] | [] | Apache License 2.0 | 5,712 | 510 | [
"dwave/cloud/cli.py",
"dwave/cloud/utils.py"
] |
|
pydicom__pydicom-965 | ee775c8a137cd8e0b69b46dc24c23648c31fe34c | 2019-11-01 14:43:06 | 7241f5d9db0de589b230bb84212fbb643a7c86c3 | pep8speaks: Hello @mrbean-bremen! Thanks for opening this PR. We checked the lines you've touched for [PEP 8](https://www.python.org/dev/peps/pep-0008) issues, and found:
* In the file [`pydicom/config.py`](https://github.com/pydicom/pydicom/blob/1100467c86fa0911ecd4c9e0657bdb6cb74f632c/pydicom/config.py):
> [Line 91:74](https://github.com/pydicom/pydicom/blob/1100467c86fa0911ecd4c9e0657bdb6cb74f632c/pydicom/config.py#L91): [W291](https://duckduckgo.com/?q=pep8%20W291) trailing whitespace
> [Line 92:71](https://github.com/pydicom/pydicom/blob/1100467c86fa0911ecd4c9e0657bdb6cb74f632c/pydicom/config.py#L92): [W291](https://duckduckgo.com/?q=pep8%20W291) trailing whitespace
darcymason: > which had been discussed in #896, though with no clear conclusion, as far as I can see
Just had a quick read through that again... I thought there was more discussion about sequences (perhaps some other issue/PR?). I personally like the empty list (and the `is_empty` concept that was discussed, which can always work), but don't have time today to properly look this through and answer. Will look further over the weekend. | diff --git a/pydicom/config.py b/pydicom/config.py
index 00f63d5d8..421799645 100644
--- a/pydicom/config.py
+++ b/pydicom/config.py
@@ -87,9 +87,10 @@ Default ``False``
"""
use_none_as_empty_text_VR_value = False
-""" If ``True``, the value of decoded empty data element is always ``None``.
-If ``False`` (the default), the value of an empty data element with
-a text VR is an empty string, for all other VRs it is also ``None``.
+""" If ``True``, the value of a decoded empty data element with
+a text VR is ``None``, otherwise (the default), it is an empty string.
+For all other VRs the behavior does not change - the value is an empty
+list for VR 'SQ' and ``None`` for all other VRs.
Note that the default of this value will change to ``True`` in version 2.0.
"""
diff --git a/pydicom/dataelem.py b/pydicom/dataelem.py
index 8dfe3e2f2..5e090ea06 100644
--- a/pydicom/dataelem.py
+++ b/pydicom/dataelem.py
@@ -48,10 +48,12 @@ def empty_value_for_VR(VR, raw=False):
The behavior of this property depends on the setting of
:attr:`config.use_none_as_empty_value`. If that is set to ``True``,
- an empty value is always represented by ``None``, otherwise it depends
- on `VR`. For text VRs (this includes 'AE', 'AS', 'CS', 'DA', 'DT', 'LO',
- 'LT', 'PN', 'SH', 'ST', 'TM', 'UC', 'UI', 'UR' and 'UT') an empty string
- is used as empty value representation, for all other VRs, ``None``.
+ an empty value is represented by ``None`` (except for VR 'SQ'), otherwise
+ it depends on `VR`. For text VRs (this includes 'AE', 'AS', 'CS', 'DA',
+ 'DT', 'LO', 'LT', 'PN', 'SH', 'ST', 'TM', 'UC', 'UI', 'UR' and 'UT') an
+ empty string is used as empty value representation, for all other VRs
+ except 'SQ', ``None``. For empty sequence values (VR 'SQ') an empty list
+ is used in all cases.
Note that this is used only if decoding the element - it is always
possible to set the value to another empty value representation,
which will be preserved during the element object lifetime.
@@ -67,10 +69,12 @@ def empty_value_for_VR(VR, raw=False):
Returns
-------
- str or bytes or None
+ str or bytes or None or list
The value a data element with `VR` is assigned on decoding
if it is empty.
"""
+ if VR == 'SQ':
+ return []
if config.use_none_as_empty_text_VR_value:
return None
if VR in ('AE', 'AS', 'CS', 'DA', 'DT', 'LO', 'LT',
| Empty data elements with value representation SQ are set to None
**Describe the bug**
In the current `master`, empty data elements are not read correctly from files. The attribute value is set to `None` instead of `[]`.
**Expected behavior**
Create empty list `[]` for empty sequence, i.e., a sequence with zero items.
**Steps To Reproduce**
```python
import pydicom
ds = pydicom.Dataset()
ds.AcquisitionContextSequence = []
print(ds)
ds.is_little_endian = True
ds.is_implicit_VR = True
ds.save_as('/tmp/test.dcm')
reloaded_ds = pydicom.dcmread('/tmp/test.dcm', force=True)
print(reloaded_ds)
```
This prints:
```
(0040, 0555) Acquisition Context Sequence 0 item(s) ----
...
TypeError: With tag (0040, 0555) got exception: object of type 'NoneType' has no len()
Traceback (most recent call last):
File "/private/tmp/pydicom/pydicom/tag.py", line 30, in tag_in_exception
yield
File "/private/tmp/pydicom/pydicom/dataset.py", line 1599, in _pretty_str
len(data_element.value)))
TypeError: object of type 'NoneType' has no len()
```
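For reference, the intended dispatch is easy to sketch without pydicom installed: special-case VR 'SQ' before everything else, so an empty sequence always decodes to a list. This is a simplified stand-in for `empty_value_for_VR` — the `use_none_for_text` parameter here is a hypothetical substitute for `config.use_none_as_empty_text_VR_value`, and the real function also takes a `raw` flag:

```python
# Simplified sketch of the expected empty-value dispatch: VR 'SQ' is
# checked first, so empty sequences always come back as an empty list.
TEXT_VRS = ('AE', 'AS', 'CS', 'DA', 'DT', 'LO', 'LT',
            'PN', 'SH', 'ST', 'TM', 'UC', 'UI', 'UR', 'UT')

def empty_value_for_VR(VR, use_none_for_text=False):
    if VR == 'SQ':           # empty sequence -> empty list, unconditionally
        return []
    if use_none_for_text:    # mirrors config.use_none_as_empty_text_VR_value
        return None
    if VR in TEXT_VRS:       # text VRs default to an empty string
        return ''
    return None              # every remaining VR uses None
```

With this shape, the round-trip above would print a zero-item sequence instead of raising on `len(None)`.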
**Your environment**
```
Darwin-18.6.0-x86_64-i386-64bit
Python 3.7.3 (default, Mar 27 2019, 09:23:15)
[Clang 10.0.1 (clang-1001.0.46.3)]
pydicom 1.4.0.dev0
``` | pydicom/pydicom | diff --git a/pydicom/tests/test_dataelem.py b/pydicom/tests/test_dataelem.py
index 5ff1611f0..f03e7f47d 100644
--- a/pydicom/tests/test_dataelem.py
+++ b/pydicom/tests/test_dataelem.py
@@ -503,6 +503,23 @@ class TestDataElement(object):
check_empty_binary_element(MultiValue(int, []))
check_empty_binary_element(None)
+ def test_empty_sequence_is_handled_as_array(self):
+ ds = Dataset()
+ ds.AcquisitionContextSequence = []
+ elem = ds['AcquisitionContextSequence']
+ assert bool(elem.value) is False
+ assert 0 == elem.VM
+ assert elem.value == []
+
+ fp = DicomBytesIO()
+ fp.is_little_endian = True
+ fp.is_implicit_VR = True
+ filewriter.write_dataset(fp, ds)
+ ds_read = dcmread(fp, force=True)
+ elem = ds_read['AcquisitionContextSequence']
+ assert 0 == elem.VM
+ assert elem.value == []
+
class TestRawDataElement(object):
"""Tests for dataelem.RawDataElement."""
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 2
} | 1.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
coverage==6.2
importlib-metadata==4.8.3
iniconfig==1.1.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
-e git+https://github.com/pydicom/pydicom.git@ee775c8a137cd8e0b69b46dc24c23648c31fe34c#egg=pydicom
pyparsing==3.1.4
pytest==7.0.1
pytest-cov==4.0.0
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: pydicom
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- coverage==6.2
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-cov==4.0.0
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/pydicom
| [
"pydicom/tests/test_dataelem.py::TestDataElement::test_empty_sequence_is_handled_as_array"
] | [] | [
"pydicom/tests/test_dataelem.py::TestDataElement::test_VM_1",
"pydicom/tests/test_dataelem.py::TestDataElement::test_VM_2",
"pydicom/tests/test_dataelem.py::TestDataElement::test_DSFloat_conversion",
"pydicom/tests/test_dataelem.py::TestDataElement::test_backslash",
"pydicom/tests/test_dataelem.py::TestDataElement::test_UID",
"pydicom/tests/test_dataelem.py::TestDataElement::test_keyword",
"pydicom/tests/test_dataelem.py::TestDataElement::test_retired",
"pydicom/tests/test_dataelem.py::TestDataElement::test_description_group_length",
"pydicom/tests/test_dataelem.py::TestDataElement::test_description_unknown_private",
"pydicom/tests/test_dataelem.py::TestDataElement::test_description_unknown",
"pydicom/tests/test_dataelem.py::TestDataElement::test_equality_standard_element",
"pydicom/tests/test_dataelem.py::TestDataElement::test_equality_private_element",
"pydicom/tests/test_dataelem.py::TestDataElement::test_equality_sequence_element",
"pydicom/tests/test_dataelem.py::TestDataElement::test_equality_not_rlement",
"pydicom/tests/test_dataelem.py::TestDataElement::test_equality_inheritance",
"pydicom/tests/test_dataelem.py::TestDataElement::test_equality_class_members",
"pydicom/tests/test_dataelem.py::TestDataElement::test_inequality_standard",
"pydicom/tests/test_dataelem.py::TestDataElement::test_inequality_sequence",
"pydicom/tests/test_dataelem.py::TestDataElement::test_hash",
"pydicom/tests/test_dataelem.py::TestDataElement::test_repeater_str",
"pydicom/tests/test_dataelem.py::TestDataElement::test_str_no_vr",
"pydicom/tests/test_dataelem.py::TestDataElement::test_repr_seq",
"pydicom/tests/test_dataelem.py::TestDataElement::test_getitem_raises",
"pydicom/tests/test_dataelem.py::TestDataElement::test_repval_large_elem",
"pydicom/tests/test_dataelem.py::TestDataElement::test_repval_large_vm",
"pydicom/tests/test_dataelem.py::TestDataElement::test_repval_strange_type",
"pydicom/tests/test_dataelem.py::TestDataElement::test_private_tag_in_repeater_range",
"pydicom/tests/test_dataelem.py::TestDataElement::test_private_repeater_tag",
"pydicom/tests/test_dataelem.py::TestDataElement::test_known_tags_with_UN_VR",
"pydicom/tests/test_dataelem.py::TestDataElement::test_unknown_tags_with_UN_VR",
"pydicom/tests/test_dataelem.py::TestDataElement::test_tag_with_long_value_UN_VR",
"pydicom/tests/test_dataelem.py::TestDataElement::test_empty_text_values[True-None]",
"pydicom/tests/test_dataelem.py::TestDataElement::test_empty_text_values[False-]",
"pydicom/tests/test_dataelem.py::TestDataElement::test_empty_binary_values",
"pydicom/tests/test_dataelem.py::TestRawDataElement::test_key_error",
"pydicom/tests/test_dataelem.py::TestRawDataElement::test_valid_tag",
"pydicom/tests/test_dataelem.py::TestRawDataElement::test_data_element_without_encoding",
"pydicom/tests/test_dataelem.py::TestRawDataElement::test_unknown_vr"
] | [] | MIT License | 5,716 | 760 | [
"pydicom/config.py",
"pydicom/dataelem.py"
] |
encode__starlette-701 | a92df1f61ede3b63ee9dcb315ce363c14242597b | 2019-11-01 15:15:31 | d62b22e9fd3c0be5ffc48a7e5fb90aafbde4589b | diff --git a/starlette/routing.py b/starlette/routing.py
index 7a0d7ef..a1e4619 100644
--- a/starlette/routing.py
+++ b/starlette/routing.py
@@ -336,15 +336,18 @@ class Mount(BaseRoute):
else:
# 'name' matches "<mount_name>:<child_name>".
remaining_name = name[len(self.name) + 1 :]
+ path_kwarg = path_params.get("path")
path_params["path"] = ""
- path, remaining_params = replace_params(
+ path_prefix, remaining_params = replace_params(
self.path_format, self.param_convertors, path_params
)
+ if path_kwarg is not None:
+ remaining_params["path"] = path_kwarg
for route in self.routes or []:
try:
url = route.url_path_for(remaining_name, **remaining_params)
return URLPath(
- path=path.rstrip("/") + str(url), protocol=url.protocol
+ path=path_prefix.rstrip("/") + str(url), protocol=url.protocol
)
except NoMatchFound:
pass
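In words, the hunk above stashes the caller's `path` keyword, renders the mount's own prefix with an empty `path`, then restores the keyword before delegating to child routes. A dependency-free sketch of that split (a hypothetical helper — the real method goes through `replace_params` with convertors and also strips a trailing slash from the prefix):

```python
# Sketch of the patched parameter handling: keep the caller's `path`
# kwarg alive for child routes instead of discarding it when the
# mount renders its own prefix.
def split_mount_params(path_format, convertor_keys, path_params):
    path_kwarg = path_params.get("path")
    path_params["path"] = ""            # the prefix itself ends at the mount
    prefix = path_format.format(
        **{k: path_params[k] for k in convertor_keys})
    remaining = {k: v for k, v in path_params.items()
                 if k not in convertor_keys}
    if path_kwarg is not None:
        remaining["path"] = path_kwarg  # hand the original path to children
    return prefix, remaining
```

With this, `url_path_for("mount:static", path="123")` can resolve the outer prefix to `/mount` while still passing `path="123"` down to the inner static route.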
| Mounts without a name strips the path arg from calls to child routes in url_path_for
I'm using an unnamed Mount which contains a mount to a StaticFiles instance, and when using "url_for" it is unable to find the matching file. After a bit of digging it seems to be directly related to `url_path_for` in Mount clearing the path arg whenever the mount does not have a name set.
https://github.com/encode/starlette/blob/master/starlette/routing.py#L337
Calling url_path_for in the child router works as expected. I'm unclear as to why you'd want to strip this arg just because the mount is not named.
```
app.mount('/stuff', app=Router(routes=[
Mount('/static', app=StaticFiles(directory='static'), name='static'),
]))
```
```
In [6]: app.url_path_for('static', path='css/bootstap.min.css')
---------------------------------------------------------------------------
NoMatchFound
...
In [7]: app.routes[0].routes[0].url_path_for('static', path='css/bootstap.min.css')
Out[7]: '/static/css/bootstap.min.css'
``` | encode/starlette | diff --git a/tests/test_routing.py b/tests/test_routing.py
index 9723e46..30645ef 100644
--- a/tests/test_routing.py
+++ b/tests/test_routing.py
@@ -400,3 +400,14 @@ def test_url_for_with_root_path():
"index": "https://www.example.org/sub_path/",
"submount": "https://www.example.org/sub_path/submount/",
}
+
+
+double_mount_routes = [
+ Mount("/mount", name="mount", routes=[Mount("/static", ..., name="static")],),
+]
+
+
+def test_url_for_with_double_mount():
+ app = Starlette(routes=double_mount_routes)
+ url = app.url_path_for("mount:static", path="123")
+ assert url == "/mount/static/123"
| {
"commit_name": "merge_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 1
} | 0.12 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[full]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"autoflake",
"black",
"isort",
"mypy",
"databases[sqlite]"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aiofiles==23.2.1
aiosqlite==0.19.0
autoflake==2.1.1
Babel==2.14.0
black==23.3.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
charset-normalizer==3.4.1
click==8.1.8
colorama==0.4.6
coverage==7.2.7
databases==0.8.0
exceptiongroup==1.2.2
ghp-import==2.1.0
graphene==3.4.3
graphql-core==3.2.6
graphql-relay==3.2.0
greenlet==3.1.1
idna==3.10
importlib-metadata==6.7.0
iniconfig==2.0.0
isort==5.11.5
itsdangerous==2.1.2
Jinja2==3.1.6
Markdown==3.4.4
MarkupSafe==2.1.5
mergedeep==1.3.4
mkdocs==1.5.3
mkdocs-material==9.2.7
mkdocs-material-extensions==1.2
mypy==1.4.1
mypy-extensions==1.0.0
packaging==24.0
paginate==0.5.7
pathspec==0.11.2
platformdirs==4.0.0
pluggy==1.2.0
pyflakes==3.0.1
Pygments==2.17.2
pymdown-extensions==10.2.1
pytest==7.4.4
pytest-cov==4.1.0
python-dateutil==2.9.0.post0
python-multipart==0.0.8
pytz==2025.2
PyYAML==6.0.1
pyyaml_env_tag==0.1
regex==2022.10.31
requests==2.31.0
six==1.17.0
SQLAlchemy==1.4.54
-e git+https://github.com/encode/starlette.git@a92df1f61ede3b63ee9dcb315ce363c14242597b#egg=starlette
tomli==2.0.1
typed-ast==1.5.5
typing_extensions==4.7.1
ujson==5.7.0
urllib3==2.0.7
watchdog==3.0.0
zipp==3.15.0
| name: starlette
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- aiofiles==23.2.1
- aiosqlite==0.19.0
- autoflake==2.1.1
- babel==2.14.0
- black==23.3.0
- charset-normalizer==3.4.1
- click==8.1.8
- colorama==0.4.6
- coverage==7.2.7
- databases==0.8.0
- exceptiongroup==1.2.2
- ghp-import==2.1.0
- graphene==3.4.3
- graphql-core==3.2.6
- graphql-relay==3.2.0
- greenlet==3.1.1
- idna==3.10
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- isort==5.11.5
- itsdangerous==2.1.2
- jinja2==3.1.6
- markdown==3.4.4
- markupsafe==2.1.5
- mergedeep==1.3.4
- mkdocs==1.5.3
- mkdocs-material==9.2.7
- mkdocs-material-extensions==1.2
- mypy==1.4.1
- mypy-extensions==1.0.0
- packaging==24.0
- paginate==0.5.7
- pathspec==0.11.2
- platformdirs==4.0.0
- pluggy==1.2.0
- pyflakes==3.0.1
- pygments==2.17.2
- pymdown-extensions==10.2.1
- pytest==7.4.4
- pytest-cov==4.1.0
- python-dateutil==2.9.0.post0
- python-multipart==0.0.8
- pytz==2025.2
- pyyaml==6.0.1
- pyyaml-env-tag==0.1
- regex==2022.10.31
- requests==2.31.0
- six==1.17.0
- sqlalchemy==1.4.54
- tomli==2.0.1
- typed-ast==1.5.5
- typing-extensions==4.7.1
- ujson==5.7.0
- urllib3==2.0.7
- watchdog==3.0.0
- zipp==3.15.0
prefix: /opt/conda/envs/starlette
| [
"tests/test_routing.py::test_url_for_with_double_mount"
] | [] | [
"tests/test_routing.py::test_router",
"tests/test_routing.py::test_route_converters",
"tests/test_routing.py::test_url_path_for",
"tests/test_routing.py::test_url_for",
"tests/test_routing.py::test_router_add_route",
"tests/test_routing.py::test_router_duplicate_path",
"tests/test_routing.py::test_router_add_websocket_route",
"tests/test_routing.py::test_protocol_switch",
"tests/test_routing.py::test_mount_urls",
"tests/test_routing.py::test_reverse_mount_urls",
"tests/test_routing.py::test_mount_at_root",
"tests/test_routing.py::test_host_routing",
"tests/test_routing.py::test_host_reverse_urls",
"tests/test_routing.py::test_subdomain_routing",
"tests/test_routing.py::test_subdomain_reverse_urls",
"tests/test_routing.py::test_url_for_with_root_path"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,717 | 257 | [
"starlette/routing.py"
] |
|
ucfopen__canvasapi-333 | e2a04a4731f69699762915cbc5563057849f7141 | 2019-11-01 16:01:40 | 62aa3eb8045f4a5f5edeb16fafad3fdf44bff624 | coveralls:
[](https://coveralls.io/builds/26860046)
Coverage remained the same at 100.0% when pulling **41c3d2a363854ce14997492e80b95193e0f89c3c on ctcuff:issue/332-list-assignments** into **e2a04a4731f69699762915cbc5563057849f7141 on ucfopen:develop**.
| diff --git a/canvasapi/assignment.py b/canvasapi/assignment.py
index 6fb5155..3637e8f 100644
--- a/canvasapi/assignment.py
+++ b/canvasapi/assignment.py
@@ -251,6 +251,26 @@ class Assignment(CanvasObject):
for extension in extension_list
]
+ def submissions_bulk_update(self, **kwargs):
+ """
+ Update the grading and comments on multiple student's assignment
+ submissions in an asynchronous job.
+
+ :calls: `POST /api/v1/courses/:course_id/assignments/:assignment_id/ \
+ submissions/update_grades \
+ <https://canvas.instructure.com/doc/api/submissions.html#method.submissions_api.bulk_update>`_
+
+ :rtype: :class:`canvasapi.progress.Progress`
+ """
+ response = self._requester.request(
+ "POST",
+ "courses/{}/assignments/{}/submissions/update_grades".format(
+ self.course_id, self.id
+ ),
+ _kwargs=combine_kwargs(**kwargs),
+ )
+ return Progress(self._requester, response.json())
+
def submit(self, submission, file=None, **kwargs):
"""
Makes a submission for an assignment.
@@ -295,26 +315,6 @@ class Assignment(CanvasObject):
return Submission(self._requester, response_json)
- def submissions_bulk_update(self, **kwargs):
- """
- Update the grading and comments on multiple student's assignment
- submissions in an asynchronous job.
-
- :calls: `POST /api/v1/courses/:course_id/assignments/:assignment_id/ \
- submissions/update_grades \
- <https://canvas.instructure.com/doc/api/submissions.html#method.submissions_api.bulk_update>`_
-
- :rtype: :class:`canvasapi.progress.Progress`
- """
- response = self._requester.request(
- "POST",
- "courses/{}/assignments/{}/submissions/update_grades".format(
- self.course_id, self.id
- ),
- _kwargs=combine_kwargs(**kwargs),
- )
- return Progress(self._requester, response.json())
-
def upload_to_submission(self, file, user="self", **kwargs):
"""
Upload a file to a submission.
@@ -356,40 +356,40 @@ class AssignmentGroup(CanvasObject):
def __str__(self):
return "{} ({})".format(self.name, self.id)
- def edit(self, **kwargs):
+ def delete(self, **kwargs):
"""
- Modify this assignment group.
+ Delete this assignment.
- :calls: `PUT /api/v1/courses/:course_id/assignment_groups/:assignment_group_id \
- <https://canvas.instructure.com/doc/api/assignment_groups.html#method.assignment_groups_api.update>`_
+ :calls: `DELETE /api/v1/courses/:course_id/assignment_groups/:assignment_group_id \
+ <https://canvas.instructure.com/doc/api/assignment_groups.html#method.assignment_groups_api.destroy>`_
:rtype: :class:`canvasapi.assignment.AssignmentGroup`
"""
response = self._requester.request(
- "PUT",
+ "DELETE",
"courses/{}/assignment_groups/{}".format(self.course_id, self.id),
_kwargs=combine_kwargs(**kwargs),
)
-
- if "name" in response.json():
- super(AssignmentGroup, self).set_attributes(response.json())
-
return AssignmentGroup(self._requester, response.json())
- def delete(self, **kwargs):
+ def edit(self, **kwargs):
"""
- Delete this assignment.
+ Modify this assignment group.
- :calls: `DELETE /api/v1/courses/:course_id/assignment_groups/:assignment_group_id \
- <https://canvas.instructure.com/doc/api/assignment_groups.html#method.assignment_groups_api.destroy>`_
+ :calls: `PUT /api/v1/courses/:course_id/assignment_groups/:assignment_group_id \
+ <https://canvas.instructure.com/doc/api/assignment_groups.html#method.assignment_groups_api.update>`_
:rtype: :class:`canvasapi.assignment.AssignmentGroup`
"""
response = self._requester.request(
- "DELETE",
+ "PUT",
"courses/{}/assignment_groups/{}".format(self.course_id, self.id),
_kwargs=combine_kwargs(**kwargs),
)
+
+ if "name" in response.json():
+ super(AssignmentGroup, self).set_attributes(response.json())
+
return AssignmentGroup(self._requester, response.json())
diff --git a/canvasapi/course.py b/canvasapi/course.py
index 3ce4983..b782347 100644
--- a/canvasapi/course.py
+++ b/canvasapi/course.py
@@ -4,6 +4,7 @@ import warnings
from six import python_2_unicode_compatible, text_type, string_types
+from canvasapi.assignment import Assignment, AssignmentGroup
from canvasapi.blueprint import BlueprintSubscription
from canvasapi.canvas_object import CanvasObject
from canvasapi.collaboration import Collaboration
@@ -733,6 +734,34 @@ class Course(CanvasObject):
_kwargs=combine_kwargs(**kwargs),
)
+ def get_assignments_for_group(self, assignment_group, **kwargs):
+ """
+ Returns a paginated list of assignments for the given assignment group
+
+ :calls: `GET /api/v1/courses/:course_id/assignment_groups/:assignment_group_id/assignments\
+ <https://canvas.instructure.com/doc/api/assignments.html#method.assignments_api.index>`_
+
+ :param assignment_group: The object or id of the assignment group
+        :type assignment_group: :class:`canvasapi.assignment.AssignmentGroup` or int
+
+ :rtype: :class:`canvasapi.paginated_list.PaginatedList` of
+ :class:`canvasapi.assignment.Assignment`
+ """
+
+ assignment_group_id = obj_or_id(
+ assignment_group, "assignment_group", (AssignmentGroup,)
+ )
+
+ return PaginatedList(
+ Assignment,
+ self._requester,
+ "GET",
+ "courses/{}/assignment_groups/{}/assignments".format(
+ self.id, assignment_group_id
+ ),
+ _kwargs=combine_kwargs(**kwargs),
+ )
+
def get_blueprint(self, template="default", **kwargs):
"""
Return the blueprint of a given ID.
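The new `get_assignments_for_group` leans on `obj_or_id` so callers may pass either an `AssignmentGroup` object or a bare integer id. A simplified, dependency-free sketch of that dispatch (the real helper lives in `canvasapi.util` and covers more cases; the `AssignmentGroup` class here is a minimal stand-in for the real model):

```python
# Simplified sketch of canvasapi's obj_or_id dispatch: unwrap a model
# object's .id, otherwise coerce the argument to an int.
def obj_or_id(parameter, param_name, object_types):
    if isinstance(parameter, object_types):
        return parameter.id
    try:
        return int(parameter)
    except (ValueError, TypeError):
        raise TypeError(
            "Parameter {} must be of type {} or int.".format(
                param_name,
                " or ".join(t.__name__ for t in object_types)))

class AssignmentGroup:          # minimal stand-in for the real model
    def __init__(self, id):
        self.id = id
```

Either `course.get_assignments_for_group(5)` or `course.get_assignments_for_group(group_obj)` then resolves to the same `/assignment_groups/5/assignments` endpoint.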
| List Assignments (AssignmentGroup)
**What resource needs additional coverage?**
List Assignments (AssignmentGroup)
**What endpoints need to be covered?**
https://canvas.instructure.com/doc/api/assignments.html#method.assignments_api.index
- [ ] List assignments (the second one)
We covered getting all the assignments in a course already, but need to handle the case for AssignmentGroups | ucfopen/canvasapi | diff --git a/tests/fixtures/course.json b/tests/fixtures/course.json
index f00bb64..06bd347 100644
--- a/tests/fixtures/course.json
+++ b/tests/fixtures/course.json
@@ -146,6 +146,25 @@
},
"status_code": 200
},
+ "get_assignments_for_group": {
+ "method": "GET",
+ "endpoint": "courses/1/assignment_groups/5/assignments",
+ "data": [
+ {
+ "id": 3,
+ "course_id": 1,
+ "name": "Assignment 1",
+ "description": "Do this assignment"
+ },
+ {
+ "id": 4,
+ "course_id": 1,
+ "name": "Assignment 2",
+ "description": "Do this assignment too"
+ }
+ ],
+ "status_code": 200
+ },
"get_blueprint": {
"method": "GET",
"endpoint": "courses/1/blueprint_templates/1",
diff --git a/tests/test_course.py b/tests/test_course.py
index 0d1a5ee..83773d8 100644
--- a/tests/test_course.py
+++ b/tests/test_course.py
@@ -723,6 +723,43 @@ class TestCourse(unittest.TestCase):
self.assertTrue(hasattr(assignment_group_by_obj, "course_id"))
self.assertEqual(assignment_group_by_obj.course_id, 1)
+ # get_assignments_for_group()
+ def test_get_assignments_for_group(self, m):
+ register_uris(
+ {
+ "course": ["get_assignments_for_group"],
+ "assignment": ["get_assignment_group"],
+ },
+ m,
+ )
+
+ assignment_group_obj = self.course.get_assignment_group(5)
+ response = self.course.get_assignments_for_group(5)
+ assignments = [assignment for assignment in response]
+
+ self.assertIsInstance(response, PaginatedList)
+
+ for assignment in assignments:
+ self.assertIsInstance(assignment, Assignment)
+ self.assertTrue(hasattr(assignment, "id"))
+ self.assertTrue(hasattr(assignment, "name"))
+ self.assertTrue(hasattr(assignment, "course_id"))
+ self.assertTrue(hasattr(assignment, "description"))
+ self.assertEqual(assignment.course_id, self.course.id)
+
+ response = self.course.get_assignments_for_group(assignment_group_obj)
+ assignments = [assignment for assignment in response]
+
+ self.assertIsInstance(response, PaginatedList)
+
+ for assignment in assignments:
+ self.assertIsInstance(assignment, Assignment)
+ self.assertTrue(hasattr(assignment, "id"))
+ self.assertTrue(hasattr(assignment, "name"))
+ self.assertTrue(hasattr(assignment, "course_id"))
+ self.assertTrue(hasattr(assignment, "description"))
+ self.assertEqual(assignment.course_id, self.course.id)
+
# list_assignment_groups()
def test_list_assignment_groups(self, m):
register_uris(
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 3
},
"num_modified_files": 2
} | 0.14 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"requests_mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/ucfopen/canvasapi.git@e2a04a4731f69699762915cbc5563057849f7141#egg=canvasapi
certifi==2025.1.31
charset-normalizer==3.4.1
coverage==7.8.0
exceptiongroup==1.2.2
execnet==2.1.1
idna==3.10
iniconfig==2.1.0
packaging==24.2
pluggy==1.5.0
pytest==8.3.5
pytest-cov==6.0.0
pytest-xdist==3.6.1
pytz==2025.2
requests==2.32.3
requests-mock==1.12.1
six==1.17.0
tomli==2.2.1
urllib3==2.3.0
| name: canvasapi
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- certifi==2025.1.31
- charset-normalizer==3.4.1
- coverage==7.8.0
- exceptiongroup==1.2.2
- execnet==2.1.1
- idna==3.10
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- pytest-cov==6.0.0
- pytest-xdist==3.6.1
- pytz==2025.2
- requests==2.32.3
- requests-mock==1.12.1
- six==1.17.0
- tomli==2.2.1
- urllib3==2.3.0
prefix: /opt/conda/envs/canvasapi
| [
"tests/test_course.py::TestCourse::test_get_assignments_for_group"
] | [] | [
"tests/test_course.py::TestCourse::test__str__",
"tests/test_course.py::TestCourse::test_add_grading_standards",
"tests/test_course.py::TestCourse::test_add_grading_standards_empty_list",
"tests/test_course.py::TestCourse::test_add_grading_standards_missing_name_key",
"tests/test_course.py::TestCourse::test_add_grading_standards_missing_value_key",
"tests/test_course.py::TestCourse::test_add_grading_standards_non_dict_list",
"tests/test_course.py::TestCourse::test_conclude",
"tests/test_course.py::TestCourse::test_create_assignment",
"tests/test_course.py::TestCourse::test_create_assignment_fail",
"tests/test_course.py::TestCourse::test_create_assignment_group",
"tests/test_course.py::TestCourse::test_create_assignment_overrides",
"tests/test_course.py::TestCourse::test_create_content_migration",
"tests/test_course.py::TestCourse::test_create_content_migration_bad_migration_type",
"tests/test_course.py::TestCourse::test_create_content_migration_migrator",
"tests/test_course.py::TestCourse::test_create_course_section",
"tests/test_course.py::TestCourse::test_create_discussion_topic",
"tests/test_course.py::TestCourse::test_create_epub_export",
"tests/test_course.py::TestCourse::test_create_external_feed",
"tests/test_course.py::TestCourse::test_create_external_tool",
"tests/test_course.py::TestCourse::test_create_folder",
"tests/test_course.py::TestCourse::test_create_group_category",
"tests/test_course.py::TestCourse::test_create_module",
"tests/test_course.py::TestCourse::test_create_module_fail",
"tests/test_course.py::TestCourse::test_create_page",
"tests/test_course.py::TestCourse::test_create_page_fail",
"tests/test_course.py::TestCourse::test_create_quiz",
"tests/test_course.py::TestCourse::test_create_quiz_fail",
"tests/test_course.py::TestCourse::test_create_rubric_association",
"tests/test_course.py::TestCourse::test_create_rubric_no_association",
"tests/test_course.py::TestCourse::test_create_rubric_with_association",
"tests/test_course.py::TestCourse::test_delete",
"tests/test_course.py::TestCourse::test_delete_external_feed",
"tests/test_course.py::TestCourse::test_edit_front_page",
"tests/test_course.py::TestCourse::test_enroll_user",
"tests/test_course.py::TestCourse::test_export_content",
"tests/test_course.py::TestCourse::test_get__enabled_features",
"tests/test_course.py::TestCourse::test_get_assignment",
"tests/test_course.py::TestCourse::test_get_assignment_group",
"tests/test_course.py::TestCourse::test_get_assignment_groups",
"tests/test_course.py::TestCourse::test_get_assignment_overrides",
"tests/test_course.py::TestCourse::test_get_assignments",
"tests/test_course.py::TestCourse::test_get_blueprint",
"tests/test_course.py::TestCourse::test_get_blueprint_default",
"tests/test_course.py::TestCourse::test_get_collaborations",
"tests/test_course.py::TestCourse::test_get_content_migration",
"tests/test_course.py::TestCourse::test_get_content_migrations",
"tests/test_course.py::TestCourse::test_get_course_level_assignment_data",
"tests/test_course.py::TestCourse::test_get_course_level_participation_data",
"tests/test_course.py::TestCourse::test_get_course_level_student_summary_data",
"tests/test_course.py::TestCourse::test_get_discussion_topic",
"tests/test_course.py::TestCourse::test_get_discussion_topics",
"tests/test_course.py::TestCourse::test_get_enrollments",
"tests/test_course.py::TestCourse::test_get_epub_export",
"tests/test_course.py::TestCourse::test_get_external_feeds",
"tests/test_course.py::TestCourse::test_get_external_tool",
"tests/test_course.py::TestCourse::test_get_external_tools",
"tests/test_course.py::TestCourse::test_get_feature_flag",
"tests/test_course.py::TestCourse::test_get_features",
"tests/test_course.py::TestCourse::test_get_file",
"tests/test_course.py::TestCourse::test_get_files",
"tests/test_course.py::TestCourse::test_get_folder",
"tests/test_course.py::TestCourse::test_get_folders",
"tests/test_course.py::TestCourse::test_get_full_discussion_topic",
"tests/test_course.py::TestCourse::test_get_grading_period",
"tests/test_course.py::TestCourse::test_get_grading_periods",
"tests/test_course.py::TestCourse::test_get_grading_standards",
"tests/test_course.py::TestCourse::test_get_group_categories",
"tests/test_course.py::TestCourse::test_get_groups",
"tests/test_course.py::TestCourse::test_get_licenses",
"tests/test_course.py::TestCourse::test_get_migration_systems",
"tests/test_course.py::TestCourse::test_get_module",
"tests/test_course.py::TestCourse::test_get_modules",
"tests/test_course.py::TestCourse::test_get_multiple_submissions",
"tests/test_course.py::TestCourse::test_get_multiple_submissions_grouped_false",
"tests/test_course.py::TestCourse::test_get_multiple_submissions_grouped_invalid",
"tests/test_course.py::TestCourse::test_get_multiple_submissions_grouped_true",
"tests/test_course.py::TestCourse::test_get_outcome_group",
"tests/test_course.py::TestCourse::test_get_outcome_groups_in_context",
"tests/test_course.py::TestCourse::test_get_outcome_import_status",
"tests/test_course.py::TestCourse::test_get_outcome_import_status_latest",
"tests/test_course.py::TestCourse::test_get_outcome_links_in_context",
"tests/test_course.py::TestCourse::test_get_outcome_result_rollups",
"tests/test_course.py::TestCourse::test_get_outcome_results",
"tests/test_course.py::TestCourse::test_get_page",
"tests/test_course.py::TestCourse::test_get_pages",
"tests/test_course.py::TestCourse::test_get_quiz",
"tests/test_course.py::TestCourse::test_get_quiz_fail",
"tests/test_course.py::TestCourse::test_get_quizzes",
"tests/test_course.py::TestCourse::test_get_recent_students",
"tests/test_course.py::TestCourse::test_get_root_outcome_group",
"tests/test_course.py::TestCourse::test_get_rubric",
"tests/test_course.py::TestCourse::test_get_rubrics",
"tests/test_course.py::TestCourse::test_get_section",
"tests/test_course.py::TestCourse::test_get_sections",
"tests/test_course.py::TestCourse::test_get_settings",
"tests/test_course.py::TestCourse::test_get_single_grading_standard",
"tests/test_course.py::TestCourse::test_get_submission",
"tests/test_course.py::TestCourse::test_get_tabs",
"tests/test_course.py::TestCourse::test_get_user",
"tests/test_course.py::TestCourse::test_get_user_id_type",
"tests/test_course.py::TestCourse::test_get_user_in_a_course_level_assignment_data",
"tests/test_course.py::TestCourse::test_get_user_in_a_course_level_messaging_data",
"tests/test_course.py::TestCourse::test_get_user_in_a_course_level_participation_data",
"tests/test_course.py::TestCourse::test_get_users",
"tests/test_course.py::TestCourse::test_import_outcome_binary",
"tests/test_course.py::TestCourse::test_import_outcome_filepath",
"tests/test_course.py::TestCourse::test_import_outcome_id",
"tests/test_course.py::TestCourse::test_import_outcome_ioerror",
"tests/test_course.py::TestCourse::test_list_assignment_groups",
"tests/test_course.py::TestCourse::test_list_blueprint_subscriptions",
"tests/test_course.py::TestCourse::test_list_content_exports",
"tests/test_course.py::TestCourse::test_list_external_feeds",
"tests/test_course.py::TestCourse::test_list_files",
"tests/test_course.py::TestCourse::test_list_folders",
"tests/test_course.py::TestCourse::test_list_gradeable_students",
"tests/test_course.py::TestCourse::test_list_group_categories",
"tests/test_course.py::TestCourse::test_list_groups",
"tests/test_course.py::TestCourse::test_list_multiple_submissions",
"tests/test_course.py::TestCourse::test_list_rubrics",
"tests/test_course.py::TestCourse::test_list_sections",
"tests/test_course.py::TestCourse::test_list_submissions",
"tests/test_course.py::TestCourse::test_list_tabs",
"tests/test_course.py::TestCourse::test_mark_submission_as_read",
"tests/test_course.py::TestCourse::test_mark_submission_as_unread",
"tests/test_course.py::TestCourse::test_preview_html",
"tests/test_course.py::TestCourse::test_remove_usage_rights",
"tests/test_course.py::TestCourse::test_reorder_pinned_topics",
"tests/test_course.py::TestCourse::test_reorder_pinned_topics_comma_separated_string",
"tests/test_course.py::TestCourse::test_reorder_pinned_topics_invalid_input",
"tests/test_course.py::TestCourse::test_reorder_pinned_topics_tuple",
"tests/test_course.py::TestCourse::test_reset",
"tests/test_course.py::TestCourse::test_set_extensions_empty_list",
"tests/test_course.py::TestCourse::test_set_extensions_missing_key",
"tests/test_course.py::TestCourse::test_set_extensions_non_dicts",
"tests/test_course.py::TestCourse::test_set_extensions_not_list",
"tests/test_course.py::TestCourse::test_set_quiz_extensions",
"tests/test_course.py::TestCourse::test_set_usage_rights",
"tests/test_course.py::TestCourse::test_show_content_export",
"tests/test_course.py::TestCourse::test_show_front_page",
"tests/test_course.py::TestCourse::test_submissions_bulk_update",
"tests/test_course.py::TestCourse::test_submit_assignment",
"tests/test_course.py::TestCourse::test_submit_assignment_fail",
"tests/test_course.py::TestCourse::test_update",
"tests/test_course.py::TestCourse::test_update_assignment_overrides",
"tests/test_course.py::TestCourse::test_update_settings",
"tests/test_course.py::TestCourse::test_update_submission",
"tests/test_course.py::TestCourse::test_update_tab",
"tests/test_course.py::TestCourse::test_upload",
"tests/test_course.py::TestCourseNickname::test__str__",
"tests/test_course.py::TestCourseNickname::test_remove"
] | [] | MIT License | 5,718 | 1,449 | [
"canvasapi/assignment.py",
"canvasapi/course.py"
] |
cekit__cekit-634 | 74df8c5258a3e843a82aa9ffa4b7c94781ee3433 | 2019-11-04 13:56:24 | 378208f15f138999b18f3ade603a755a77e8de8c | diff --git a/cekit/descriptor/osbs.py b/cekit/descriptor/osbs.py
index dcd23b5..dfee1da 100644
--- a/cekit/descriptor/osbs.py
+++ b/cekit/descriptor/osbs.py
@@ -98,7 +98,6 @@ class Configuration(Descriptor):
self.schema = configuration_schema
self.descriptor_path = descriptor_path
super(Configuration, self).__init__(descriptor)
- self.skip_merging = ['container', 'container_file']
if 'container' in self and 'container_file' in self:
raise CekitError('You cannot specify container and container_file together!')
diff --git a/cekit/descriptor/resource.py b/cekit/descriptor/resource.py
index 755cd73..a694c93 100644
--- a/cekit/descriptor/resource.py
+++ b/cekit/descriptor/resource.py
@@ -18,7 +18,7 @@ from cekit.config import Config
from cekit.crypto import SUPPORTED_HASH_ALGORITHMS, check_sum
from cekit.descriptor import Descriptor
from cekit.errors import CekitError
-from cekit.tools import get_brew_url, Map
+from cekit.tools import get_brew_url, Map, Chdir
logger = logging.getLogger('cekit')
config = Config()
@@ -437,10 +437,15 @@ class _GitResource(Resource):
return os.path.basename(descriptor.get('git', {}).get('url')).split(".", 1)[0]
def _copy_impl(self, target):
- cmd = ['git', 'clone', '--depth', '1', self.git.url, target, '-b',
- self.git.ref]
- logger.debug("Running '{}'".format(' '.join(cmd)))
+ cmd = ['git', 'clone', self.git.url, target]
+ logger.debug("Cloning Git repository: '{}'".format(' '.join(cmd)))
subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+
+ with Chdir(target):
+ cmd = ['git', 'checkout', self.git.ref]
+ logger.debug("Checking out '{}' ref: '{}'".format(self.git.ref, ' '.join(cmd)))
+ subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+
return target
| the git "ref:" argument should accept a valid git reference
**Describe the bug**
cekit permits specifying a git ref for a module, but this is not treated as a general git ref underneath: it must actually be a tag or branch name (a valid argument to `git clone -b`). It would be nicer if this could be any real git ref (e.g. be123a80c5497a595e2a997a0f1dbe14c9e3442e)
**To reproduce**
Attempt to set the git ref for an external module to a ref that is not a branch or tag name.
**Expected behavior**
Remote repository cloned and local branch set up to reference the ref specified, even if it was a sha1-id.
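
One way to support this is to clone without `-b` and check the ref out in a second step — which is what the patch above does. A minimal sketch, assuming a hypothetical `clone_at_ref` helper (not CEKit's actual API):

```python
import subprocess


def clone_at_ref(url, ref, target):
    """Clone ``url`` into ``target`` and check out ``ref``.

    Cloning without ``-b`` and checking out afterwards accepts any
    commit-ish -- a branch, a tag, or a sha1-id -- at the cost of
    fetching full history instead of a ``--depth 1`` clone.
    """
    subprocess.check_output(["git", "clone", url, target],
                            stderr=subprocess.STDOUT)
    subprocess.check_output(["git", "checkout", ref],
                            cwd=target, stderr=subprocess.STDOUT)
    return target
```

Checking out a sha1-id leaves the clone in detached-HEAD state; the trade-off versus `git clone --depth 1 -b <ref>` is that the full history must be fetched.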
- CEKit version: 3.4.0 | cekit/cekit | diff --git a/tests/test_integ_builder_osbs.py b/tests/test_integ_builder_osbs.py
index 09a181f..a717f3d 100644
--- a/tests/test_integ_builder_osbs.py
+++ b/tests/test_integ_builder_osbs.py
@@ -59,10 +59,13 @@ def run_cekit(cwd,
return result
-def run_osbs(descriptor, image_dir, mocker, return_code=0, build_command=None):
+def run_osbs(descriptor, image_dir, mocker, return_code=0, build_command=None, general_command=None):
if build_command is None:
build_command = ['build', 'osbs']
+ if general_command is None:
+ general_command = ['--redhat']
+
# We are mocking it, so do not require it at test time
mocker.patch('cekit.builders.osbs.OSBSBuilder.dependencies', return_value={})
mocker.patch('cekit.builders.osbs.OSBSBuilder._wait_for_osbs_task')
@@ -82,14 +85,10 @@ def run_osbs(descriptor, image_dir, mocker, return_code=0, build_command=None):
b"UUU"
])
- with open(os.path.join(image_dir, 'config'), 'w') as fd:
- fd.write("[common]\n")
- fd.write("redhat = True")
-
with open(os.path.join(image_dir, 'image.yaml'), 'w') as fd:
yaml.dump(descriptor, fd, default_flow_style=False)
- return run_cekit(image_dir, ['-v',
+ return run_cekit(image_dir, general_command + ['-v',
'--work-dir', image_dir,
'--config', 'config'] + build_command,
return_code=return_code)
@@ -521,3 +520,37 @@ def test_osbs_builder_with_fetch_artifacts_file_removal(tmpdir, mocker, caplog):
assert not os.path.exists(os.path.join(str(tmpdir), 'osbs', 'repo', 'fetch-artifacts-url.yaml'))
assert "Removing old 'fetch-artifacts-url.yaml' file" in caplog.text
+
+
[email protected]('flag', [[], ['--redhat']])
+def test_osbs_builder_container_yaml_existence(tmpdir, mocker, caplog, flag):
+ """
+ Make sure that the osbs section is properly merged.
+ The evidence is that the container.yaml file is generated.
+
+ https://github.com/cekit/cekit/issues/631
+ """
+
+ caplog.set_level(logging.DEBUG, logger="cekit")
+
+ mocker.patch('cekit.tools.decision', return_value=True)
+ mocker.patch('cekit.descriptor.resource.urlopen')
+ mocker.patch('cekit.generator.osbs.get_brew_url', return_value='http://random.url/path')
+ mocker.patch.object(subprocess, 'check_output')
+ mocker.patch('cekit.builders.osbs.DistGit.push')
+
+ tmpdir.mkdir('osbs').mkdir('repo')
+
+ with Chdir(os.path.join(str(tmpdir), 'osbs', 'repo')):
+ subprocess.call(["git", "init"])
+ subprocess.call(["touch", "file"])
+ subprocess.call(["git", "add", "file"])
+ subprocess.call(["git", "commit", "-m", "Dummy"])
+
+ descriptor = image_descriptor.copy()
+
+ descriptor["osbs"]["configuration"] = {'container': {'compose': {'pulp_repos': True}}}
+
+ run_osbs(descriptor, str(tmpdir), mocker, general_command=flag)
+
+ assert os.path.exists(os.path.join(str(tmpdir), 'osbs', 'repo', 'container.yaml'))
diff --git a/tests/test_unit_resource.py b/tests/test_unit_resource.py
index 3092044..2a9bd12 100644
--- a/tests/test_unit_resource.py
+++ b/tests/test_unit_resource.py
@@ -7,6 +7,12 @@ from cekit.descriptor import Image, Overrides
from cekit.descriptor.resource import create_resource
from cekit.config import Config
from cekit.errors import CekitError
+from cekit.tools import Chdir
+
+try:
+ from unittest.mock import call
+except ImportError:
+ from mock import call
config = Config()
@@ -21,13 +27,18 @@ def setup_function(function):
def test_repository_dir_is_constructed_properly(mocker):
mocker.patch('subprocess.check_output')
mocker.patch('os.path.isdir', ret='True')
+ mocker.patch('cekit.descriptor.resource.Chdir', autospec=True)
+
res = create_resource({'git': {'url': 'http://host.com/url/repo.git', 'ref': 'ref'}})
+
assert res.copy('dir') == 'dir/repo'
def test_repository_dir_uses_name_if_defined(mocker):
mocker.patch('subprocess.check_output')
mocker.patch('os.path.isdir', ret='True')
+ mocker.patch('cekit.descriptor.resource.Chdir', autospec=True)
+
res = create_resource(
{'name': 'some-id', 'git': {'url': 'http://host.com/url/repo.git', 'ref': 'ref'}})
assert res.copy('dir') == 'dir/some-id'
@@ -36,6 +47,8 @@ def test_repository_dir_uses_name_if_defined(mocker):
def test_repository_dir_uses_target_if_defined(mocker):
mocker.patch('subprocess.check_output')
mocker.patch('os.path.isdir', ret='True')
+ mocker.patch('cekit.descriptor.resource.Chdir', autospec=True)
+
res = create_resource(
{'target': 'some-name', 'git': {'url': 'http://host.com/url/repo.git', 'ref': 'ref'}})
assert res.copy('dir') == 'dir/some-name'
@@ -44,17 +57,14 @@ def test_repository_dir_uses_target_if_defined(mocker):
def test_git_clone(mocker):
mock = mocker.patch('subprocess.check_output')
mocker.patch('os.path.isdir', ret='True')
+ mocker.patch('cekit.descriptor.resource.Chdir', autospec=True)
+
res = create_resource({'git': {'url': 'http://host.com/url/path.git', 'ref': 'ref'}})
res.copy('dir')
- mock.assert_called_with(['git',
- 'clone',
- '--depth',
- '1',
- 'http://host.com/url/path.git',
- 'dir/path',
- '-b',
- 'ref'],
- stderr=-2)
+ mock.assert_has_calls([
+ call(['git', 'clone', 'http://host.com/url/path.git', 'dir/path'], stderr=-2),
+ call(['git', 'checkout', 'ref'], stderr=-2)
+ ])
def get_res(mocker):
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_git_commit_hash",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 2
} | 3.5 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": null,
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | -e git+https://github.com/cekit/cekit.git@74df8c5258a3e843a82aa9ffa4b7c94781ee3433#egg=cekit
click==8.1.8
colorlog==6.9.0
coverage==7.8.0
docopt==0.6.2
exceptiongroup==1.2.2
execnet==2.1.1
iniconfig==2.1.0
Jinja2==3.1.6
MarkupSafe==3.0.2
packaging==24.2
pluggy==1.5.0
pykwalify==1.8.0
pytest==8.3.5
pytest-asyncio==0.26.0
pytest-cov==6.0.0
pytest-mock==3.14.0
pytest-xdist==3.6.1
python-dateutil==2.9.0.post0
PyYAML==6.0.2
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.12
six==1.17.0
tomli==2.2.1
typing_extensions==4.13.0
| name: cekit
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- click==8.1.8
- colorlog==6.9.0
- coverage==7.8.0
- docopt==0.6.2
- exceptiongroup==1.2.2
- execnet==2.1.1
- iniconfig==2.1.0
- jinja2==3.1.6
- markupsafe==3.0.2
- packaging==24.2
- pluggy==1.5.0
- pykwalify==1.8.0
- pytest==8.3.5
- pytest-asyncio==0.26.0
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- pytest-xdist==3.6.1
- python-dateutil==2.9.0.post0
- pyyaml==6.0.2
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.12
- six==1.17.0
- tomli==2.2.1
- typing-extensions==4.13.0
prefix: /opt/conda/envs/cekit
| [
"tests/test_unit_resource.py::test_repository_dir_is_constructed_properly",
"tests/test_unit_resource.py::test_repository_dir_uses_name_if_defined",
"tests/test_unit_resource.py::test_repository_dir_uses_target_if_defined",
"tests/test_unit_resource.py::test_git_clone"
] | [
"tests/test_integ_builder_osbs.py::test_osbs_builder_with_asume_yes",
"tests/test_integ_builder_osbs.py::test_osbs_builder_with_push_with_sync_only",
"tests/test_integ_builder_osbs.py::test_osbs_builder_kick_build_without_push",
"tests/test_integ_builder_osbs.py::test_osbs_builder_kick_build_with_push",
"tests/test_integ_builder_osbs.py::test_osbs_builder_add_help_file",
"tests/test_integ_builder_osbs.py::test_osbs_builder_add_extra_files",
"tests/test_integ_builder_osbs.py::test_osbs_builder_add_extra_files_from_custom_dir",
"tests/test_integ_builder_osbs.py::test_osbs_builder_extra_default",
"tests/test_integ_builder_osbs.py::test_osbs_builder_add_files_to_dist_git_without_dotgit_directory",
"tests/test_integ_builder_osbs.py::test_osbs_builder_with_koji_target_based_on_branch",
"tests/test_integ_builder_osbs.py::test_osbs_builder_with_koji_target_in_descriptor",
"tests/test_integ_builder_osbs.py::test_osbs_builder_with_fetch_artifacts_file_creation",
"tests/test_integ_builder_osbs.py::test_osbs_builder_with_fetch_artifacts_file_removal",
"tests/test_integ_builder_osbs.py::test_osbs_builder_container_yaml_existence[flag0]",
"tests/test_integ_builder_osbs.py::test_osbs_builder_container_yaml_existence[flag1]"
] | [
"tests/test_unit_resource.py::test_fetching_with_ssl_verify",
"tests/test_unit_resource.py::test_fetching_disable_ssl_verify",
"tests/test_unit_resource.py::test_fetching_bad_status_code",
"tests/test_unit_resource.py::test_fetching_file_exists_but_used_as_is",
"tests/test_unit_resource.py::test_fetching_file_exists_fetched_again",
"tests/test_unit_resource.py::test_fetching_file_exists_no_hash_fetched_again",
"tests/test_unit_resource.py::test_generated_url_without_cacher",
"tests/test_unit_resource.py::test_resource_verify",
"tests/test_unit_resource.py::test_generated_url_with_cacher",
"tests/test_unit_resource.py::test_path_resource_absolute",
"tests/test_unit_resource.py::test_path_resource_relative",
"tests/test_unit_resource.py::test_path_local_existing_resource_no_cacher_use",
"tests/test_unit_resource.py::test_path_local_non_existing_resource_with_cacher_use",
"tests/test_unit_resource.py::test_url_resource_download_cleanup_after_failure",
"tests/test_unit_resource.py::test_copy_plain_resource_with_cacher",
"tests/test_unit_resource.py::test_copy_plain_resource_from_brew",
"tests/test_unit_resource.py::test_overide_resource_remove_chksum"
] | [] | MIT License | 5,738 | 529 | [
"cekit/descriptor/osbs.py",
"cekit/descriptor/resource.py"
] |
|
lexibank__pylexibank-152 | 7a360022dbae0311095feb903bb6924e1b659c3a | 2019-11-04 14:44:29 | 7a360022dbae0311095feb903bb6924e1b659c3a | xrotwang: @tresoldi I added tests, so backwards compat should be provided for. Merging now.
codecov-io: # [Codecov](https://codecov.io/gh/lexibank/pylexibank/pull/152?src=pr&el=h1) Report
> Merging [#152](https://codecov.io/gh/lexibank/pylexibank/pull/152?src=pr&el=desc) into [master](https://codecov.io/gh/lexibank/pylexibank/commit/7a360022dbae0311095feb903bb6924e1b659c3a?src=pr&el=desc) will **increase** coverage by `0.02%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/lexibank/pylexibank/pull/152?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #152 +/- ##
==========================================
+ Coverage 97.34% 97.36% +0.02%
==========================================
Files 26 26
Lines 1467 1481 +14
==========================================
+ Hits 1428 1442 +14
Misses 39 39
```
| [Impacted Files](https://codecov.io/gh/lexibank/pylexibank/pull/152?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/pylexibank/dataset.py](https://codecov.io/gh/lexibank/pylexibank/pull/152/diff?src=pr&el=tree#diff-c3JjL3B5bGV4aWJhbmsvZGF0YXNldC5weQ==) | `98.13% <100%> (+0.17%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/lexibank/pylexibank/pull/152?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/lexibank/pylexibank/pull/152?src=pr&el=footer). Last update [7a36002...070af60](https://codecov.io/gh/lexibank/pylexibank/pull/152?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/src/pylexibank/dataset.py b/src/pylexibank/dataset.py
index c2c596b..078b376 100644
--- a/src/pylexibank/dataset.py
+++ b/src/pylexibank/dataset.py
@@ -139,19 +139,37 @@ class Dataset(BaseDataset):
- `kw` may be used to pass any context info to the tokenizer, when called
explicitly.
"""
- profile = self.dir / 'etc' / 'orthography.tsv'
- if profile.exists():
- profile = Profile.from_file(str(profile), form='NFC')
- default_spec = list(next(iter(profile.graphemes.values())).keys())
+ def get_tokenizer(profile_):
+ profile_ = Profile.from_file(str(profile_), form='NFC')
+ default_spec = list(next(iter(profile_.graphemes.values())).keys())
for grapheme in ['^', '$']:
- if grapheme not in profile.graphemes:
- profile.graphemes[grapheme] = {k: None for k in default_spec}
- profile.tree = Tree(list(profile.graphemes.keys()))
- tokenizer = Tokenizer(profile=profile, errors_replace=lambda c: '<{0}>'.format(c))
+ if grapheme not in profile_.graphemes:
+ profile_.graphemes[grapheme] = {k: None for k in default_spec}
+ profile_.tree = Tree(list(profile_.graphemes.keys()))
+ return Tokenizer(profile=profile_, errors_replace=lambda c: '<{0}>'.format(c))
+
+ tokenizers = {}
+ profile = self.etc_dir / 'orthography.tsv'
+ profile_dir = self.etc_dir / 'orthography'
+ if profile.exists():
+ tokenizers[None] = get_tokenizer(profile)
+ if profile_dir.exists() and profile_dir.is_dir():
+ for profile in profile_dir.glob('*.tsv'):
+ tokenizers[profile.stem] = get_tokenizer(profile)
+ if tokenizers:
def _tokenizer(item, string, **kw):
kw.setdefault("column", "IPA")
kw.setdefault("separator", " + ")
+ profile = kw.pop('profile', None)
+ if profile:
+ tokenizer = tokenizers[profile]
+ elif isinstance(item, dict) \
+ and 'Language_ID' in item \
+ and item['Language_ID'] in tokenizers:
+ tokenizer = tokenizers[item['Language_ID']]
+ else:
+ tokenizer = tokenizers[None]
return tokenizer(unicodedata.normalize('NFC', '^' + string + '$'), **kw).split()
return _tokenizer
| Support multiple profiles
Do we have a standard way of supporting multiple orthographic profiles? Or should we try to always use a single file, even when the code and the profile become more complex?
My question concerns mostly WOLD, with 41 very different profiles (one per language). I couldn't find a non-hacky way (involving lambdas and the like) to tell pylexibank "use the language-id of each row as a tokenizer-id".
The `_tokenizer` in Dataset does accept a column name via `**kwargs` (https://github.com/lexibank/pylexibank/blob/021f37481d37b472715033033773543273ed301f/src/pylexibank/dataset.py#L152), but there are conflicts when a grapheme is missing from one or more profiles (e.g., in some language "ch" is just "c"+"h", and even redundantly listing all missing graphemes in all languages is not enough, as it can lead to inconsistencies, like consuming characters in the stream that are not supposed to be consumed).
Besides, it doesn't seem that LexibankWriter allows me to pass such information (cf. https://github.com/lexibank/pylexibank/blob/021f37481d37b472715033033773543273ed301f/src/pylexibank/cldf.py#L93), as it only takes `item` (the **kw passed to `add_form`) and `string` (the value to be segmented).
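
A lightweight way around this is to dispatch on the row itself: keep one tokenizer per language plus a shared fallback, and pick the right one from `Language_ID`. A minimal sketch — `make_tokenizer` and its signature are hypothetical, not pylexibank's API:

```python
def make_tokenizer(tokenizers, default=None):
    """Dispatch tokenization per language, with a shared fallback.

    ``tokenizers`` maps language ids (plus the ``default`` key for the
    fallback) to callables that turn a form string into a segment list.
    """
    def _tokenizer(item, string, profile=None):
        lang = item.get("Language_ID") if isinstance(item, dict) else None
        if profile is not None:
            tok = tokenizers[profile]          # explicit override wins
        elif lang is not None and lang in tokenizers:
            tok = tokenizers[lang]             # per-language profile
        else:
            tok = tokenizers[default]          # shared fallback
        return tok(string)
    return _tokenizer
```

With this, `Segments=tokenizer(row, row["Form"])` would select the per-language profile automatically, falling back to a default profile when none exists for the language.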
Currently, while refining WOLD, I'm manually loading all profiles in a `tokenizer` dictionary, inside `cmd_makecldf()`, and calling it directly:
```python
for row in progressbar(lexemes_rows):
    args.writer.add_form_with_segments(
        Language_ID=row["Language_ID"],
        Parameter_ID=row["Parameter_ID"],
        Form=row["Form"],
        Value=row["Form"],
        Segments=tokenizer[row["Language_ID"]]({}, row["Form"]),
        (...)
```

Note that this is **not** urgent: CLICS3 won't depend on this (we don't need segmented data, and I'll get back to it after we resubmit). | lexibank/pylexibank | diff --git a/tests/test_dataset.py b/tests/test_dataset.py
Note that this is **not** urgent: CLICS3 won't depend on this (we don't need segmented data, and I'll get back to it after we resubmit). | lexibank/pylexibank | diff --git a/tests/test_dataset.py b/tests/test_dataset.py
index 4d9f66b..a656b42 100644
--- a/tests/test_dataset.py
+++ b/tests/test_dataset.py
@@ -1,12 +1,11 @@
import sys
import json
-import argparse
import importlib
+import pathlib
import pytest
from clldutils.path import sys_path
from cldfbench.dataset import NOOP
-from pyclts import TranscriptionSystem
from pylexibank import Lexeme, Dataset
from pylexibank.dataset import Unmapped
@@ -44,6 +43,26 @@ def test_invalid_dataset():
Test()
+def test_Dataset_tokenizer(tmpdir):
+ etc = pathlib.Path(str(tmpdir)).joinpath('etc')
+ etc.mkdir()
+ orth_dir = etc.joinpath('orthography')
+ orth_dir.mkdir()
+ orth_dir.joinpath('l1.tsv').write_text('Grapheme\tIPA\na\tb')
+
+ class DS(Dataset):
+ id = '1'
+ dir = etc.parent
+
+ ds = DS()
+ assert ds.tokenizer({}, 'a', profile='l1') == ['b']
+ assert ds.tokenizer({'Language_ID': 'l1'}, 'a') == ['b']
+
+ etc.joinpath('orthography.tsv').write_text('Grapheme\tIPA\na\tc')
+ ds = DS()
+ assert ds.tokenizer({}, 'a') == ['c']
+
+
def test_BaseDataset(mocker, repos):
class TestDataset(Dataset):
dir = repos / 'datasets' / 'test_dataset'
| {
"commit_name": "merge_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 1
},
"num_modified_files": 1
} | 2.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | appdirs==1.4.4
attrs==25.3.0
babel==2.17.0
backports.tarfile==1.2.0
beautifulsoup4==4.13.3
bibtexparser==2.0.0b8
bs4==0.0.2
cdstarcat==1.5.0
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
cldfbench==1.14.1
cldfcatalog==1.6.0
cldfzenodo==2.1.2
clldutils==3.24.2
colorama==0.4.6
colorlog==6.9.0
commonnexus==1.9.2
coverage==7.8.0
cryptography==44.0.2
csvw==3.5.1
docopt==0.6.2
docutils==0.21.2
et_xmlfile==2.0.0
exceptiongroup==1.2.2
flake8==7.2.0
gitdb==4.0.12
GitPython==3.1.44
greenlet==3.1.1
id==1.5.0
idna==3.10
importlib_metadata==8.6.1
iniconfig==2.1.0
isodate==0.7.2
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.1.0
jeepney==0.9.0
jmespath==1.0.1
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
keyring==25.6.0
language-tags==1.2.0
latexcodec==3.0.0
linglit==1.7.1
lingpy==2.6.13
lxml==5.3.1
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==3.0.2
mccabe==0.7.0
mdurl==0.1.2
more-itertools==10.6.0
nameparser==1.1.3
networkx==3.2.1
newick==1.10.0
nh3==0.2.21
numpy==2.0.2
openpyxl==3.1.5
packaging==24.2
platformdirs==4.3.7
pluggy==1.5.0
prompt_toolkit==3.0.50
pybtex==0.24.0
pycdstar==1.1.0
pycldf==1.41.0
pyclts==3.2.0
pycodestyle==2.13.0
pyconcepticon==3.1.0
pycountry==24.6.1
pycparser==2.22
pyflakes==3.3.2
pyglottolog==3.14.0
Pygments==2.19.1
pyigt==2.2.0
pylatexenc==2.10
-e git+https://github.com/lexibank/pylexibank.git@7a360022dbae0311095feb903bb6924e1b659c3a#egg=pylexibank
pyparsing==3.2.3
pytest==8.3.5
pytest-cov==6.0.0
python-dateutil==2.9.0.post0
python-frontmatter==1.1.0
python-nexus==2.9.0
PyYAML==6.0.2
RapidFuzz==3.12.2
rdflib==7.1.4
readme_renderer==44.0
referencing==0.36.2
regex==2024.11.6
requests==2.32.3
requests-toolbelt==1.0.0
rfc3986==1.5.0
rich==14.0.0
rpds-py==0.24.0
SecretStorage==3.3.3
segments==2.3.0
six==1.17.0
smmap==5.0.2
soupsieve==2.6
SQLAlchemy==1.4.54
tabulate==0.9.0
termcolor==3.0.0
TexSoup==0.3.1
thefuzz==0.22.1
tomli==2.2.1
tqdm==4.67.1
twine==6.1.0
typing_extensions==4.13.0
Unidecode==1.3.8
uritemplate==4.1.1
urllib3==2.3.0
wcwidth==0.2.13
Whoosh==2.7.4
xlrd==2.0.1
zenodoclient==0.5.1
zipp==3.21.0
| name: pylexibank
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- appdirs==1.4.4
- attrs==25.3.0
- babel==2.17.0
- backports-tarfile==1.2.0
- beautifulsoup4==4.13.3
- bibtexparser==2.0.0b8
- bs4==0.0.2
- cdstarcat==1.5.0
- certifi==2025.1.31
- cffi==1.17.1
- charset-normalizer==3.4.1
- cldfbench==1.14.1
- cldfcatalog==1.6.0
- cldfzenodo==2.1.2
- clldutils==3.24.2
- colorama==0.4.6
- colorlog==6.9.0
- commonnexus==1.9.2
- coverage==7.8.0
- cryptography==44.0.2
- csvw==3.5.1
- docopt==0.6.2
- docutils==0.21.2
- et-xmlfile==2.0.0
- exceptiongroup==1.2.2
- flake8==7.2.0
- gitdb==4.0.12
- gitpython==3.1.44
- greenlet==3.1.1
- id==1.5.0
- idna==3.10
- importlib-metadata==8.6.1
- iniconfig==2.1.0
- isodate==0.7.2
- jaraco-classes==3.4.0
- jaraco-context==6.0.1
- jaraco-functools==4.1.0
- jeepney==0.9.0
- jmespath==1.0.1
- jsonschema==4.23.0
- jsonschema-specifications==2024.10.1
- keyring==25.6.0
- language-tags==1.2.0
- latexcodec==3.0.0
- linglit==1.7.1
- lingpy==2.6.13
- lxml==5.3.1
- markdown==3.7
- markdown-it-py==3.0.0
- markupsafe==3.0.2
- mccabe==0.7.0
- mdurl==0.1.2
- more-itertools==10.6.0
- nameparser==1.1.3
- networkx==3.2.1
- newick==1.10.0
- nh3==0.2.21
- numpy==2.0.2
- openpyxl==3.1.5
- packaging==24.2
- platformdirs==4.3.7
- pluggy==1.5.0
- prompt-toolkit==3.0.50
- pybtex==0.24.0
- pycdstar==1.1.0
- pycldf==1.41.0
- pyclts==3.2.0
- pycodestyle==2.13.0
- pyconcepticon==3.1.0
- pycountry==24.6.1
- pycparser==2.22
- pyflakes==3.3.2
- pyglottolog==3.14.0
- pygments==2.19.1
- pyigt==2.2.0
- pylatexenc==2.10
- pyparsing==3.2.3
- pytest==8.3.5
- pytest-cov==6.0.0
- python-dateutil==2.9.0.post0
- python-frontmatter==1.1.0
- python-nexus==2.9.0
- pyyaml==6.0.2
- rapidfuzz==3.12.2
- rdflib==7.1.4
- readme-renderer==44.0
- referencing==0.36.2
- regex==2024.11.6
- requests==2.32.3
- requests-toolbelt==1.0.0
- rfc3986==1.5.0
- rich==14.0.0
- rpds-py==0.24.0
- secretstorage==3.3.3
- segments==2.3.0
- six==1.17.0
- smmap==5.0.2
- soupsieve==2.6
- sqlalchemy==1.4.54
- tabulate==0.9.0
- termcolor==3.0.0
- texsoup==0.3.1
- thefuzz==0.22.1
- tomli==2.2.1
- tqdm==4.67.1
- twine==6.1.0
- typing-extensions==4.13.0
- unidecode==1.3.8
- uritemplate==4.1.1
- urllib3==2.3.0
- wcwidth==0.2.13
- whoosh==2.7.4
- xlrd==2.0.1
- zenodoclient==0.5.1
- zipp==3.21.0
prefix: /opt/conda/envs/pylexibank
| [
"tests/test_dataset.py::test_Dataset_tokenizer"
] | [] | [
"tests/test_dataset.py::test_Item",
"tests/test_dataset.py::test_Unmapped",
"tests/test_dataset.py::test_invalid_dataset",
"tests/test_dataset.py::test_tokenizer[^b$-b]",
"tests/test_dataset.py::test_tokenizer[aba-c",
"tests/test_dataset.py::test_tokenizer[bab-b"
] | [] | Apache License 2.0 | 5,739 | 595 | [
"src/pylexibank/dataset.py"
] |
omry__omegaconf-53 | db110e3b6effc337fb2dc11fe27b62f24b2f49c2 | 2019-11-05 09:00:30 | db110e3b6effc337fb2dc11fe27b62f24b2f49c2 | coveralls: ## Pull Request Test Coverage Report for [Build 256](https://coveralls.io/builds/26766909)
* **4** of **4** **(100.0%)** changed or added relevant lines in **2** files are covered.
* No unchanged relevant lines lost coverage.
* Overall coverage decreased (**-0.08%**) to **99.19%**
---
| Totals | [](https://coveralls.io/builds/26766909) |
| :-- | --: |
| Change from base [Build 254](https://coveralls.io/builds/26734871): | -0.08% |
| Covered Lines: | 2449 |
| Relevant Lines: | 2469 |
---
##### 💛 - [Coveralls](https://coveralls.io)
| diff --git a/omegaconf/config.py b/omegaconf/config.py
index 0a8995d..a1b3f14 100644
--- a/omegaconf/config.py
+++ b/omegaconf/config.py
@@ -83,6 +83,13 @@ class Config(object):
assert value is None or isinstance(value, bool)
self.__dict__['flags'][flag] = value
+ def _get_node_flag(self, flag):
+ """
+ :param flag: flag to inspect
+ :return: the state of the flag on this node.
+ """
+ return self.__dict__['flags'][flag]
+
def _get_flag(self, flag):
"""
Returns True if this config node flag is set
diff --git a/omegaconf/omegaconf.py b/omegaconf/omegaconf.py
index 03e908b..df3a92b 100644
--- a/omegaconf/omegaconf.py
+++ b/omegaconf/omegaconf.py
@@ -228,7 +228,8 @@ def flag_override(config, name, value):
# noinspection PyProtectedMember
@contextmanager
def read_write(config):
- prev_state = OmegaConf.is_readonly(config)
+ # noinspection PyProtectedMember
+ prev_state = config._get_node_flag("readonly")
try:
OmegaConf.set_readonly(config, False)
yield config
@@ -238,7 +239,8 @@ def read_write(config):
@contextmanager
def open_dict(config):
- prev_state = OmegaConf.is_struct(config)
+ # noinspection PyProtectedMember
+ prev_state = config._get_node_flag("struct")
try:
OmegaConf.set_struct(config, False)
yield config
open_dict changes the flag value to the inherited value
for a nested config:
```yaml
foo:
  bar:
    baz: 10
```
if foo is struct, and we do
```python
with open_dict(cfg.foo.bar):
    cfg.foo.bar.baz = 20
```
the struct flag of foo.bar would be changed to True. | omry/omegaconf | diff --git a/tests/test_base_config.py b/tests/test_base_config.py
index 713ff52..b7112ed 100644
--- a/tests/test_base_config.py
+++ b/tests/test_base_config.py
@@ -275,3 +275,19 @@ def test_struct_override(src, func, expectation):
with does_not_raise():
with open_dict(c):
func(c)
+
+
[email protected]("flag_name,ctx", [("struct", open_dict), ("readonly", read_write)])
+def test_open_dict_restore(flag_name, ctx):
+ """
+ Tests that internal flags are restored properly when applying context on a child node
+ """
+ cfg = OmegaConf.create({"foo": {"bar": 10}})
+ cfg._set_flag(flag_name, True)
+ assert cfg._get_node_flag(flag_name)
+ assert not cfg.foo._get_node_flag(flag_name)
+ with ctx(cfg.foo):
+ cfg.foo.bar = 20
+ assert cfg._get_node_flag(flag_name)
+ assert not cfg.foo._get_node_flag(flag_name)
+
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 2
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"coveralls",
"sphinx"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
Babel==2.11.0
certifi==2021.5.30
charset-normalizer==2.0.12
coverage==6.2
coveralls==3.3.1
docopt==0.6.2
docutils==0.18.1
idna==3.10
imagesize==1.4.1
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1631916693255/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
Jinja2==3.0.3
MarkupSafe==2.0.1
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
-e git+https://github.com/omry/omegaconf.git@db110e3b6effc337fb2dc11fe27b62f24b2f49c2#egg=omegaconf
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
Pygments==2.14.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytz==2025.2
PyYAML==6.0.1
requests==2.27.1
six==1.17.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: omegaconf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib-metadata=4.8.1=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- babel==2.11.0
- charset-normalizer==2.0.12
- coverage==6.2
- coveralls==3.3.1
- docopt==0.6.2
- docutils==0.18.1
- idna==3.10
- imagesize==1.4.1
- jinja2==3.0.3
- markupsafe==2.0.1
- pygments==2.14.0
- pytz==2025.2
- pyyaml==6.0.1
- requests==2.27.1
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- urllib3==1.26.20
prefix: /opt/conda/envs/omegaconf
| [
"tests/test_base_config.py::test_open_dict_restore[struct-open_dict]",
"tests/test_base_config.py::test_open_dict_restore[readonly-read_write]"
] | [] | [
"tests/test_base_config.py::test_set_value[input_0-foo-10-expected0]",
"tests/test_base_config.py::test_set_value[input_1-foo-value1-expected1]",
"tests/test_base_config.py::test_set_value[input_2-foo-value2-expected2]",
"tests/test_base_config.py::test_set_value[input_3-foo-value3-expected3]",
"tests/test_base_config.py::test_set_value[input_4-0-10-expected4]",
"tests/test_base_config.py::test_set_value[input_5-1-10-expected5]",
"tests/test_base_config.py::test_set_value[input_6-1-value6-expected6]",
"tests/test_base_config.py::test_set_value[input_7-1-value7-expected7]",
"tests/test_base_config.py::test_set_value[input_8-1-value8-expected8]",
"tests/test_base_config.py::test_set_value_validation_fail[input_0-foo-str]",
"tests/test_base_config.py::test_set_value_validation_fail[input_1-1-str]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_0]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_1]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_2]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_3]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_4]",
"tests/test_base_config.py::test_empty[input_0-True]",
"tests/test_base_config.py::test_empty[input_1-True]",
"tests/test_base_config.py::test_empty[input_2-False]",
"tests/test_base_config.py::test_empty[input_3-False]",
"tests/test_base_config.py::test_repr[input_0]",
"tests/test_base_config.py::test_repr[input_1]",
"tests/test_base_config.py::test_repr[input_2]",
"tests/test_base_config.py::test_repr[input_3]",
"tests/test_base_config.py::test_repr[input_4]",
"tests/test_base_config.py::test_repr[input_5]",
"tests/test_base_config.py::test_repr[input_6]",
"tests/test_base_config.py::test_str[input_0]",
"tests/test_base_config.py::test_str[input_1]",
"tests/test_base_config.py::test_str[input_2]",
"tests/test_base_config.py::test_str[input_3]",
"tests/test_base_config.py::test_str[input_4]",
"tests/test_base_config.py::test_str[input_5]",
"tests/test_base_config.py::test_str[input_6]",
"tests/test_base_config.py::test_flag_dict[readonly]",
"tests/test_base_config.py::test_flag_dict[struct]",
"tests/test_base_config.py::test_freeze_nested_dict[readonly]",
"tests/test_base_config.py::test_freeze_nested_dict[struct]",
"tests/test_base_config.py::test_deepcopy[src0]",
"tests/test_base_config.py::test_deepcopy[src1]",
"tests/test_base_config.py::test_deepcopy[src2]",
"tests/test_base_config.py::test_deepcopy[src3]",
"tests/test_base_config.py::test_deepcopy_readonly[src0]",
"tests/test_base_config.py::test_deepcopy_readonly[src1]",
"tests/test_base_config.py::test_deepcopy_readonly[src2]",
"tests/test_base_config.py::test_deepcopy_readonly[src3]",
"tests/test_base_config.py::test_deepcopy_struct[src0]",
"tests/test_base_config.py::test_deepcopy_struct[src1]",
"tests/test_base_config.py::test_deepcopy_struct[src2]",
"tests/test_base_config.py::test_deepcopy_struct[src3]",
"tests/test_base_config.py::test_deepcopy_after_del",
"tests/test_base_config.py::test_deepcopy_with_interpolation",
"tests/test_base_config.py::test_deepcopy_and_merge_and_flags",
"tests/test_base_config.py::test_flag_override[src0-struct-False-<lambda>-expectation0]",
"tests/test_base_config.py::test_flag_override[src1-readonly-False-<lambda>-expectation1]",
"tests/test_base_config.py::test_read_write_override[src0-<lambda>-expectation0]",
"tests/test_base_config.py::test_read_write_override[src1-<lambda>-expectation1]",
"tests/test_base_config.py::test_tokenize_with_escapes[dog,cat-tokenized0]",
"tests/test_base_config.py::test_tokenize_with_escapes[dog\\\\,cat\\\\",
"tests/test_base_config.py::test_tokenize_with_escapes[dog,\\\\",
"tests/test_base_config.py::test_tokenize_with_escapes[\\\\",
"tests/test_base_config.py::test_tokenize_with_escapes[dog,",
"tests/test_base_config.py::test_tokenize_with_escapes[whitespace\\\\",
"tests/test_base_config.py::test_tokenize_with_escapes[None-tokenized8]",
"tests/test_base_config.py::test_tokenize_with_escapes[-tokenized9]",
"tests/test_base_config.py::test_tokenize_with_escapes[no",
"tests/test_base_config.py::test_struct_override[src0-<lambda>-expectation0]"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,745 | 406 | [
"omegaconf/config.py",
"omegaconf/omegaconf.py"
] |
joke2k__faker-1037 | 8941a9f149405d0f7fedc509a8d54b3aec50e4a0 | 2019-11-05 12:56:40 | 4b7a1898c3d86cafeca18d08ad612da475bdbe26 | fcurella: Thank you so much @malefice !
I think I'm OK with a higher memory usage as long as it's not crazy high and it's only if the user requests some IP address. In other words, I wouldn't compile the list on instantiation, but the first time it's needed.
malefice: Yeah, that is the plan. As for higher memory usage, it should be slightly higher at worst. We are talking about caching several lists with possibly a few hundred elements each, and they will only be generated during a cache miss. I just tested a rough implementation for private IPv4 networks, and generating 1000 such addresses from the same provider instance only takes 0.126s on my machine, and generating 100,000 only takes 1.4s. I will just need to polish/beautify it.
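The cache-on-first-use behavior being discussed can be sketched with the stdlib like this (a minimal illustration, not faker's actual code — the class and attribute names here are hypothetical):

```python
from ipaddress import ip_network

class Provider:
    """Illustrative only: the expensive network list is built on the
    first request, not at instantiation, then reused per instance."""

    def _get_all_networks(self):
        if not hasattr(self, '_cached_all_networks'):
            # The expensive computation happens at most once per instance
            self._cached_all_networks = [ip_network('0.0.0.0/0')]
        return self._cached_all_networks

provider = Provider()
first = provider._get_all_networks()
second = provider._get_all_networks()
print(first is second)  # → True, the same cached list is returned
```

A user who never asks for an IP address never pays the cost, since nothing is computed until the first call.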
malefice: In the process of polishing my initial implementation to improve performance, I also found out that the current IPv4 generation can be biased against IP addresses in larger subnets, leading to more recycled IPs from smaller subnets when generating in bulk. This is because the equal distribution is applied at the subnet level, regardless of how large each subnet is. To demonstrate, suppose that the resulting network list is the following:
- A - ip_network('10.0.0.0/8')
- B - ip_network('192.168.0.0/24')
- C - ip_network('192.168.1.0/24')
- D - ip_network('192.168.2.0/24')
Under the current code, each of the networks A, B, C, and D has a 25% chance of being selected, but each IP in network A only has a 1 in 16,777,216 chance of being selected if network A was already chosen. On the other hand, each IP in networks B, C, and D has a 1 in 256 chance of being selected if the corresponding network was already chosen.
Each IP in network A then only has around a 1 in 67 million chance of being generated (25% of 1 in 16,777,216), but each IP in networks B, C, and D has around 1 in 1024 chance (25% of 1 in 256). This proves that the IP addresses generated are not equally distributed.
To fix this, I applied weights in the selection process, and the weights are simply the sizes of the subnets, and each IP regardless of subnet has a 1 in X chance of being selected, where X is the total number of addresses across all the subnets.
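The bias correction can be sketched with the stdlib like this — the network list mirrors the A–D example above, and `random.choices` stands in for the internal weighted helper (a sketch, not the actual faker implementation):

```python
import random
from ipaddress import ip_network

networks = [
    ip_network('10.0.0.0/8'),      # A: 16,777,216 addresses
    ip_network('192.168.0.0/24'),  # B: 256 addresses
    ip_network('192.168.1.0/24'),  # C: 256 addresses
    ip_network('192.168.2.0/24'),  # D: 256 addresses
]
# Weight each subnet by its size so every address across all subnets
# has the same 1-in-X chance, X being the total address count.
weights = [net.num_addresses for net in networks]

rng = random.Random(0)
# Pick a subnet with probability proportional to its size,
# then a host uniformly within that subnet.
subnet = rng.choices(networks, weights=weights, k=1)[0]
address = subnet[rng.randrange(subnet.num_addresses)]
print(address)
```

With these weights, network A is picked 16,777,216 out of 16,777,984 times, which is exactly what makes each individual address equally likely.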
With this bias correction and the caching behavior, I can generate 1 million addresses using `ipv4()` in around 23 to 24 seconds on my machine. Comparing it to 1000 addresses in 17.8 seconds of the old implementation, this is a 740x to 750x performance improvement.
fcurella: NIce work @malefice ! ❤️
Could you also write some tests for the `except` clauses missed [here](https://coveralls.io/builds/26824112/source?filename=faker%2Fproviders%2Finternet%2F__init__.py#L360)?
| diff --git a/faker/build_docs.py b/faker/build_docs.py
index 3afc4714..17a96d3d 100644
--- a/faker/build_docs.py
+++ b/faker/build_docs.py
@@ -15,6 +15,12 @@ def write(fh, s):
return fh.write(s.encode('utf-8'))
+def write_base_provider(fh, doc, base_provider):
+ formatters = doc.get_provider_formatters(base_provider)
+ write(fh, ':github_url: hide\n\n')
+ write_provider(fh, doc, base_provider, formatters)
+
+
def write_provider(fh, doc, provider, formatters, excludes=None):
if excludes is None:
@@ -47,16 +53,21 @@ def write_provider(fh, doc, provider, formatters, excludes=None):
def write_docs(*args, **kwargs):
from faker import Faker, documentor
from faker.config import DEFAULT_LOCALE, AVAILABLE_LOCALES
-
- fake = Faker(locale=DEFAULT_LOCALE)
-
from faker.providers import BaseProvider
- base_provider_formatters = [f for f in dir(BaseProvider)]
+ fake = Faker(locale=DEFAULT_LOCALE)
doc = documentor.Documentor(fake)
- formatters = doc.get_formatters(with_args=True, with_defaults=True)
+ # Write docs for fakers.providers.BaseProvider
+ base_provider = BaseProvider(fake)
+ fname = os.path.join(DOCS_ROOT, 'providers', 'BaseProvider.rst')
+ with open(fname, 'wb') as fh:
+ write_base_provider(fh, doc, base_provider)
+ # Write docs for default locale providers
+ base_provider_formatters = [f for f in dir(BaseProvider)]
+ formatters = doc.get_formatters(with_args=True, with_defaults=True,
+ excludes=base_provider_formatters)
for provider, fakers in formatters:
provider_name = doc.get_provider_name(provider)
fname = os.path.join(DOCS_ROOT, 'providers', '%s.rst' % provider_name)
@@ -64,15 +75,18 @@ def write_docs(*args, **kwargs):
write(fh, ':github_url: hide\n\n')
write_provider(fh, doc, provider, fakers)
+ # Write providers index page
with open(os.path.join(DOCS_ROOT, 'providers.rst'), 'wb') as fh:
write(fh, ':github_url: hide\n\n')
write(fh, 'Providers\n')
write(fh, '=========\n')
write(fh, '.. toctree::\n')
write(fh, ' :maxdepth: 2\n\n')
+ write(fh, ' providers/BaseProvider\n')
[write(fh, ' providers/%s\n' % doc.get_provider_name(provider))
for provider, fakers in formatters]
+ # Write docs for locale-specific providers
AVAILABLE_LOCALES = sorted(AVAILABLE_LOCALES)
for lang in AVAILABLE_LOCALES:
fname = os.path.join(DOCS_ROOT, 'locales', '%s.rst' % lang)
@@ -90,6 +104,7 @@ def write_docs(*args, **kwargs):
excludes=base_provider_formatters):
write_provider(fh, d, p, fs)
+ # Write locales index page
with open(os.path.join(DOCS_ROOT, 'locales.rst'), 'wb') as fh:
write(fh, ':github_url: hide\n\n')
write(fh, 'Locales\n')
diff --git a/faker/documentor.py b/faker/documentor.py
index 034c38f0..378104ae 100644
--- a/faker/documentor.py
+++ b/faker/documentor.py
@@ -22,7 +22,6 @@ class Documentor(object):
self.already_generated = []
def get_formatters(self, locale=None, excludes=None, **kwargs):
-
self.max_name_len = 0
self.already_generated = [] if excludes is None else excludes[:]
formatters = []
diff --git a/faker/providers/internet/__init__.py b/faker/providers/internet/__init__.py
index 91bc9f2f..d7775597 100644
--- a/faker/providers/internet/__init__.py
+++ b/faker/providers/internet/__init__.py
@@ -10,6 +10,7 @@ from ipaddress import ip_address, ip_network, IPV4LENGTH, IPV6LENGTH
# from faker.generator import random
# from faker.providers.lorem.la import Provider as Lorem
from faker.utils.decorators import lowercase, slugify, slugify_unicode
+from faker.utils.distribution import choices_distribution
localized = True
@@ -29,12 +30,6 @@ class _IPv4Constants:
'c': ip_network('192.0.0.0/3'),
}
- _linklocal_network = ip_network('169.254.0.0/16')
-
- _loopback_network = ip_network('127.0.0.0/8')
-
- _multicast_network = ip_network('224.0.0.0/4')
-
# Three common private networks from class A, B and CIDR
# to generate private addresses from.
_private_networks = [
@@ -49,8 +44,8 @@ class _IPv4Constants:
_excluded_networks = [
ip_network('0.0.0.0/8'),
ip_network('100.64.0.0/10'),
- ip_network('127.0.0.0/8'),
- ip_network('169.254.0.0/16'),
+ ip_network('127.0.0.0/8'), # loopback network
+ ip_network('169.254.0.0/16'), # linklocal network
ip_network('192.0.0.0/24'),
ip_network('192.0.2.0/24'),
ip_network('192.31.196.0/24'),
@@ -60,12 +55,9 @@ class _IPv4Constants:
ip_network('198.18.0.0/15'),
ip_network('198.51.100.0/24'),
ip_network('203.0.113.0/24'),
+ ip_network('224.0.0.0/4'), # multicast network
ip_network('240.0.0.0/4'),
ip_network('255.255.255.255/32'),
- ] + [
- _linklocal_network,
- _loopback_network,
- _multicast_network,
]
@@ -251,14 +243,123 @@ class Provider(BaseProvider):
return self.generator.parse(pattern)
- def _random_ipv4_address_from_subnet(self, subnet, network=False):
+ def _get_all_networks_and_weights(self, address_class=None):
+ """
+ Produces a 2-tuple of valid IPv4 networks and corresponding relative weights
+
+ :param address_class: IPv4 address class (a, b, or c)
+ """
+ # If `address_class` has an unexpected value, use the whole IPv4 pool
+ if address_class in _IPv4Constants._network_classes.keys():
+ networks_attr = '_cached_all_class_{}_networks'.format(address_class)
+ all_networks = [_IPv4Constants._network_classes[address_class]]
+ else:
+ networks_attr = '_cached_all_networks'
+ all_networks = [ip_network('0.0.0.0/0')]
+
+ # Return cached network and weight data if available
+ weights_attr = '{}_weights'.format(networks_attr)
+ if hasattr(self, networks_attr) and hasattr(self, weights_attr):
+ return getattr(self, networks_attr), getattr(self, weights_attr)
+
+ # Otherwise, compute for list of networks (excluding special networks)
+ all_networks = self._exclude_ipv4_networks(
+ all_networks,
+ _IPv4Constants._excluded_networks,
+ )
+
+ # Then compute for list of corresponding relative weights
+ weights = [network.num_addresses for network in all_networks]
+
+ # Then cache and return results
+ setattr(self, networks_attr, all_networks)
+ setattr(self, weights_attr, weights)
+ return all_networks, weights
+
+ def _get_private_networks_and_weights(self, address_class=None):
+ """
+ Produces an OrderedDict of valid private IPv4 networks and corresponding relative weights
+
+ :param address_class: IPv4 address class (a, b, or c)
+ """
+ # If `address_class` has an unexpected value, choose a valid value at random
+ if address_class not in _IPv4Constants._network_classes.keys():
+ address_class = self.ipv4_network_class()
+
+ # Return cached network and weight data if available for a specific address class
+ networks_attr = '_cached_private_class_{}_networks'.format(address_class)
+ weights_attr = '{}_weights'.format(networks_attr)
+ if hasattr(self, networks_attr) and hasattr(self, weights_attr):
+ return getattr(self, networks_attr), getattr(self, weights_attr)
+
+ # Otherwise, compute for list of private networks (excluding special networks)
+ supernet = _IPv4Constants._network_classes[address_class]
+ private_networks = [
+ subnet for subnet in _IPv4Constants._private_networks
+ if subnet.overlaps(supernet)
+ ]
+ private_networks = self._exclude_ipv4_networks(
+ private_networks,
+ _IPv4Constants._excluded_networks,
+ )
+
+ # Then compute for list of corresponding relative weights
+ weights = [network.num_addresses for network in private_networks]
+
+ # Then cache and return results
+ setattr(self, networks_attr, private_networks)
+ setattr(self, weights_attr, weights)
+ return private_networks, weights
+
+ def _get_public_networks_and_weights(self, address_class=None):
+ """
+ Produces a 2-tuple of valid public IPv4 networks and corresponding relative weights
+
+ :param address_class: IPv4 address class (a, b, or c)
+ """
+ # If `address_class` has an unexpected value, choose a valid value at random
+ if address_class not in _IPv4Constants._network_classes.keys():
+ address_class = self.ipv4_network_class()
+
+ # Return cached network and weight data if available for a specific address class
+ networks_attr = '_cached_public_class_{}_networks'.format(address_class)
+ weights_attr = '{}_weights'.format(networks_attr)
+ if hasattr(self, networks_attr) and hasattr(self, weights_attr):
+ return getattr(self, networks_attr), getattr(self, weights_attr)
+
+ # Otherwise, compute for list of public networks (excluding private and special networks)
+ public_networks = [_IPv4Constants._network_classes[address_class]]
+ public_networks = self._exclude_ipv4_networks(
+ public_networks,
+ _IPv4Constants._private_networks +
+ _IPv4Constants._excluded_networks,
+ )
+
+ # Then compute for list of corresponding relative weights
+ weights = [network.num_addresses for network in public_networks]
+
+ # Then cache and return results
+ setattr(self, networks_attr, public_networks)
+ setattr(self, weights_attr, weights)
+ return public_networks, weights
+
+ def _random_ipv4_address_from_subnets(self, subnets, weights=None, network=False):
"""
Produces a random IPv4 address or network with a valid CIDR
- from within a given subnet.
+ from within the given subnets using a distribution described
+ by weights.
- :param subnet: IPv4Network to choose from within
+ :param subnets: List of IPv4Networks to choose from within
+ :param weights: List of weights corresponding to the individual IPv4Networks
:param network: Return a network address, and not an IP address
+ :return:
"""
+ # If the weights argument has an invalid value, default to equal distribution
+ try:
+ subnet = choices_distribution(subnets, weights, random=self.generator.random, length=1)[0]
+ except (AssertionError, TypeError):
+ subnet = self.generator.random.choice(subnets)
+
address = str(
subnet[self.generator.random.randint(
0, subnet.num_addresses - 1,
@@ -283,6 +384,7 @@ class Provider(BaseProvider):
:param networks_to_exclude: List of IPv4 networks to exclude
:returns: Flat list of IPv4 networks
"""
+ networks_to_exclude.sort(key=lambda x: x.prefixlen)
for network_to_exclude in networks_to_exclude:
def _exclude_ipv4_network(network):
"""
@@ -327,7 +429,7 @@ class Provider(BaseProvider):
def ipv4(self, network=False, address_class=None, private=None):
"""
- Produce a random IPv4 address or network with a valid CIDR.
+ Returns a random IPv4 address or network with a valid CIDR.
:param network: Network address
:param address_class: IPv4 address class (a, b, or c)
@@ -340,25 +442,9 @@ class Provider(BaseProvider):
elif private is False:
return self.ipv4_public(address_class=address_class,
network=network)
-
- # if neither private nor public is required explicitly,
- # generate from whole requested address space
- if address_class:
- all_networks = [_IPv4Constants._network_classes[address_class]]
else:
- # if no address class is choosen, use whole IPv4 pool
- all_networks = [ip_network('0.0.0.0/0')]
-
- # exclude special networks
- all_networks = self._exclude_ipv4_networks(
- all_networks,
- _IPv4Constants._excluded_networks,
- )
-
- # choose random network from the list
- random_network = self.generator.random.choice(all_networks)
-
- return self._random_ipv4_address_from_subnet(random_network, network)
+ all_networks, weights = self._get_all_networks_and_weights(address_class=address_class)
+ return self._random_ipv4_address_from_subnets(all_networks, weights=weights, network=network)
def ipv4_private(self, network=False, address_class=None):
"""
@@ -368,26 +454,8 @@ class Provider(BaseProvider):
:param address_class: IPv4 address class (a, b, or c)
:returns: Private IPv4
"""
- # compute private networks from given class
- supernet = _IPv4Constants._network_classes[
- address_class or self.ipv4_network_class()
- ]
-
- private_networks = [
- subnet for subnet in _IPv4Constants._private_networks
- if subnet.overlaps(supernet)
- ]
-
- # exclude special networks
- private_networks = self._exclude_ipv4_networks(
- private_networks,
- _IPv4Constants._excluded_networks,
- )
-
- # choose random private network from the list
- private_network = self.generator.random.choice(private_networks)
-
- return self._random_ipv4_address_from_subnet(private_network, network)
+ private_networks, weights = self._get_private_networks_and_weights(address_class=address_class)
+ return self._random_ipv4_address_from_subnets(private_networks, weights=weights, network=network)
def ipv4_public(self, network=False, address_class=None):
"""
@@ -397,22 +465,8 @@ class Provider(BaseProvider):
:param address_class: IPv4 address class (a, b, or c)
:returns: Public IPv4
"""
- # compute public networks
- public_networks = [_IPv4Constants._network_classes[
- address_class or self.ipv4_network_class()
- ]]
-
- # exclude private and excluded special networks
- public_networks = self._exclude_ipv4_networks(
- public_networks,
- _IPv4Constants._private_networks +
- _IPv4Constants._excluded_networks,
- )
-
- # choose random public network from the list
- public_network = self.generator.random.choice(public_networks)
-
- return self._random_ipv4_address_from_subnet(public_network, network)
+ public_networks, weights = self._get_public_networks_and_weights(address_class=address_class)
+ return self._random_ipv4_address_from_subnets(public_networks, weights=weights, network=network)
def ipv6(self, network=False):
"""Produce a random IPv6 address or network with a valid CIDR"""
| ipv4 faker very slow
Hi! We updated from Faker 0.9.0 to 0.9.2 and noticed the speed of our tests decreased by 5X. We tracked the bug to the changes in the ipv4 faker done in commit 3b847f48bbfc33aed3e86cffcedbcfd1c0cccec5
### Steps to reproduce
1. Check out commit 3b847f48bbfc33aed3e86cffcedbcfd1c0cccec5
1. Run the following: `time python -c 'import faker; fake = faker.Faker(); [fake.ipv4() for _ in range(100)]'`
1. Check its output to see that it was slow. On my laptop it took about 6 seconds.
1. Run `git checkout HEAD~1` to go to a working commit
1. Run the command of step 2 again and it will work much faster (in my case it took 0.4 seconds)
### Expected behavior
Generating ipv4 addresses should be fast
### Actual behavior
Generating ipv4 addresses is slow
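A minimal stdlib sketch of where the time goes (this is not faker's code, just an assumption-laden illustration): carving the special-purpose networks out of the whole IPv4 pool with `address_exclude` is comparatively expensive, so doing it on every `ipv4()` call adds up, while computing the exclusion once and reusing the result does not:

```python
import timeit
from ipaddress import ip_network

# A small stand-in for the special networks excluded from generation
EXCLUDED = [
    ip_network('127.0.0.0/8'),
    ip_network('169.254.0.0/16'),
    ip_network('224.0.0.0/4'),
]

def exclude_all():
    """Split the whole IPv4 pool into subnets avoiding EXCLUDED."""
    networks = [ip_network('0.0.0.0/0')]
    for excluded in EXCLUDED:
        networks = [part
                    for net in networks
                    for part in (net.address_exclude(excluded)
                                 if excluded.subnet_of(net) else [net])]
    return networks

per_call = timeit.timeit(exclude_all, number=100)   # recomputed each call
cached = exclude_all()
reuse = timeit.timeit(lambda: cached, number=100)   # computed once, reused
print(per_call, reuse)
```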
| joke2k/faker | diff --git a/tests/test_factory.py b/tests/test_factory.py
index 110728cb..a87fc3bd 100644
--- a/tests/test_factory.py
+++ b/tests/test_factory.py
@@ -6,6 +6,10 @@ import re
import string
import sys
import unittest
+try:
+ from unittest.mock import patch, PropertyMock
+except ImportError:
+ from mock import patch, PropertyMock
from collections import OrderedDict
from ipaddress import ip_address, ip_network
@@ -526,6 +530,41 @@ class FactoryTestCase(unittest.TestCase):
email = factory.email()
assert '@' in email
+ def test_ipv4_caching(self):
+ from faker.providers.internet import Provider, _IPv4Constants
+
+ # The extra [None] here is to test code path involving whole IPv4 pool
+ for address_class in list(_IPv4Constants._network_classes.keys()) + [None]:
+ if address_class is None:
+ networks_attr = '_cached_all_networks'
+ else:
+ networks_attr = '_cached_all_class_{}_networks'.format(address_class)
+ weights_attr = '{}_weights'.format(networks_attr)
+ provider = Provider(self.generator)
+
+ # First, test cache creation
+ assert not hasattr(provider, networks_attr)
+ assert not hasattr(provider, weights_attr)
+ provider.ipv4(address_class=address_class)
+ assert hasattr(provider, networks_attr)
+ assert hasattr(provider, weights_attr)
+
+ # Then, test cache access on subsequent calls
+ with patch.object(Provider, networks_attr, create=True,
+ new_callable=PropertyMock) as mock_networks_cache:
+ with patch.object(Provider, weights_attr, create=True,
+ new_callable=PropertyMock) as mock_weights_cache:
+ # Keep test fast by patching the cache attributes to return something simple
+ mock_networks_cache.return_value = [ip_network('10.0.0.0/24')]
+ mock_weights_cache.return_value = [10]
+ for _ in range(100):
+ provider.ipv4(address_class=address_class)
+
+ # Python's hasattr() internally calls getattr()
+ # So each call to ipv4() accesses the cache attributes twice
+ assert mock_networks_cache.call_count == 200
+ assert mock_weights_cache.call_count == 200
+
def test_ipv4(self):
from faker.providers.internet import Provider
@@ -565,6 +604,37 @@ class FactoryTestCase(unittest.TestCase):
klass = provider.ipv4_network_class()
assert klass in 'abc'
+ def test_ipv4_private_caching(self):
+ from faker.providers.internet import Provider, _IPv4Constants
+
+ for address_class in _IPv4Constants._network_classes.keys():
+ networks_attr = '_cached_private_class_{}_networks'.format(address_class)
+ weights_attr = '{}_weights'.format(networks_attr)
+ provider = Provider(self.generator)
+
+ # First, test cache creation
+ assert not hasattr(provider, networks_attr)
+ assert not hasattr(provider, weights_attr)
+ provider.ipv4_private(address_class=address_class)
+ assert hasattr(provider, networks_attr)
+ assert hasattr(provider, weights_attr)
+
+ # Then, test cache access on subsequent calls
+ with patch.object(Provider, networks_attr, create=True,
+ new_callable=PropertyMock) as mock_networks_cache:
+ with patch.object(Provider, weights_attr, create=True,
+ new_callable=PropertyMock) as mock_weights_cache:
+ # Keep test fast by patching the cache attributes to return something simple
+ mock_networks_cache.return_value = [ip_network('10.0.0.0/24')]
+ mock_weights_cache.return_value = [10]
+ for _ in range(100):
+ provider.ipv4_private(address_class=address_class)
+
+ # Python's hasattr() internally calls getattr()
+ # So each call to ipv4_private() accesses the cache attributes twice
+ assert mock_networks_cache.call_count == 200
+ assert mock_weights_cache.call_count == 200
+
def test_ipv4_private(self):
from faker.providers.internet import Provider
provider = Provider(self.generator)
@@ -638,6 +708,37 @@ class FactoryTestCase(unittest.TestCase):
assert ip_address(address) >= class_min
assert ip_address(address) <= class_max
+ def test_ipv4_public_caching(self):
+ from faker.providers.internet import Provider, _IPv4Constants
+
+ for address_class in _IPv4Constants._network_classes.keys():
+ networks_attr = '_cached_public_class_{}_networks'.format(address_class)
+ weights_attr = '{}_weights'.format(networks_attr)
+ provider = Provider(self.generator)
+
+ # First, test cache creation
+ assert not hasattr(provider, networks_attr)
+ assert not hasattr(provider, weights_attr)
+ provider.ipv4_public(address_class=address_class)
+ assert hasattr(provider, networks_attr)
+ assert hasattr(provider, weights_attr)
+
+ # Then, test cache access on subsequent calls
+ with patch.object(Provider, networks_attr, create=True,
+ new_callable=PropertyMock) as mock_networks_cache:
+ with patch.object(Provider, weights_attr, create=True,
+ new_callable=PropertyMock) as mock_weights_cache:
+ # Keep test fast by patching the cache attributes to return something simple
+ mock_networks_cache.return_value = [ip_network('10.0.0.0/24')]
+ mock_weights_cache.return_value = [10]
+ for _ in range(100):
+ provider.ipv4_public(address_class=address_class)
+
+ # Python's hasattr() internally calls getattr()
+ # So each call to ipv4_public() accesses the cache attributes twice
+ assert mock_networks_cache.call_count == 200
+ assert mock_weights_cache.call_count == 200
+
def test_ipv4_public(self):
from faker.providers.internet import Provider
provider = Provider(self.generator)
@@ -697,6 +798,39 @@ class FactoryTestCase(unittest.TestCase):
assert len(address) <= 15
assert not ip_address(address).is_private, address
+ def test_ipv4_distribution_selection(self):
+ from faker.providers.internet import Provider
+ from faker.utils.distribution import choices_distribution
+ provider = Provider(self.generator)
+
+ subnets = [ip_network('10.0.0.0/8'), ip_network('11.0.0.0/8')]
+ valid_weights = [1, 1]
+ list_of_invalid_weights = [
+ [1, 2, 3], # List size does not match subnet list size
+ ['a', 'b'], # List size matches, but elements are invalid
+ None, # Not a list or valid iterable
+ ]
+
+ with patch('faker.providers.internet.choices_distribution',
+ wraps=choices_distribution) as mock_choices_fn:
+ with patch('faker.generator.random.choice',
+ wraps=random.choice) as mock_random_choice:
+ # If weights argument is valid, only `choices_distribution` should be called
+ provider._random_ipv4_address_from_subnets(subnets, valid_weights)
+ assert mock_choices_fn.call_count == 1
+ assert mock_random_choice.call_count == 0
+
+ # If weights argument is invalid, calls to `choices_distribution` will fail
+ # and calls to `random.choice` will be made as failover behavior
+ for invalid_weights in list_of_invalid_weights:
+ # Reset mock objects for each iteration
+ mock_random_choice.reset_mock()
+ mock_choices_fn.reset_mock()
+
+ provider._random_ipv4_address_from_subnets(subnets, invalid_weights)
+ assert mock_choices_fn.call_count == 1
+ assert mock_random_choice.call_count == 1
+
def test_ipv6(self):
from faker.providers.internet import Provider
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_git_commit_hash",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 3
},
"num_modified_files": 3
} | 2.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest pytest-cov pytest-xdist pytest-mock pytest-asyncio"
],
"pre_install": null,
"python": "3.9",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | coverage==7.8.0
exceptiongroup==1.2.2
execnet==2.1.1
-e git+https://github.com/joke2k/faker.git@8941a9f149405d0f7fedc509a8d54b3aec50e4a0#egg=Faker
iniconfig==2.1.0
packaging==24.2
pluggy==1.5.0
pytest==8.3.5
pytest-asyncio==0.26.0
pytest-cov==6.0.0
pytest-mock==3.14.0
pytest-xdist==3.6.1
python-dateutil==2.9.0.post0
six==1.17.0
text-unidecode==1.3
tomli==2.2.1
typing_extensions==4.13.0
| name: faker
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.8.0
- exceptiongroup==1.2.2
- execnet==2.1.1
- iniconfig==2.1.0
- packaging==24.2
- pluggy==1.5.0
- pytest==8.3.5
- pytest-asyncio==0.26.0
- pytest-cov==6.0.0
- pytest-mock==3.14.0
- pytest-xdist==3.6.1
- python-dateutil==2.9.0.post0
- six==1.17.0
- text-unidecode==1.3
- tomli==2.2.1
- typing-extensions==4.13.0
prefix: /opt/conda/envs/faker
| [
"tests/test_factory.py::FactoryTestCase::test_ipv4_caching",
"tests/test_factory.py::FactoryTestCase::test_ipv4_distribution_selection",
"tests/test_factory.py::FactoryTestCase::test_ipv4_private_caching",
"tests/test_factory.py::FactoryTestCase::test_ipv4_public_caching"
] | [] | [
"tests/test_factory.py::FactoryTestCase::test_add_provider_gives_priority_to_newly_added_provider",
"tests/test_factory.py::FactoryTestCase::test_binary",
"tests/test_factory.py::FactoryTestCase::test_cli_seed",
"tests/test_factory.py::FactoryTestCase::test_cli_seed_with_repeat",
"tests/test_factory.py::FactoryTestCase::test_cli_verbosity",
"tests/test_factory.py::FactoryTestCase::test_command",
"tests/test_factory.py::FactoryTestCase::test_command_custom_provider",
"tests/test_factory.py::FactoryTestCase::test_documentor",
"tests/test_factory.py::FactoryTestCase::test_email",
"tests/test_factory.py::FactoryTestCase::test_ext_word_list",
"tests/test_factory.py::FactoryTestCase::test_format_calls_formatter_on_provider",
"tests/test_factory.py::FactoryTestCase::test_format_transfers_arguments_to_formatter",
"tests/test_factory.py::FactoryTestCase::test_get_formatter_returns_callable",
"tests/test_factory.py::FactoryTestCase::test_get_formatter_returns_correct_formatter",
"tests/test_factory.py::FactoryTestCase::test_get_formatter_throws_exception_on_incorrect_formatter",
"tests/test_factory.py::FactoryTestCase::test_instance_seed_chain",
"tests/test_factory.py::FactoryTestCase::test_invalid_locale",
"tests/test_factory.py::FactoryTestCase::test_ipv4",
"tests/test_factory.py::FactoryTestCase::test_ipv4_network_class",
"tests/test_factory.py::FactoryTestCase::test_ipv4_private",
"tests/test_factory.py::FactoryTestCase::test_ipv4_private_class_a",
"tests/test_factory.py::FactoryTestCase::test_ipv4_private_class_b",
"tests/test_factory.py::FactoryTestCase::test_ipv4_private_class_c",
"tests/test_factory.py::FactoryTestCase::test_ipv4_public",
"tests/test_factory.py::FactoryTestCase::test_ipv4_public_class_a",
"tests/test_factory.py::FactoryTestCase::test_ipv4_public_class_b",
"tests/test_factory.py::FactoryTestCase::test_ipv4_public_class_c",
"tests/test_factory.py::FactoryTestCase::test_ipv6",
"tests/test_factory.py::FactoryTestCase::test_language_code",
"tests/test_factory.py::FactoryTestCase::test_locale",
"tests/test_factory.py::FactoryTestCase::test_magic_call_calls_format",
"tests/test_factory.py::FactoryTestCase::test_magic_call_calls_format_with_arguments",
"tests/test_factory.py::FactoryTestCase::test_negative_pyfloat",
"tests/test_factory.py::FactoryTestCase::test_nl_BE_ssn_valid",
"tests/test_factory.py::FactoryTestCase::test_no_words",
"tests/test_factory.py::FactoryTestCase::test_no_words_paragraph",
"tests/test_factory.py::FactoryTestCase::test_no_words_sentence",
"tests/test_factory.py::FactoryTestCase::test_parse_returns_same_string_when_it_contains_no_curly_braces",
"tests/test_factory.py::FactoryTestCase::test_parse_returns_string_with_tokens_replaced_by_formatters",
"tests/test_factory.py::FactoryTestCase::test_password",
"tests/test_factory.py::FactoryTestCase::test_prefix_suffix_always_string",
"tests/test_factory.py::FactoryTestCase::test_pyfloat_in_range",
"tests/test_factory.py::FactoryTestCase::test_random_element",
"tests/test_factory.py::FactoryTestCase::test_random_number",
"tests/test_factory.py::FactoryTestCase::test_random_pyfloat",
"tests/test_factory.py::FactoryTestCase::test_random_pystr_characters",
"tests/test_factory.py::FactoryTestCase::test_random_sample_unique",
"tests/test_factory.py::FactoryTestCase::test_slugify",
"tests/test_factory.py::FactoryTestCase::test_some_words",
"tests/test_factory.py::FactoryTestCase::test_texts_chars_count",
"tests/test_factory.py::FactoryTestCase::test_texts_count",
"tests/test_factory.py::FactoryTestCase::test_texts_word_list",
"tests/test_factory.py::FactoryTestCase::test_unique_words",
"tests/test_factory.py::FactoryTestCase::test_us_ssn_valid",
"tests/test_factory.py::FactoryTestCase::test_words_ext_word_list",
"tests/test_factory.py::FactoryTestCase::test_words_ext_word_list_unique",
"tests/test_factory.py::FactoryTestCase::test_words_valueerror"
] | [] | MIT License | 5,751 | 3,876 | [
"faker/build_docs.py",
"faker/documentor.py",
"faker/providers/internet/__init__.py"
] |
iterative__dvc-2743 | 5e2fa3742b7b98327e131fd7d0025371b5322d4f | 2019-11-05 22:39:51 | 143cbdc828bbb90e764d657e8506c17564708a20 | diff --git a/dvc/ignore.py b/dvc/ignore.py
index fe78b649f..3d111e868 100644
--- a/dvc/ignore.py
+++ b/dvc/ignore.py
@@ -6,6 +6,7 @@ import os
from pathspec import PathSpec
from pathspec.patterns import GitWildMatchPattern
+from dvc.utils import dvc_walk
from dvc.utils import relpath
from dvc.utils.compat import open
@@ -47,6 +48,9 @@ class DvcIgnorePatterns(DvcIgnore):
return hash(self.ignore_file_path)
def __eq__(self, other):
+ if not isinstance(other, DvcIgnorePatterns):
+ return NotImplemented
+
return self.ignore_file_path == other.ignore_file_path
@@ -59,12 +63,21 @@ class DvcIgnoreDirs(DvcIgnore):
return dirs, files
+ def __hash__(self):
+ return hash(tuple(self.basenames))
+
+ def __eq__(self, other):
+ if not isinstance(other, DvcIgnoreDirs):
+ return NotImplemented
+
+ return self.basenames == other.basenames
+
class DvcIgnoreFilter(object):
def __init__(self, root_dir):
self.ignores = {DvcIgnoreDirs([".git", ".hg", ".dvc"])}
self._update(root_dir)
- for root, dirs, _ in os.walk(root_dir):
+ for root, dirs, _ in dvc_walk(root_dir, self):
for d in dirs:
self._update(os.path.join(root, d))
| dvc: .dvcignore trouble with nfs mounted directory
I have a large NFS mounted in a directory that I would like dvc to ignore.
Directory Structure:
```
directory
|___nfs
|___...
|___.dvc
|___.dvcignore
```
My *.dvcignore* has the following line:
`/nfs/` (I've tried `nfs/` and `nfs/*`)
The problem is that when I run `dvc status` or `dvc pull` the processes will just hang:
```
DEBUG: PRAGMA user_version;
DEBUG: fetched: [(3,)]
DEBUG: CREATE TABLE IF NOT EXISTS state (inode INTEGER PRIMARY KEY, mtime TEXT NOT NULL, size TEXT NOT NULL, md5 TEXT NOT NULL, timestamp TEXT NOT NULL)
DEBUG: CREATE TABLE IF NOT EXISTS state_info (count INTEGER)
DEBUG: CREATE TABLE IF NOT EXISTS link_state (path TEXT PRIMARY KEY, inode INTEGER NOT NULL, mtime TEXT NOT NULL)
DEBUG: INSERT OR IGNORE INTO state_info (count) SELECT 0 WHERE NOT EXISTS (SELECT * FROM state_info)
DEBUG: PRAGMA user_version = 3;
```
Here is the traceback from `KeyboardInterrupt`:
```
File "/home/ec2-user/app/proc/.env/lib/python3.7/site-packages/dvc/repo/__init__.py", line 499, in dvcignore
return DvcIgnoreFilter(self.root_dir)
File "/home/ec2-user/app/proc/.env/lib/python3.7/site-packages/dvc/ignore.py", line 67, in __init__
for root, dirs, _ in os.walk(root_dir):
File "/home/ec2-user/app/proc/.env/lib64/python3.7/os.py", line 410, in walk
yield from walk(new_path, topdown, onerror, followlinks)
File "/home/ec2-user/app/proc/.env/lib64/python3.7/os.py", line 368, in walk
is_dir = entry.is_dir()
```
Which makes me feel like the directory is not being ignored.
***Additonal***
I've unmounted the NFS directory and ran `dvc status` with no problem so I believe the issue stems from dvc trying to traverse it.
System Information:
```
DVC version: 0.66.6
Python version: 3.7.4
Platform: Linux 4.14.109-99.92.amzn2.x86_64
Installation: pip
```
| iterative/dvc | diff --git a/tests/func/test_ignore.py b/tests/func/test_ignore.py
index 2b5fec7e9..6e38c2b02 100644
--- a/tests/func/test_ignore.py
+++ b/tests/func/test_ignore.py
@@ -5,6 +5,9 @@ import pytest
from dvc.exceptions import DvcIgnoreInCollectedDirError
from dvc.ignore import DvcIgnore
+from dvc.ignore import DvcIgnoreDirs
+from dvc.ignore import DvcIgnoreFilter
+from dvc.ignore import DvcIgnorePatterns
from dvc.utils.compat import cast_bytes
from dvc.utils.fs import get_mtime_and_size
from tests.basic_env import TestDvc
@@ -131,3 +134,21 @@ def test_should_raise_on_dvcignore_in_out_dir(dvc_repo, repo_dir):
with pytest.raises(DvcIgnoreInCollectedDirError):
dvc_repo.add(repo_dir.DATA_DIR)
+
+
[email protected]("dname", [TestDvc.DATA_DIR, TestDvc.DATA_SUB_DIR])
+def test_ignore_collecting_dvcignores(repo_dir, dname):
+ top_ignore_file = os.path.join(
+ repo_dir.root_dir, os.path.dirname(dname), DvcIgnore.DVCIGNORE_FILE
+ )
+ repo_dir.create(top_ignore_file, os.path.basename(dname))
+
+ ignore_file = os.path.join(
+ repo_dir.root_dir, dname, DvcIgnore.DVCIGNORE_FILE
+ )
+ repo_dir.create(ignore_file, repo_dir.FOO)
+
+ assert DvcIgnoreFilter(repo_dir.root_dir).ignores == {
+ DvcIgnoreDirs([".git", ".hg", ".dvc"]),
+ DvcIgnorePatterns(top_ignore_file),
+ }
| {
"commit_name": "head_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 1
},
"num_modified_files": 1
} | 0.66 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-timeout",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"flaky",
"mock",
"xmltodict",
"awscli",
"google-compute-engine",
"Pygments",
"collective.checkdocs",
"flake8",
"psutil",
"flake8-docstrings",
"pydocstyle",
"jaraco.windows",
"mock-ssh-server",
"moto",
"rangehttpserver"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aliyun-python-sdk-core==2.16.0
aliyun-python-sdk-core-v3==2.13.33
aliyun-python-sdk-kms==2.16.5
appdirs==1.4.4
asciimatics==1.14.0
atpublic==3.1.2
attrs @ file:///croot/attrs_1668696182826/work
autocommand==2.2.2
awscli==1.31.13
azure-common==1.1.28
azure-storage-blob==2.1.0
azure-storage-common==2.1.0
bcrypt==4.2.1
boto==2.49.0
boto3==1.9.115
botocore==1.12.253
cachetools==4.2.4
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
charset-normalizer==3.4.1
collective.checkdocs==0.2
colorama==0.4.4
configobj==5.0.9
configparser==5.3.0
coverage==7.2.7
crcmod==1.7
cryptography==44.0.2
decorator==5.1.1
distro==1.9.0
docutils==0.15.2
-e git+https://github.com/iterative/dvc.git@5e2fa3742b7b98327e131fd7d0025371b5322d4f#egg=dvc
execnet==2.0.2
flake8==5.0.4
flake8-docstrings==1.7.0
flaky==3.8.1
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
flufl.lock==7.1.1
funcy==2.0
future==1.0.0
gitdb==4.0.12
gitdb2==4.0.2
GitPython==3.1.44
google-api-core==2.10.2
google-auth==1.35.0
google-cloud-core==1.7.3
google-cloud-storage==1.19.0
google-compute-engine==2.8.13
google-crc32c==1.5.0
google-resumable-media==2.7.2
googleapis-common-protos==1.69.2
grandalf==0.6
humanize==4.6.0
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.12.0
inflect==6.0.5
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
jaraco.classes==3.2.3
jaraco.collections==4.2.0
jaraco.context==4.3.0
jaraco.functools==3.7.0
jaraco.structures==2.1.0
jaraco.text==3.11.1
jaraco.ui==2.3.0
jaraco.windows==5.7.0
Jinja2==3.1.6
jmespath==0.10.0
jsonpath-ng==1.7.0
MarkupSafe==2.1.5
mccabe==0.7.0
mock==5.2.0
mock-ssh-server==0.9.1
more-itertools==9.1.0
moto==4.2.14
nanotime==0.5.2
networkx==2.3
numpy==1.21.6
oss2==2.6.1
packaging @ file:///croot/packaging_1671697413597/work
paramiko==3.5.1
path==16.6.0
pathspec==0.11.2
Pillow==9.5.0
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
ply==3.11
protobuf==4.24.4
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyarrow==0.14.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycodestyle==2.9.1
pycparser==2.21
pycryptodome==3.22.0
pydantic==1.10.21
pydocstyle==6.3.0
pyfiglet==0.8.post1
pyflakes==2.5.0
Pygments==2.17.2
PyNaCl==1.5.0
pyparsing==3.1.4
pytest==7.1.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-timeout==2.3.1
pytest-xdist==3.5.0
python-dateutil==2.9.0.post0
PyYAML==6.0.1
rangehttpserver==1.4.0
requests==2.31.0
responses==0.23.3
rsa==4.7.2
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.8
s3transfer==0.2.1
schema==0.7.7
shortuuid==1.0.13
six==1.17.0
smmap==5.0.2
snowballstemmer==2.2.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
tqdm==4.67.1
treelib==1.7.1
types-PyYAML==6.0.12.12
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
urllib3==1.25.11
wcwidth==0.2.13
Werkzeug==2.2.3
xmltodict==0.14.2
zipp @ file:///croot/zipp_1672387121353/work
| name: dvc
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- aliyun-python-sdk-core==2.16.0
- aliyun-python-sdk-core-v3==2.13.33
- aliyun-python-sdk-kms==2.16.5
- appdirs==1.4.4
- asciimatics==1.14.0
- atpublic==3.1.2
- autocommand==2.2.2
- awscli==1.31.13
- azure-common==1.1.28
- azure-storage-blob==2.1.0
- azure-storage-common==2.1.0
- bcrypt==4.2.1
- boto==2.49.0
- boto3==1.9.115
- botocore==1.12.253
- cachetools==4.2.4
- cffi==1.15.1
- charset-normalizer==3.4.1
- collective-checkdocs==0.2
- colorama==0.4.4
- configobj==5.0.9
- configparser==5.3.0
- coverage==7.2.7
- crcmod==1.7
- cryptography==44.0.2
- decorator==5.1.1
- distro==1.9.0
- docutils==0.15.2
- dvc==0.66.6+5e2fa3
- execnet==2.0.2
- flake8==5.0.4
- flake8-docstrings==1.7.0
- flaky==3.8.1
- flufl-lock==7.1.1
- funcy==2.0
- future==1.0.0
- gitdb==4.0.12
- gitdb2==4.0.2
- gitpython==3.1.44
- google-api-core==2.10.2
- google-auth==1.35.0
- google-cloud-core==1.7.3
- google-cloud-storage==1.19.0
- google-compute-engine==2.8.13
- google-crc32c==1.5.0
- google-resumable-media==2.7.2
- googleapis-common-protos==1.69.2
- grandalf==0.6
- humanize==4.6.0
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.12.0
- inflect==6.0.5
- jaraco-classes==3.2.3
- jaraco-collections==4.2.0
- jaraco-context==4.3.0
- jaraco-functools==3.7.0
- jaraco-structures==2.1.0
- jaraco-text==3.11.1
- jaraco-ui==2.3.0
- jaraco-windows==5.7.0
- jinja2==3.1.6
- jmespath==0.10.0
- jsonpath-ng==1.7.0
- markupsafe==2.1.5
- mccabe==0.7.0
- mock==5.2.0
- mock-ssh-server==0.9.1
- more-itertools==9.1.0
- moto==4.2.14
- nanotime==0.5.2
- networkx==2.3
- numpy==1.21.6
- oss2==2.6.1
- paramiko==3.5.1
- path==16.6.0
- pathspec==0.11.2
- pillow==9.5.0
- ply==3.11
- protobuf==4.24.4
- psutil==7.0.0
- pyarrow==0.14.0
- pyasn1==0.5.1
- pyasn1-modules==0.3.0
- pycodestyle==2.9.1
- pycparser==2.21
- pycryptodome==3.22.0
- pydantic==1.10.21
- pydocstyle==6.3.0
- pyfiglet==0.8.post1
- pyflakes==2.5.0
- pygments==2.17.2
- pynacl==1.5.0
- pyparsing==3.1.4
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-timeout==2.3.1
- pytest-xdist==3.5.0
- python-dateutil==2.9.0.post0
- pyyaml==6.0.1
- rangehttpserver==1.4.0
- requests==2.31.0
- responses==0.23.3
- rsa==4.7.2
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.8
- s3transfer==0.2.1
- schema==0.7.7
- shortuuid==1.0.13
- six==1.17.0
- smmap==5.0.2
- snowballstemmer==2.2.0
- tqdm==4.67.1
- treelib==1.7.1
- types-pyyaml==6.0.12.12
- urllib3==1.25.11
- wcwidth==0.2.13
- werkzeug==2.2.3
- xmltodict==0.14.2
prefix: /opt/conda/envs/dvc
| [
"tests/func/test_ignore.py::test_ignore_collecting_dvcignores[data_dir]",
"tests/func/test_ignore.py::test_ignore_collecting_dvcignores[data_dir/data_sub_dir]"
] | [
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_child_dir",
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_child_dir_unicode",
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_parent_dir"
] | [] | [] | Apache License 2.0 | 5,756 | 374 | [
"dvc/ignore.py"
] |
|
omry__omegaconf-57 | db6f59569e541db940b8042e82b97b572138b031 | 2019-11-06 00:57:55 | db6f59569e541db940b8042e82b97b572138b031 | diff --git a/omegaconf/config.py b/omegaconf/config.py
index 6b53109..3c80734 100644
--- a/omegaconf/config.py
+++ b/omegaconf/config.py
@@ -244,12 +244,15 @@ class Config(object):
def merge_with_dotlist(self, dotlist):
for arg in dotlist:
- args = arg.split("=")
- key = args[0]
- value = None
- if len(args) > 1:
- # load with yaml to get correct automatic typing with the same rules as yaml parsing
- value = yaml.load(args[1], Loader=get_yaml_loader())
+ idx = arg.find("=")
+ if idx == -1:
+ key = arg
+ value = None
+ else:
+ key = arg[0:idx]
+ value = arg[idx + 1 :]
+ value = yaml.load(value, Loader=get_yaml_loader())
+
self.update(key, value)
def update(self, key, value=None):
| Handle dotlist override with values containing =
See https://github.com/facebookresearch/hydra/issues/266 (Which is actually one bug in Hydra and one in OmegaConf). | omry/omegaconf | diff --git a/tests/test_update.py b/tests/test_update.py
index 0ade637..12fcacf 100644
--- a/tests/test_update.py
+++ b/tests/test_update.py
@@ -1,5 +1,6 @@
import sys
+import pytest
from pytest import raises
from omegaconf import MissingMandatoryValue
@@ -154,10 +155,18 @@ def test_update_list_make_dict():
assert c[1].b.b == "bb"
-def test_merge_with_dotlist():
- c = OmegaConf.create([1, 2, 3])
- c.merge_with_dotlist(["0=bar", "2.a=100"])
- assert c == ["bar", 2, dict(a=100)]
[email protected](
+ "cfg,overrides,expected",
+ [
+ ([1, 2, 3], ["0=bar", "2.a=100"], ["bar", 2, dict(a=100)]),
+ ({}, ["foo=bar", "bar=100"], {"foo": "bar", "bar": 100}),
+ ({}, ["foo=bar=10"], {"foo": "bar=10"}),
+ ],
+)
+def test_merge_with_dotlist(cfg, overrides, expected):
+ c = OmegaConf.create(cfg)
+ c.merge_with_dotlist(overrides)
+ assert c == expected
def test_merge_with_cli():
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_hyperlinks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 3,
"test_score": 0
},
"num_modified_files": 1
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | argcomplete==1.12.3
attrs==22.2.0
black==22.8.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
cfgv==3.3.1
charset-normalizer==2.0.12
click==8.0.4
colorama==0.4.5
colorlog==6.9.0
coverage==6.2
coveralls==3.3.1
cryptography==40.0.2
dataclasses==0.8
distlib==0.3.9
docopt==0.6.2
docutils==0.18.1
execnet==1.9.0
filelock==3.4.1
flake8==5.0.4
identify==2.4.4
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.2.3
iniconfig==1.1.1
jeepney==0.7.1
keyring==23.4.1
mccabe==0.7.0
mypy-extensions==1.0.0
nodeenv==1.6.0
nox==2022.1.7
-e git+https://github.com/omry/omegaconf.git@db6f59569e541db940b8042e82b97b572138b031#egg=omegaconf
packaging==21.3
pathspec==0.9.0
pkginfo==1.10.0
platformdirs==2.4.0
pluggy==1.0.0
pre-commit==2.17.0
py==1.11.0
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytest-asyncio==0.16.0
pytest-cov==4.0.0
pytest-mock==3.6.1
pytest-xdist==3.0.2
PyYAML==6.0.1
readme-renderer==34.0
requests==2.27.1
requests-toolbelt==1.0.0
rfc3986==1.5.0
SecretStorage==3.3.3
six==1.17.0
toml==0.10.2
tomli==1.2.3
tqdm==4.64.1
twine==3.8.0
typed-ast==1.5.5
typing_extensions==4.1.1
urllib3==1.26.20
virtualenv==20.16.2
webencodings==0.5.1
zipp==3.6.0
| name: omegaconf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- argcomplete==1.12.3
- attrs==22.2.0
- black==22.8.0
- bleach==4.1.0
- cffi==1.15.1
- cfgv==3.3.1
- charset-normalizer==2.0.12
- click==8.0.4
- colorama==0.4.5
- colorlog==6.9.0
- coverage==6.2
- coveralls==3.3.1
- cryptography==40.0.2
- dataclasses==0.8
- distlib==0.3.9
- docopt==0.6.2
- docutils==0.18.1
- execnet==1.9.0
- filelock==3.4.1
- flake8==5.0.4
- identify==2.4.4
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.2.3
- iniconfig==1.1.1
- jeepney==0.7.1
- keyring==23.4.1
- mccabe==0.7.0
- mypy-extensions==1.0.0
- nodeenv==1.6.0
- nox==2022.1.7
- packaging==21.3
- pathspec==0.9.0
- pkginfo==1.10.0
- platformdirs==2.4.0
- pluggy==1.0.0
- pre-commit==2.17.0
- py==1.11.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytest-asyncio==0.16.0
- pytest-cov==4.0.0
- pytest-mock==3.6.1
- pytest-xdist==3.0.2
- pyyaml==6.0.1
- readme-renderer==34.0
- requests==2.27.1
- requests-toolbelt==1.0.0
- rfc3986==1.5.0
- secretstorage==3.3.3
- six==1.17.0
- toml==0.10.2
- tomli==1.2.3
- tqdm==4.64.1
- twine==3.8.0
- typed-ast==1.5.5
- typing-extensions==4.1.1
- urllib3==1.26.20
- virtualenv==20.16.2
- webencodings==0.5.1
- zipp==3.6.0
prefix: /opt/conda/envs/omegaconf
| [
"tests/test_update.py::test_merge_with_dotlist[cfg2-overrides2-expected2]"
] | [] | [
"tests/test_update.py::test_update_map_value",
"tests/test_update.py::test_update_map_new_keyvalue",
"tests/test_update.py::test_update_map_to_value",
"tests/test_update.py::test_update_with_empty_map_value",
"tests/test_update.py::test_update_with_map_value",
"tests/test_update.py::test_update_deep_from_empty",
"tests/test_update.py::test_update_deep_with_map",
"tests/test_update.py::test_update_deep_with_value",
"tests/test_update.py::test_update_deep_with_map2",
"tests/test_update.py::test_update_deep_with_map_update",
"tests/test_update.py::test_list_value_update",
"tests/test_update.py::test_override_mandatory_value",
"tests/test_update.py::test_update_empty_to_value",
"tests/test_update.py::test_update_same_value",
"tests/test_update.py::test_update_value_to_map",
"tests/test_update.py::test_update_map_empty_to_map",
"tests/test_update.py::test_update_list",
"tests/test_update.py::test_update_nested_list",
"tests/test_update.py::test_update_list_make_dict",
"tests/test_update.py::test_merge_with_dotlist[cfg0-overrides0-expected0]",
"tests/test_update.py::test_merge_with_dotlist[cfg1-overrides1-expected1]",
"tests/test_update.py::test_merge_with_cli"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,758 | 246 | [
"omegaconf/config.py"
] |
|
google__budou-105 | 0bb774c1b0bf79aebb5b60917811157c49db6827 | 2019-11-06 06:35:08 | 0bb774c1b0bf79aebb5b60917811157c49db6827 | diff --git a/budou/budou.py b/budou/budou.py
index 7576020..de672b4 100644
--- a/budou/budou.py
+++ b/budou/budou.py
@@ -17,7 +17,7 @@
"""Budou: an automatic organizer tool for beautiful line breaking in CJK
Usage:
- budou [--segmenter=<seg>] [--language=<lang>] [--classname=<class>] [--inlinestyle] <source>
+ budou [--segmenter=<seg>] [--language=<lang>] [--classname=<class>] [--inlinestyle] [--wbr] <source>
budou -h | --help
budou -v | --version
@@ -36,6 +36,9 @@ Options:
--inlinestyle Add :code:`display:inline-block` as inline style
attribute.
+
+ --wbr User WBR tag for serialization instead of
+ inline-block SPAN tags.
"""
from __future__ import print_function
@@ -62,12 +65,14 @@ def main():
language=args['--language'],
classname=args['--classname'],
inlinestyle=args['--inlinestyle'],
+ wbr=args['--wbr'],
)
print(result['html_code'])
sys.exit()
def parse(source, segmenter='nlapi', language=None, max_length=None,
- classname=None, attributes=None, inlinestyle=False, **kwargs):
+ classname=None, attributes=None, inlinestyle=False, wbr=False,
+ **kwargs):
"""Parses input source.
Args:
@@ -79,6 +84,7 @@ def parse(source, segmenter='nlapi', language=None, max_length=None,
attributes (dict, optional): Attributes for output SPAN tags.
inlinestyle (bool, optional): Add :code:`display:inline-block` as inline
style attribute.
+ wbr (bool, optional): User WBR tag for serialization.
Returns:
Results in a dict. :code:`chunks` holds a list of chunks
@@ -88,7 +94,7 @@ def parse(source, segmenter='nlapi', language=None, max_length=None,
parser = get_parser(segmenter, **kwargs)
return parser.parse(
source, language=language, max_length=max_length, classname=classname,
- attributes=attributes, inlinestyle=inlinestyle)
+ attributes=attributes, inlinestyle=inlinestyle, wbr=wbr)
def authenticate(json_path=None):
"""Gets a Natural Language API parser by authenticating the API.
diff --git a/budou/chunk.py b/budou/chunk.py
index 72c1ff3..cf1f909 100644
--- a/budou/chunk.py
+++ b/budou/chunk.py
@@ -21,6 +21,10 @@ import collections
from xml.etree import ElementTree as ET
import unicodedata
import html5lib
+from html5lib import getTreeWalker
+from html5lib.filters import sanitizer
+from html5lib.constants import namespaces
+
class Chunk:
"""A unit for word segmentation.
@@ -271,7 +275,24 @@ class ChunkList(collections.MutableSequence):
target_chunks.append(chunk)
self.list = target_chunks
- def html_serialize(self, attributes, max_length=None):
+ def html_serialize(self, attributes, max_length=None, use_wbr=False):
+ """Returns concatenated HTML code with SPAN tag.
+
+ Args:
+ attributes (dict): A map of name-value pairs for attributes of output
+ SPAN tags.
+ max_length (int, optional): Maximum length of span enclosed chunk.
+ use_wbr (bool, optional): Use WBR tag to serialize the output.
+
+ Returns:
+ The organized HTML code. (str)
+ """
+ if use_wbr:
+ return self.wbr_serialize(max_length)
+ else:
+ return self.span_serialize(attributes, max_length)
+
+ def span_serialize(self, attributes, max_length=None):
"""Returns concatenated HTML code with SPAN tag.
Args:
@@ -309,3 +330,46 @@ class ChunkList(collections.MutableSequence):
html5lib.parseFragment(result), sanitize=True,
quote_attr_values='always')
return result
+
+ def wbr_serialize(self):
+ """Returns concatenated HTML code with WBR tag. This is still experimental.
+
+ Returns:
+ The organized HTML code. (str)
+ """
+ doc = ET.Element('span')
+ doc.attrib['style'] = 'word-break: keep-all'
+ for chunk in self:
+ if (chunk.has_cjk() and doc.text):
+ ele = ET.Element('wbr')
+ doc.append(ele)
+ doc.getchildren()[-1].tail = chunk.word
+ else:
+ # add word without span tag for non-CJK text (e.g. English)
+ # by appending it after the last element
+ if doc.getchildren():
+ if doc.getchildren()[-1].tail is None:
+ doc.getchildren()[-1].tail = chunk.word
+ else:
+ doc.getchildren()[-1].tail += chunk.word
+ else:
+ if doc.text is None:
+ doc.text = chunk.word
+ else:
+ doc.text += chunk.word
+ content = ET.tostring(doc, encoding='utf-8').decode('utf-8')
+ dom = html5lib.parseFragment(content)
+ treewalker = getTreeWalker('etree')
+ stream = treewalker(dom)
+ serializer = html5lib.serializer.HTMLSerializer(
+ quote_attr_values='always')
+ allowed_elements = set(sanitizer.allowed_elements)
+ allowed_elements.add((namespaces['html'], 'wbr'))
+ allowed_css_properties = set(sanitizer.allowed_css_properties)
+ allowed_css_properties.add('word-break')
+ result = serializer.render(sanitizer.Filter(
+ stream, allowed_elements=allowed_elements,
+ allowed_css_properties=allowed_css_properties,
+ ))
+ return result
+
diff --git a/budou/nlapisegmenter.py b/budou/nlapisegmenter.py
index de0bd78..93c98bf 100644
--- a/budou/nlapisegmenter.py
+++ b/budou/nlapisegmenter.py
@@ -171,7 +171,7 @@ class NLAPISegmenter(Segmenter):
ValueError: If :code:`language` is given and it is not included in
:code:`supported_languages`.
"""
- if language and not language in self.supported_languages:
+ if language and language not in self.supported_languages:
raise ValueError(
'Language {} is not supported by NLAPI segmenter'.format(language))
diff --git a/budou/parser.py b/budou/parser.py
index a8fa147..753f372 100644
--- a/budou/parser.py
+++ b/budou/parser.py
@@ -57,7 +57,7 @@ class Parser:
self.segmenter = None
def parse(self, source, language=None, classname=None, max_length=None,
- attributes=None, inlinestyle=False):
+ attributes=None, inlinestyle=False, wbr=False):
"""Parses the source sentence to output organized HTML code.
Args:
@@ -67,6 +67,7 @@ class Parser:
attributes (dict, optional): Attributes for output SPAN tags.
inlinestyle (bool, optional): Add :code:`display:inline-block` as inline
style attribute.
+ wbr (bool, optional): User WBR tag for serialization.
Returns:
A dictionary containing :code:`chunks` (:obj:`budou.chunk.ChunkList`)
@@ -75,7 +76,8 @@ class Parser:
attributes = parse_attributes(attributes, classname, inlinestyle)
source = preprocess(source)
chunks = self.segmenter.segment(source, language)
- html_code = chunks.html_serialize(attributes, max_length=max_length)
+ html_code = chunks.html_serialize(
+ attributes, max_length=max_length, use_wbr=wbr)
return {
'chunks': chunks,
'html_code': html_code,
diff --git a/poc.py b/poc.py
new file mode 100644
index 0000000..a167f67
--- /dev/null
+++ b/poc.py
@@ -0,0 +1,11 @@
+import html5lib
+from html5lib import getTreeWalker
+from html5lib.filters import sanitizer
+from html5lib.constants import namespaces, prefixes
+content = '<span>This is <wbr> not a cat</span>'
+dom = html5lib.parseFragment(content)
+treewalker = getTreeWalker('etree')
+serializer = html5lib.serializer.HTMLSerializer()
+stream = treewalker(dom)
+output = serializer.render(sanitizer.Filter(stream, allowed_elements=set([(namespaces['html'], 'span')])))
+print(output)
| Span, Zero-width space, or wbr elements?
I just learned about this nice piece of work. It is particularly useful for people with dyslexia.
In a meeting of the Japanese DAISY project for textbooks, we discussed how hints for line breaking should be represented. The use of span elements was suggested, but people do not want to use span elements for this purpose, because DAISY textbooks already use span elements heavily for multi-media synchronization. Thus, Keio Advanced Publishing Laboratory is inclined to adopt zero-width spaces or wbr elements instead. [Florian's personal draft](https://specs.rivoal.net/css-space-expansion/) is based on this assumption. See https://github.com/w3c/jlreq/issues/17
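A minimal sketch of the WBR-based approach discussed above, using only the standard library's ElementTree. The chunk words and the `span` wrapper mirror the patch's `wbr_serialize`, but this is an illustration rather than budou's actual serializer (the real implementation additionally runs the output through html5lib's sanitizer with `wbr` and `word-break` whitelisted):

```python
from xml.etree import ElementTree as ET

# Hypothetical chunk list; budou would produce these via a segmenter.
chunks = ["今日は", "ご飯を", "食べます。"]

doc = ET.Element("span")
doc.set("style", "word-break: keep-all")
for word in chunks:
    if doc.text is None:
        # The first chunk becomes the span's text.
        doc.text = word
    else:
        # Later chunks are separated by <wbr> break opportunities.
        wbr = ET.SubElement(doc, "wbr")
        wbr.tail = word

html = ET.tostring(doc, encoding="unicode")
# -> '<span style="word-break: keep-all">今日は<wbr />ご飯を<wbr />食べます。</span>'
```

With `word-break: keep-all` on the wrapper, the browser only breaks at the `<wbr>` markers, which is the effect the zero-width-space/wbr proposal is after.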
| google/budou | diff --git a/tests/test_chunk.py b/tests/test_chunk.py
index 0163f9a..5754101 100644
--- a/tests/test_chunk.py
+++ b/tests/test_chunk.py
@@ -114,7 +114,7 @@ class TestChunkList(unittest.TestCase):
[u'これが', '\n', 'Android'], [chunk.word for chunk in chunks],
'Trailing spaces in CJK chunk should be converted to breaklines.')
- def test_html_serialize(self):
+ def test_span_serialize(self):
chunks = ChunkList(Chunk('Hello'), Chunk.space(), Chunk(u'今天'),
Chunk(u'天气'), Chunk(u'很好'))
attributes = {
@@ -127,7 +127,7 @@ class TestChunkList(unittest.TestCase):
u'<span class="foo">天气</span>'
u'<span class="foo">很好</span>'
'</span>')
- result = chunks.html_serialize(attributes, None)
+ result = chunks.span_serialize(attributes, None)
self.assertEqual(
result, expected,
'The chunks should be compiled to a HTML code.')
@@ -140,7 +140,7 @@ class TestChunkList(unittest.TestCase):
expected = ('<span>'
'Hey<<script>alert(1)</script>>guys'
'</span>')
- result = chunks.html_serialize(attributes, None)
+ result = chunks.span_serialize(attributes, None)
self.assertEqual(
result, expected, 'HTML tags included in a chunk should be encoded.')
@@ -155,7 +155,7 @@ class TestChunkList(unittest.TestCase):
u'インフルエンザに'
u'<span class="foo">かかった。</span>'
'</span>')
- result = chunks.html_serialize(attributes, 6)
+ result = chunks.span_serialize(attributes, 6)
self.assertEqual(
result, expected,
'Chunks that exceed the max length should not be enclosed by a span.')
@@ -163,5 +163,15 @@ class TestChunkList(unittest.TestCase):
# TODO (tushuhei) Check if TypeError is raised when any instance but Chunk
# is given to the list.
+ def test_wbr_serialize(self):
+ chunks = ChunkList(Chunk(u'今日は'), Chunk(u'ご飯を'), Chunk(u'食べます。'))
+ result = chunks.wbr_serialize()
+ expected = (
+ '<span style="word-break: keep-all;">'
+ u'今日は<wbr></wbr>ご飯を<wbr></wbr>食べます。'
+ '</span>')
+ self.assertEqual(
+ result, expected, 'Chunks should be separated by WBR tags.')
+
if __name__ == '__main__':
unittest.main()
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_added_files",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 4
} | 0.9 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"sphinx",
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs==22.2.0
Babel==2.11.0
-e git+https://github.com/google/budou.git@0bb774c1b0bf79aebb5b60917811157c49db6827#egg=budou
cachetools==4.2.4
certifi==2021.5.30
chardet==5.0.0
charset-normalizer==2.0.12
docopt==0.6.2
docutils==0.18.1
future==1.0.0
google-api-core==2.8.2
google-api-python-client==2.52.0
google-auth==2.22.0
google-auth-httplib2==0.2.0
googleapis-common-protos==1.56.3
html5lib==1.0.1
httplib2==0.22.0
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==3.0.3
MarkupSafe==2.0.1
packaging==21.3
pluggy==1.0.0
protobuf==3.19.6
py==1.11.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
Pygments==2.14.0
pyparsing==3.1.4
pytest==7.0.1
pytz==2025.2
requests==2.27.1
rsa==4.9
six==1.17.0
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
tinysegmenter3==0.1.0
tomli==1.2.3
typing_extensions==4.1.1
uritemplate==4.1.1
urllib3==1.26.20
webencodings==0.5.1
zipp==3.6.0
| name: budou
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- attrs==22.2.0
- babel==2.11.0
- cachetools==4.2.4
- chardet==5.0.0
- charset-normalizer==2.0.12
- docopt==0.6.2
- docutils==0.18.1
- future==1.0.0
- google-api-core==2.8.2
- google-api-python-client==2.52.0
- google-auth==2.22.0
- google-auth-httplib2==0.2.0
- googleapis-common-protos==1.56.3
- html5lib==1.0.1
- httplib2==0.22.0
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==3.0.3
- markupsafe==2.0.1
- packaging==21.3
- pluggy==1.0.0
- protobuf==3.19.6
- py==1.11.0
- pyasn1==0.5.1
- pyasn1-modules==0.3.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytz==2025.2
- requests==2.27.1
- rsa==4.9
- six==1.17.0
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- tinysegmenter3==0.1.0
- tomli==1.2.3
- typing-extensions==4.1.1
- uritemplate==4.1.1
- urllib3==1.26.20
- webencodings==0.5.1
- zipp==3.6.0
prefix: /opt/conda/envs/budou
| [
"tests/test_chunk.py::TestChunkList::test_span_serialize",
"tests/test_chunk.py::TestChunkList::test_wbr_serialize"
] | [] | [
"tests/test_chunk.py::TestChunk::test_has_cjk",
"tests/test_chunk.py::TestChunk::test_is_open_punct",
"tests/test_chunk.py::TestChunk::test_is_punt",
"tests/test_chunk.py::TestChunkList::test_concatenate_inner",
"tests/test_chunk.py::TestChunkList::test_get_overlaps",
"tests/test_chunk.py::TestChunkList::test_insert_breaklines",
"tests/test_chunk.py::TestChunkList::test_swap"
] | [] | Apache License 2.0 | 5,759 | 2,076 | [
"budou/budou.py",
"budou/chunk.py",
"budou/nlapisegmenter.py",
"budou/parser.py"
] |
|
NeuralEnsemble__python-neo-764 | 03bef5fe3f654df24e8f66ee3d07fc6aabef6479 | 2019-11-07 07:46:24 | e7aeb27b8716a224e62f8e7e5848546bdb23b735 | diff --git a/neo/core/spiketrain.py b/neo/core/spiketrain.py
index d863abb9..20a87594 100644
--- a/neo/core/spiketrain.py
+++ b/neo/core/spiketrain.py
@@ -38,10 +38,13 @@ def check_has_dimensions_time(*values):
'''
errmsgs = []
for value in values:
- dim = value.dimensionality
- if (len(dim) != 1 or list(dim.values())[0] != 1 or not isinstance(list(dim.keys())[0],
- pq.UnitTime)):
- errmsgs.append("value {} has dimensions {}, not [time]".format(value, dim.simplified))
+ dim = value.dimensionality.simplified
+ if (len(dim) != 1 or
+ list(dim.values())[0] != 1 or not
+ isinstance(list(dim.keys())[0], pq.UnitTime)):
+ errmsgs.append(
+ "value {} has dimensions {}, not [time]".format(
+ value, dim))
if errmsgs:
raise ValueError("\n".join(errmsgs))
| Regression in the `check_has_dimensions_time` method of the SpikeTrain class for Compound Units
Calling
`neo.SpikeTrain([1,2,3]*pq.CompoundUnit("1/10*s"), t_start=0*pq.ms, t_stop=10000*pq.s)-5*pq.CompoundUnit("1/100*s")`
used to raise
`ValueError: value 5.0 has dimensions s, not [time]`
due to incorrect handling of CompoundUnits -- these need to be simplified first in order to check consistency.
This fixes the issue by an additional `.simplified` call, making the statement above return
`<SpikeTrain(array([0.5, 1.5, 2.5]) * (1/10*s), [-0.49999999999999994 (1/10*s), 99999.5 (1/10*s)])>`
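A small sketch of why the `.simplified` call is needed before inspecting the dimensionality (this assumes the `quantities` package, as used by neo; the values are illustrative):

```python
import quantities as pq

q = [1, 2, 3] * pq.CompoundUnit("1/10*s")

raw_key = list(q.dimensionality.keys())[0]
simplified = q.dimensionality.simplified
simple_key = list(simplified.keys())[0]

# The raw dimensionality is keyed by the CompoundUnit itself, which is
# not a pq.UnitTime subclass, so the old isinstance check rejected it.
assert not isinstance(raw_key, pq.UnitTime)

# Simplifying first collapses the key to the base time unit (seconds),
# which is what check_has_dimensions_time expects.
assert isinstance(simple_key, pq.UnitTime)
assert list(simplified.values())[0] == 1
```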
| NeuralEnsemble/python-neo | diff --git a/neo/test/coretest/test_spiketrain.py b/neo/test/coretest/test_spiketrain.py
index b089ea24..325f9bc3 100644
--- a/neo/test/coretest/test_spiketrain.py
+++ b/neo/test/coretest/test_spiketrain.py
@@ -143,6 +143,12 @@ class Testcheck_has_dimensions_time(unittest.TestCase):
check_has_dimensions_time(d)
self.assertRaises(ValueError, check_has_dimensions_time, a, b, c, d)
+ # Regression test for #763
+ # This test ensures the function works for compound units
+ def test__check_has_dimensions_time_compound_unit(self):
+ a = np.arange(3) * pq.CompoundUnit("1/10*s")
+ check_has_dimensions_time(a)
+
class Testcheck_time_in_range(unittest.TestCase):
def test__check_time_in_range_empty_array(self):
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 1
} | 0.8 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
certifi==2021.5.30
importlib-metadata==4.8.3
iniconfig==1.1.1
-e git+https://github.com/NeuralEnsemble/python-neo.git@03bef5fe3f654df24e8f66ee3d07fc6aabef6479#egg=neo
nose==1.3.7
numpy==1.19.5
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.1.4
pytest==7.0.1
quantities==0.13.0
tomli==1.2.3
typing_extensions==4.1.1
zipp==3.6.0
| name: python-neo
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- nose==1.3.7
- numpy==1.19.5
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pyparsing==3.1.4
- pytest==7.0.1
- quantities==0.13.0
- tomli==1.2.3
- typing-extensions==4.1.1
- zipp==3.6.0
prefix: /opt/conda/envs/python-neo
| [
"neo/test/coretest/test_spiketrain.py::Testcheck_has_dimensions_time::test__check_has_dimensions_time_compound_unit"
] | [] | [
"neo/test/coretest/test_spiketrain.py::Test__generate_datasets::test__fake_neo__cascade",
"neo/test/coretest/test_spiketrain.py::Test__generate_datasets::test__fake_neo__nocascade",
"neo/test/coretest/test_spiketrain.py::Test__generate_datasets::test__get_fake_values",
"neo/test/coretest/test_spiketrain.py::Testcheck_has_dimensions_time::test__check_has_dimensions_time",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_above",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_above_below",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_above_below_scale",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_above_scale",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_below",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_below_scale",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_empty_array",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_empty_array_invalid_t_stop",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_exact",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_inside",
"neo/test/coretest/test_spiketrain.py::Testcheck_time_in_range::test__check_time_in_range_scale",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_empty",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_empty_no_t_start",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_array",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_array_no_start_stop_units",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_array_no_start_stop_units_set_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_array_no_start_stop_units_with_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_array_set_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_array_with_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_array_with_incompatible_units_ValueError",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_array_without_units_should_raise_ValueError",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_list",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_list_no_start_stop_units",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_list_no_start_stop_units_set_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_list_set_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_list_without_units_should_raise_ValueError",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_no_start_stop_units",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_no_start_stop_units_set_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_no_start_stop_units_with_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_set_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_units",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_units_no_start_stop_units",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_units_set_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_units_with_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_array_with_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_from_quantity_units_no_start_stop_units_set_dtype",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_minimal",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_with_len_times_different_size_than_waveform_shape1_ValueError",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test__create_with_times_outside_tstart_tstop_ValueError",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test_default_tstart",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test_defaults",
"neo/test/coretest/test_spiketrain.py::TestConstructor::test_tstop_units_conversion",
"neo/test/coretest/test_spiketrain.py::TestSorting::test_sort",
"neo/test/coretest/test_spiketrain.py::TestSlice::test_compliant",
"neo/test/coretest/test_spiketrain.py::TestSlice::test_slice",
"neo/test/coretest/test_spiketrain.py::TestSlice::test_slice_from_beginning",
"neo/test/coretest/test_spiketrain.py::TestSlice::test_slice_negative_idxs",
"neo/test/coretest/test_spiketrain.py::TestSlice::test_slice_to_end",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test__deepcopy_should_set_parents_objects_to_None",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test__time_slice_deepcopy_annotations",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test__time_slice_deepcopy_array_annotations",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test__time_slice_deepcopy_data",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test__time_slice_should_set_parents_to_None",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_compliant",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_time_slice_differnt_units",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_time_slice_empty",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_time_slice_matching_ends",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_time_slice_none_both",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_time_slice_none_start",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_time_slice_none_stop",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_time_slice_out_of_boundries",
"neo/test/coretest/test_spiketrain.py::TestTimeSlice::test_time_slice_typical",
"neo/test/coretest/test_spiketrain.py::TestTimeShift::test__time_shift_by_zero",
"neo/test/coretest/test_spiketrain.py::TestTimeShift::test__time_shift_different_units",
"neo/test/coretest/test_spiketrain.py::TestTimeShift::test__time_shift_same_annotations",
"neo/test/coretest/test_spiketrain.py::TestTimeShift::test__time_shift_same_array_annotations",
"neo/test/coretest/test_spiketrain.py::TestTimeShift::test__time_shift_same_attributes",
"neo/test/coretest/test_spiketrain.py::TestTimeShift::test__time_shift_same_units",
"neo/test/coretest/test_spiketrain.py::TestTimeShift::test__time_shift_should_set_parents_to_None",
"neo/test/coretest/test_spiketrain.py::TestTimeShift::test_compliant",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_compliant",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_correct_shape",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_correct_times",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_incompatible_t_start",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_merge_typical",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_merge_with_waveforms",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_missing_waveforms_error",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_neo_relations",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_rescaling_units",
"neo/test/coretest/test_spiketrain.py::TestMerge::test_sampling_rate",
"neo/test/coretest/test_spiketrain.py::TestDuplicateWithNewData::test_deep_copy_attributes",
"neo/test/coretest/test_spiketrain.py::TestDuplicateWithNewData::test_duplicate_with_new_data",
"neo/test/coretest/test_spiketrain.py::TestAttributesAnnotations::test_annotations",
"neo/test/coretest/test_spiketrain.py::TestAttributesAnnotations::test_array_annotations",
"neo/test/coretest/test_spiketrain.py::TestAttributesAnnotations::test_autoset_universally_recommended_attributes",
"neo/test/coretest/test_spiketrain.py::TestAttributesAnnotations::test_set_universally_recommended_attributes",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__adding_time_array",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__adding_time_scalar",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__adding_two_spike_trains",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__changing_multiple_spiketimes",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__changing_multiple_spiketimes_should_check_time_in_range",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__changing_spiketime_should_check_time_in_range",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__rescale",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__rescale_incompatible_units_ValueError",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__rescale_same_units",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__subtracting_time_array",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__subtracting_time_scalar",
"neo/test/coretest/test_spiketrain.py::TestChanging::test__subtracting_two_spike_trains",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_default",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_default_and_data_not_quantity",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_false",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_false_and_data_not_quantity",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_false_and_dtype_change",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_false_and_fake_rescale",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_false_and_rescale_true",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_true",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_change_with_copy_true_and_data_not_quantity",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_changing_slice_changes_original_spiketrain",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_changing_slice_changes_original_spiketrain_with_copy_false",
"neo/test/coretest/test_spiketrain.py::TestChanging::test_init_with_rescale",
"neo/test/coretest/test_spiketrain.py::TestPropertiesMethods::test__children",
"neo/test/coretest/test_spiketrain.py::TestPropertiesMethods::test__compliant",
"neo/test/coretest/test_spiketrain.py::TestPropertiesMethods::test__duration",
"neo/test/coretest/test_spiketrain.py::TestPropertiesMethods::test__repr",
"neo/test/coretest/test_spiketrain.py::TestPropertiesMethods::test__right_sweep",
"neo/test/coretest/test_spiketrain.py::TestPropertiesMethods::test__sampling_period",
"neo/test/coretest/test_spiketrain.py::TestPropertiesMethods::test__spike_duration",
"neo/test/coretest/test_spiketrain.py::TestPropertiesMethods::test__times",
"neo/test/coretest/test_spiketrain.py::TestMiscellaneous::test__different_dtype_for_t_start_and_array",
"neo/test/coretest/test_spiketrain.py::TestMiscellaneous::test_as_array",
"neo/test/coretest/test_spiketrain.py::TestMiscellaneous::test_as_quantity"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,764 | 257 | [
"neo/core/spiketrain.py"
] |
|
iterative__dvc-2751 | c60b82d31d91fe29b661e144c70548a4f18d30d9 | 2019-11-07 14:00:26 | 143cbdc828bbb90e764d657e8506c17564708a20 | diff --git a/dvc/ignore.py b/dvc/ignore.py
index 3d111e868..fbfcff3ac 100644
--- a/dvc/ignore.py
+++ b/dvc/ignore.py
@@ -6,9 +6,7 @@ import os
from pathspec import PathSpec
from pathspec.patterns import GitWildMatchPattern
-from dvc.utils import dvc_walk
from dvc.utils import relpath
-from dvc.utils.compat import open
logger = logging.getLogger(__name__)
@@ -21,13 +19,13 @@ class DvcIgnore(object):
class DvcIgnorePatterns(DvcIgnore):
- def __init__(self, ignore_file_path):
+ def __init__(self, ignore_file_path, tree):
assert os.path.isabs(ignore_file_path)
self.ignore_file_path = ignore_file_path
self.dirname = os.path.normpath(os.path.dirname(ignore_file_path))
- with open(ignore_file_path, encoding="utf-8") as fobj:
+ with tree.open(ignore_file_path, encoding="utf-8") as fobj:
self.ignore_spec = PathSpec.from_lines(GitWildMatchPattern, fobj)
def __call__(self, root, dirs, files):
@@ -74,17 +72,18 @@ class DvcIgnoreDirs(DvcIgnore):
class DvcIgnoreFilter(object):
- def __init__(self, root_dir):
+ def __init__(self, root_dir, tree):
+ self.tree = tree
self.ignores = {DvcIgnoreDirs([".git", ".hg", ".dvc"])}
self._update(root_dir)
- for root, dirs, _ in dvc_walk(root_dir, self):
+ for root, dirs, _ in self.tree.walk(root_dir, dvcignore=self):
for d in dirs:
self._update(os.path.join(root, d))
def _update(self, dirname):
ignore_file_path = os.path.join(dirname, DvcIgnore.DVCIGNORE_FILE)
- if os.path.exists(ignore_file_path):
- self.ignores.add(DvcIgnorePatterns(ignore_file_path))
+ if self.tree.exists(ignore_file_path):
+ self.ignores.add(DvcIgnorePatterns(ignore_file_path, self.tree))
def __call__(self, root, dirs, files):
for ignore in self.ignores:
diff --git a/dvc/repo/__init__.py b/dvc/repo/__init__.py
index 19d862a24..e0003b808 100644
--- a/dvc/repo/__init__.py
+++ b/dvc/repo/__init__.py
@@ -496,7 +496,7 @@ class Repo(object):
@cached_property
def dvcignore(self):
- return DvcIgnoreFilter(self.root_dir)
+ return DvcIgnoreFilter(self.root_dir, self.tree)
def close(self):
self.scm.close()
| dvcignore: not using tree
https://github.com/iterative/dvc/blob/0.66.6/dvc/ignore.py#L67
It is using os.walk, which means that no matter which branch `repo.tree` is checked out to, we are still using dvcignores from the current workspace. We need to make it use `repo.tree`, probably by passing `self` instead of `self.root_dir` here: https://github.com/iterative/dvc/blob/0.66.6/dvc/repo/__init__.py#L499
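A stand-alone sketch of the idea behind the fix (the names `DictTree` and `collect_ignore_patterns` are illustrative, not dvc's API): routing `.dvcignore` lookups through a tree object means a git-revision tree can supply that revision's ignore file instead of whatever happens to be in the workspace:

```python
import io
import os


class DictTree:
    """Toy stand-in for dvc's GitTree: file contents come from a
    mapping (think: a git revision) rather than the working directory."""

    def __init__(self, files):
        self.files = files

    def exists(self, path):
        return path in self.files

    def open(self, path, encoding=None):
        return io.StringIO(self.files[path])


def collect_ignore_patterns(root, tree, ignore_name=".dvcignore"):
    # Mirrors DvcIgnoreFilter._update: read the ignore file via the
    # tree, not via the working directory's open().
    path = os.path.join(root, ignore_name)
    if not tree.exists(path):
        return []
    with tree.open(path, encoding="utf-8") as fobj:
        return [line.strip() for line in fobj if line.strip()]


master = DictTree({})  # this revision has no .dvcignore
branch = DictTree({os.path.join("repo", ".dvcignore"): "data/data_sub\n"})

assert collect_ignore_patterns("repo", master) == []
assert collect_ignore_patterns("repo", branch) == ["data/data_sub"]
```

With os.walk and a plain `open()`, both calls would read the same workspace file; passing the tree is what makes the ignore collection revision-aware.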
Discovered during the investigation for https://github.com/iterative/dvc/issues/2737 | iterative/dvc | diff --git a/tests/func/test_ignore.py b/tests/func/test_ignore.py
index 6e38c2b02..b09a06a78 100644
--- a/tests/func/test_ignore.py
+++ b/tests/func/test_ignore.py
@@ -1,3 +1,4 @@
+import itertools
import os
import shutil
@@ -8,6 +9,7 @@ from dvc.ignore import DvcIgnore
from dvc.ignore import DvcIgnoreDirs
from dvc.ignore import DvcIgnoreFilter
from dvc.ignore import DvcIgnorePatterns
+from dvc.scm.tree import WorkingTree
from dvc.utils.compat import cast_bytes
from dvc.utils.fs import get_mtime_and_size
from tests.basic_env import TestDvc
@@ -148,7 +150,37 @@ def test_ignore_collecting_dvcignores(repo_dir, dname):
)
repo_dir.create(ignore_file, repo_dir.FOO)
- assert DvcIgnoreFilter(repo_dir.root_dir).ignores == {
+ assert DvcIgnoreFilter(
+ repo_dir.root_dir, WorkingTree(repo_dir.root_dir)
+ ).ignores == {
DvcIgnoreDirs([".git", ".hg", ".dvc"]),
- DvcIgnorePatterns(top_ignore_file),
+ DvcIgnorePatterns(top_ignore_file, WorkingTree(repo_dir.root_dir)),
}
+
+
+def test_ignore_on_branch(git, dvc_repo, repo_dir):
+ dvc_repo.add(repo_dir.DATA_DIR)
+ dvc_repo.scm.commit("add data dir")
+
+ branch_name = "branch_one"
+ dvc_repo.scm.checkout(branch_name, create_new=True)
+
+ repo_dir.create(DvcIgnore.DVCIGNORE_FILE, to_posixpath(repo_dir.DATA_SUB))
+ dvc_repo.scm.add([DvcIgnore.DVCIGNORE_FILE])
+ git.index.commit("add ignore")
+
+ dvc_repo.scm.checkout("master")
+
+ git_tree = dvc_repo.scm.get_tree(branch_name)
+ branch_data_files = set(
+ itertools.chain.from_iterable(
+ [
+ files
+ for _, _, files in dvc_repo.tree.walk(
+ repo_dir.DATA_DIR,
+ dvcignore=DvcIgnoreFilter(repo_dir.root_dir, git_tree),
+ )
+ ]
+ )
+ )
+ assert branch_data_files == {"data"}
diff --git a/tests/func/test_tree.py b/tests/func/test_tree.py
index 8b351f757..c2ae316c2 100644
--- a/tests/func/test_tree.py
+++ b/tests/func/test_tree.py
@@ -108,8 +108,8 @@ class AssertWalkEqualMixin(object):
class TestWalkInNoSCM(AssertWalkEqualMixin, TestDir):
def test(self):
- dvcignore = DvcIgnoreFilter(self.root_dir)
tree = WorkingTree(self._root_dir)
+ dvcignore = DvcIgnoreFilter(self.root_dir, tree)
self.assertWalkEqual(
tree.walk(self._root_dir, dvcignore=dvcignore),
[
@@ -128,8 +128,8 @@ class TestWalkInNoSCM(AssertWalkEqualMixin, TestDir):
)
def test_subdir(self):
- dvcignore = DvcIgnoreFilter(self.root_dir)
tree = WorkingTree(self._root_dir)
+ dvcignore = DvcIgnoreFilter(self.root_dir, tree)
self.assertWalkEqual(
tree.walk(join("data_dir", "data_sub_dir"), dvcignore=dvcignore),
[
@@ -145,7 +145,7 @@ class TestWalkInNoSCM(AssertWalkEqualMixin, TestDir):
class TestWalkInGit(AssertWalkEqualMixin, TestGit):
def test_nobranch(self):
tree = WorkingTree(self._root_dir)
- dvcignore = DvcIgnoreFilter(self._root_dir)
+ dvcignore = DvcIgnoreFilter(self._root_dir, tree)
self.assertWalkEqual(
tree.walk(".", dvcignore=dvcignore),
[
diff --git a/tests/unit/test_ignore.py b/tests/unit/test_ignore.py
index 0eed6adc2..373f07424 100644
--- a/tests/unit/test_ignore.py
+++ b/tests/unit/test_ignore.py
@@ -1,20 +1,18 @@
import os
import pytest
+from mock import MagicMock
from mock import mock_open
from mock import patch
-import dvc
from dvc.ignore import DvcIgnoreDirs
from dvc.ignore import DvcIgnorePatterns
def mock_dvcignore(dvcignore_path, patterns):
-
- with patch.object(
- dvc.ignore, "open", mock_open(read_data="\n".join(patterns))
- ):
- ignore_patterns = DvcIgnorePatterns(dvcignore_path)
+ tree = MagicMock()
+ with patch.object(tree, "open", mock_open(read_data="\n".join(patterns))):
+ ignore_patterns = DvcIgnorePatterns(dvcignore_path, tree)
return ignore_patterns
diff --git a/tests/unit/utils/test_fs.py b/tests/unit/utils/test_fs.py
index 2cf460037..e5a18ba9c 100644
--- a/tests/unit/utils/test_fs.py
+++ b/tests/unit/utils/test_fs.py
@@ -7,6 +7,7 @@ from mock import patch
import dvc
from dvc.ignore import DvcIgnoreFilter
from dvc.path_info import PathInfo
+from dvc.scm.tree import WorkingTree
from dvc.system import System
from dvc.utils import relpath
from dvc.utils.compat import str
@@ -20,7 +21,7 @@ from tests.utils import spy
class TestMtimeAndSize(TestDir):
def test(self):
- dvcignore = DvcIgnoreFilter(self.root_dir)
+ dvcignore = DvcIgnoreFilter(self.root_dir, WorkingTree(self.root_dir))
file_time, file_size = get_mtime_and_size(self.DATA, dvcignore)
dir_time, dir_size = get_mtime_and_size(self.DATA_DIR, dvcignore)
@@ -124,7 +125,9 @@ def test_get_inode(repo_dir):
def test_path_object_and_str_are_valid_types_get_mtime_and_size(
path, repo_dir
):
- dvcignore = DvcIgnoreFilter(repo_dir.root_dir)
+ dvcignore = DvcIgnoreFilter(
+ repo_dir.root_dir, WorkingTree(repo_dir.root_dir)
+ )
time, size = get_mtime_and_size(path, dvcignore)
object_time, object_size = get_mtime_and_size(PathInfo(path), dvcignore)
assert time == object_time
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 2
} | 0.66 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-timeout",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"flaky",
"mock",
"xmltodict",
"awscli",
"google-compute-engine",
"Pygments",
"collective.checkdocs",
"flake8",
"psutil",
"flake8-docstrings",
"pydocstyle",
"jaraco.windows",
"mock-ssh-server",
"moto",
"rangehttpserver"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aliyun-python-sdk-core==2.16.0
aliyun-python-sdk-core-v3==2.13.33
aliyun-python-sdk-kms==2.16.5
appdirs==1.4.4
asciimatics==1.14.0
atpublic==3.1.2
attrs @ file:///croot/attrs_1668696182826/work
autocommand==2.2.2
awscli==1.31.13
azure-common==1.1.28
azure-storage-blob==2.1.0
azure-storage-common==2.1.0
bcrypt==4.2.1
boto==2.49.0
boto3==1.9.115
botocore==1.12.253
cachetools==4.2.4
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
charset-normalizer==3.4.1
collective.checkdocs==0.2
colorama==0.4.4
configobj==5.0.9
configparser==5.3.0
coverage==7.2.7
crcmod==1.7
cryptography==44.0.2
decorator==5.1.1
distro==1.9.0
docutils==0.15.2
-e git+https://github.com/iterative/dvc.git@c60b82d31d91fe29b661e144c70548a4f18d30d9#egg=dvc
execnet==2.0.2
flake8==5.0.4
flake8-docstrings==1.7.0
flaky==3.8.1
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
flufl.lock==7.1.1
funcy==2.0
future==1.0.0
gitdb==4.0.12
gitdb2==4.0.2
GitPython==3.1.44
google-api-core==2.10.2
google-auth==1.35.0
google-cloud-core==1.7.3
google-cloud-storage==1.19.0
google-compute-engine==2.8.13
google-crc32c==1.5.0
google-resumable-media==2.7.2
googleapis-common-protos==1.69.2
grandalf==0.6
humanize==4.6.0
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.12.0
inflect==6.0.5
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
jaraco.classes==3.2.3
jaraco.collections==4.2.0
jaraco.context==4.3.0
jaraco.functools==3.7.0
jaraco.structures==2.1.0
jaraco.text==3.11.1
jaraco.ui==2.3.0
jaraco.windows==5.7.0
Jinja2==3.1.6
jmespath==0.10.0
jsonpath-ng==1.7.0
MarkupSafe==2.1.5
mccabe==0.7.0
mock==5.2.0
mock-ssh-server==0.9.1
more-itertools==9.1.0
moto==4.2.14
nanotime==0.5.2
networkx==2.3
numpy==1.21.6
oss2==2.6.1
packaging @ file:///croot/packaging_1671697413597/work
paramiko==3.5.1
path==16.6.0
pathspec==0.11.2
Pillow==9.5.0
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
ply==3.11
protobuf==4.24.4
psutil==7.0.0
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyarrow==0.14.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycodestyle==2.9.1
pycparser==2.21
pycryptodome==3.22.0
pydantic==1.10.21
pydocstyle==6.3.0
pyfiglet==0.8.post1
pyflakes==2.5.0
Pygments==2.17.2
PyNaCl==1.5.0
pyparsing==3.1.4
pytest==7.1.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-timeout==2.3.1
pytest-xdist==3.5.0
python-dateutil==2.9.0.post0
PyYAML==6.0.1
rangehttpserver==1.4.0
requests==2.31.0
responses==0.23.3
rsa==4.7.2
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.8
s3transfer==0.2.1
schema==0.7.7
shortuuid==1.0.13
six==1.17.0
smmap==5.0.2
snowballstemmer==2.2.0
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
tqdm==4.67.1
treelib==1.7.1
types-PyYAML==6.0.12.12
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
urllib3==1.25.11
wcwidth==0.2.13
Werkzeug==2.2.3
xmltodict==0.14.2
zipp @ file:///croot/zipp_1672387121353/work
| name: dvc
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- aliyun-python-sdk-core==2.16.0
- aliyun-python-sdk-core-v3==2.13.33
- aliyun-python-sdk-kms==2.16.5
- appdirs==1.4.4
- asciimatics==1.14.0
- atpublic==3.1.2
- autocommand==2.2.2
- awscli==1.31.13
- azure-common==1.1.28
- azure-storage-blob==2.1.0
- azure-storage-common==2.1.0
- bcrypt==4.2.1
- boto==2.49.0
- boto3==1.9.115
- botocore==1.12.253
- cachetools==4.2.4
- cffi==1.15.1
- charset-normalizer==3.4.1
- collective-checkdocs==0.2
- colorama==0.4.4
- configobj==5.0.9
- configparser==5.3.0
- coverage==7.2.7
- crcmod==1.7
- cryptography==44.0.2
- decorator==5.1.1
- distro==1.9.0
- docutils==0.15.2
- dvc==0.66.9
- execnet==2.0.2
- flake8==5.0.4
- flake8-docstrings==1.7.0
- flaky==3.8.1
- flufl-lock==7.1.1
- funcy==2.0
- future==1.0.0
- gitdb==4.0.12
- gitdb2==4.0.2
- gitpython==3.1.44
- google-api-core==2.10.2
- google-auth==1.35.0
- google-cloud-core==1.7.3
- google-cloud-storage==1.19.0
- google-compute-engine==2.8.13
- google-crc32c==1.5.0
- google-resumable-media==2.7.2
- googleapis-common-protos==1.69.2
- grandalf==0.6
- humanize==4.6.0
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.12.0
- inflect==6.0.5
- jaraco-classes==3.2.3
- jaraco-collections==4.2.0
- jaraco-context==4.3.0
- jaraco-functools==3.7.0
- jaraco-structures==2.1.0
- jaraco-text==3.11.1
- jaraco-ui==2.3.0
- jaraco-windows==5.7.0
- jinja2==3.1.6
- jmespath==0.10.0
- jsonpath-ng==1.7.0
- markupsafe==2.1.5
- mccabe==0.7.0
- mock==5.2.0
- mock-ssh-server==0.9.1
- more-itertools==9.1.0
- moto==4.2.14
- nanotime==0.5.2
- networkx==2.3
- numpy==1.21.6
- oss2==2.6.1
- paramiko==3.5.1
- path==16.6.0
- pathspec==0.11.2
- pillow==9.5.0
- ply==3.11
- protobuf==4.24.4
- psutil==7.0.0
- pyarrow==0.14.0
- pyasn1==0.5.1
- pyasn1-modules==0.3.0
- pycodestyle==2.9.1
- pycparser==2.21
- pycryptodome==3.22.0
- pydantic==1.10.21
- pydocstyle==6.3.0
- pyfiglet==0.8.post1
- pyflakes==2.5.0
- pygments==2.17.2
- pynacl==1.5.0
- pyparsing==3.1.4
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-timeout==2.3.1
- pytest-xdist==3.5.0
- python-dateutil==2.9.0.post0
- pyyaml==6.0.1
- rangehttpserver==1.4.0
- requests==2.31.0
- responses==0.23.3
- rsa==4.7.2
- ruamel-yaml==0.18.10
- ruamel-yaml-clib==0.2.8
- s3transfer==0.2.1
- schema==0.7.7
- shortuuid==1.0.13
- six==1.17.0
- smmap==5.0.2
- snowballstemmer==2.2.0
- tqdm==4.67.1
- treelib==1.7.1
- types-pyyaml==6.0.12.12
- urllib3==1.25.11
- wcwidth==0.2.13
- werkzeug==2.2.3
- xmltodict==0.14.2
prefix: /opt/conda/envs/dvc
| [
"tests/func/test_ignore.py::test_ignore_collecting_dvcignores[data_dir]",
"tests/func/test_ignore.py::test_ignore_collecting_dvcignores[data_dir/data_sub_dir]",
"tests/func/test_tree.py::TestWalkInNoSCM::test",
"tests/func/test_tree.py::TestWalkInNoSCM::test_subdir",
"tests/func/test_tree.py::TestWalkInGit::test_nobranch",
"tests/unit/test_ignore.py::test_ignore_from_file_should_filter_dirs_and_files",
"tests/unit/test_ignore.py::test_match_ignore_from_file[to_ignore-patterns0-True]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[to_ignore.txt-patterns1-True]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[rel/p/p2/to_ignore-patterns2-True]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[/full/path/to/ignore/file/to_ignore-patterns3-True]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[to_ignore.txt-patterns4-True]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[rel/path/path2/to_ignore-patterns5-False]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[path/to_ignore.txt-patterns6-False]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[rel/path/path2/dont_ignore-patterns7-False]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[dont_ignore.txt-patterns8-False]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[dont_ignore.txt-patterns9-False]",
"tests/unit/test_ignore.py::test_match_ignore_from_file[../../../something.txt-patterns10-False]",
"tests/unit/utils/test_fs.py::TestMtimeAndSize::test",
"tests/unit/utils/test_fs.py::test_path_object_and_str_are_valid_types_get_mtime_and_size[data_dir/data]",
"tests/unit/utils/test_fs.py::test_path_object_and_str_are_valid_types_get_mtime_and_size[data_dir]"
] | [
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_child_dir",
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_child_dir_unicode",
"tests/func/test_ignore.py::TestDvcIgnore::test_ignore_in_parent_dir"
] | [
"tests/func/test_tree.py::TestWorkingTree::test_exists",
"tests/func/test_tree.py::TestWorkingTree::test_isdir",
"tests/func/test_tree.py::TestWorkingTree::test_isfile",
"tests/func/test_tree.py::TestWorkingTree::test_open",
"tests/func/test_tree.py::TestGitTree::test_exists",
"tests/func/test_tree.py::TestGitTree::test_isdir",
"tests/func/test_tree.py::TestGitTree::test_isfile",
"tests/func/test_tree.py::TestGitTree::test_open",
"tests/func/test_tree.py::TestGitSubmoduleTree::test_exists",
"tests/func/test_tree.py::TestGitSubmoduleTree::test_isdir",
"tests/func/test_tree.py::TestGitSubmoduleTree::test_isfile",
"tests/func/test_tree.py::TestGitSubmoduleTree::test_open",
"tests/func/test_tree.py::TestWalkInGit::test_branch",
"tests/unit/test_ignore.py::test_should_ignore_dir[.git]",
"tests/unit/test_ignore.py::test_should_ignore_dir[.hg]",
"tests/unit/test_ignore.py::test_should_ignore_dir[.dvc]",
"tests/unit/utils/test_fs.py::TestContainsLink::test_path_object_and_str_are_valid_arg_types",
"tests/unit/utils/test_fs.py::TestContainsLink::test_should_call_recursive_on_no_condition_matched",
"tests/unit/utils/test_fs.py::TestContainsLink::test_should_raise_exception_on_base_path_not_in_path",
"tests/unit/utils/test_fs.py::TestContainsLink::test_should_return_false_on_no_more_dirs_below_path",
"tests/unit/utils/test_fs.py::TestContainsLink::test_should_return_false_on_path_eq_to_base_path",
"tests/unit/utils/test_fs.py::TestContainsLink::test_should_return_false_when_base_path_is_symlink",
"tests/unit/utils/test_fs.py::TestContainsLink::test_should_return_true_on_symlink_in_path",
"tests/unit/utils/test_fs.py::test_get_inode"
] | [] | Apache License 2.0 | 5,767 | 673 | [
"dvc/ignore.py",
"dvc/repo/__init__.py"
] |
|
hdmf-dev__hdmf-194 | 9f9edf89ebae063fd27e0c8e2da42b0e46adcc2a | 2019-11-07 22:34:28 | ee1684a8a4ba8a4d70fb5ba4e78e1998d92c8ba1 | codecov[bot]: # [Codecov](https://codecov.io/gh/hdmf-dev/hdmf/pull/194?src=pr&el=h1) Report
> Merging [#194](https://codecov.io/gh/hdmf-dev/hdmf/pull/194?src=pr&el=desc) into [dev](https://codecov.io/gh/hdmf-dev/hdmf/commit/dd0b4666c6155d2859eb16151510b333df07f152?src=pr&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `90%`.
[](https://codecov.io/gh/hdmf-dev/hdmf/pull/194?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## dev #194 +/- ##
==========================================
+ Coverage 70.42% 70.45% +0.03%
==========================================
Files 30 30
Lines 5907 5914 +7
Branches 1382 1386 +4
==========================================
+ Hits 4160 4167 +7
Misses 1316 1316
Partials 431 431
```
| [Impacted Files](https://codecov.io/gh/hdmf-dev/hdmf/pull/194?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/hdmf/build/map.py](https://codecov.io/gh/hdmf-dev/hdmf/pull/194/diff?src=pr&el=tree#diff-c3JjL2hkbWYvYnVpbGQvbWFwLnB5) | `67.61% <100%> (ø)` | :arrow_up: |
| [src/hdmf/utils.py](https://codecov.io/gh/hdmf-dev/hdmf/pull/194/diff?src=pr&el=tree#diff-c3JjL2hkbWYvdXRpbHMucHk=) | `89.61% <87.5%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/hdmf-dev/hdmf/pull/194?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/hdmf-dev/hdmf/pull/194?src=pr&el=footer). Last update [dd0b466...da00e0f](https://codecov.io/gh/hdmf-dev/hdmf/pull/194?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/src/hdmf/build/builders.py b/src/hdmf/build/builders.py
index e92e481..8bc6631 100644
--- a/src/hdmf/build/builders.py
+++ b/src/hdmf/build/builders.py
@@ -272,9 +272,9 @@ class GroupBuilder(BaseBuilder):
returns='the DatasetBuilder object for the dataset', rtype='DatasetBuilder')
def add_dataset(self, **kwargs):
''' Create a dataset and add it to this group '''
+ kwargs['parent'] = self
+ kwargs['source'] = self.source
pargs, pkwargs = fmt_docval_args(DatasetBuilder.__init__, kwargs)
- pkwargs['parent'] = self
- pkwargs['source'] = self.source
builder = DatasetBuilder(*pargs, **pkwargs)
self.set_dataset(builder)
return builder
diff --git a/src/hdmf/build/map.py b/src/hdmf/build/map.py
index a5104af..2fb4e74 100644
--- a/src/hdmf/build/map.py
+++ b/src/hdmf/build/map.py
@@ -7,7 +7,7 @@ from copy import copy, deepcopy
from datetime import datetime
from six import with_metaclass, raise_from, text_type, binary_type, integer_types
-from ..utils import docval, getargs, ExtenderMeta, get_docval, fmt_docval_args, call_docval_func
+from ..utils import docval, getargs, ExtenderMeta, get_docval, call_docval_func, fmt_docval_args
from ..container import AbstractContainer, Container, Data, DataRegion
from ..spec import Spec, AttributeSpec, DatasetSpec, GroupSpec, LinkSpec, NAME_WILDCARD, NamespaceCatalog, RefSpec,\
SpecReader
@@ -1448,15 +1448,17 @@ class TypeMap(object):
fields.append({'name': f, 'child': True})
else:
fields.append(f)
- if name is not None:
+
+ if name is not None: # fixed name is specified in spec, remove it from docval args
docval_args = filter(lambda x: x['name'] != 'name', docval_args)
@docval(*docval_args)
def __init__(self, **kwargs):
- pargs, pkwargs = fmt_docval_args(base.__init__, kwargs)
if name is not None:
- pkwargs.update(name=name)
- base.__init__(self, *pargs, **pkwargs)
+ kwargs.update(name=name)
+ pargs, pkwargs = fmt_docval_args(base.__init__, kwargs)
+ base.__init__(self, *pargs, **pkwargs) # special case: need to pass self to __init__
+
for f in new_args:
arg_val = kwargs.get(f, None)
if arg_val is not None:
diff --git a/src/hdmf/container.py b/src/hdmf/container.py
index 856e1e5..2831aa8 100644
--- a/src/hdmf/container.py
+++ b/src/hdmf/container.py
@@ -226,7 +226,7 @@ class AbstractContainer(with_metaclass(ExtenderMeta, object)):
parent_container.__children.append(self)
parent_container.set_modified()
else:
- self.__parent.add_candidate(parent_container, self)
+ self.__parent.add_candidate(parent_container)
else:
self.__parent = parent_container
if isinstance(parent_container, Container):
diff --git a/src/hdmf/monitor.py b/src/hdmf/monitor.py
index 6fa9b95..0c49a42 100644
--- a/src/hdmf/monitor.py
+++ b/src/hdmf/monitor.py
@@ -1,7 +1,7 @@
from abc import ABCMeta, abstractmethod
import six
-from .utils import docval, getargs, fmt_docval_args
+from .utils import docval, getargs, call_docval_func
from .data_utils import AbstractDataChunkIterator, DataChunkIterator, DataChunk
@@ -62,8 +62,7 @@ class DataChunkProcessor(AbstractDataChunkIterator):
class NumSampleCounter(DataChunkProcessor):
def __init__(self, **kwargs):
- args, kwargs = fmt_docval_args(DataChunkProcessor.__init__, kwargs)
- super(NumSampleCounter, self).__init__(*args, **kwargs)
+ call_docval_func(super(NumSampleCounter, self).__init__, kwargs)
self.__sample_count = 0
@docval({'name': 'data_chunk', 'type': DataChunk, 'doc': 'a chunk to process'})
diff --git a/src/hdmf/utils.py b/src/hdmf/utils.py
index c4f7516..289c182 100644
--- a/src/hdmf/utils.py
+++ b/src/hdmf/utils.py
@@ -128,6 +128,8 @@ def __parse_args(validator, args, kwargs, enforce_type=True, enforce_shape=True,
:param enforce_type: Boolean indicating whether the type of arguments should be enforced
:param enforce_shape: Boolean indicating whether the dimensions of array arguments
should be enforced if possible.
+ :param allow_extra: Boolean indicating whether extra keyword arguments are allowed (if False and extra keyword
+ arguments are specified, then an error is raised).
:return: Dict with:
* 'args' : Dict all arguments where keys are the names and values are the values of the arguments.
@@ -145,22 +147,36 @@ def __parse_args(validator, args, kwargs, enforce_type=True, enforce_shape=True,
if duplicated:
raise ValueError('The following names are duplicated: {}'.format(duplicated))
try:
+ if allow_extra: # extra keyword arguments are allowed so do not consider them when checking number of args
+ # verify only that the number of positional args is <= number of docval specified args
+ if len(args) > len(validator):
+ raise TypeError('Expected at most %s arguments, got %s' % (len(validator), len(args)))
+ else: # verify that the number of positional args + keyword args is <= number of docval specified args
+ if (len(args) + len(kwargs)) > len(validator):
+ raise TypeError('Expected at most %s arguments, got %s' % (len(validator), len(args) + len(kwargs)))
+
+ # iterate through the docval specification and find a matching value in args / kwargs
it = iter(validator)
arg = next(it)
+
# catch unsupported keys
allowable_terms = ('name', 'doc', 'type', 'shape', 'default', 'help')
unsupported_terms = set(arg.keys()) - set(allowable_terms)
if unsupported_terms:
raise ValueError('docval for {}: {} are not supported by docval'.format(arg['name'],
list(unsupported_terms)))
- # process positional arguments
+ # process positional arguments of the docval specification (no default value)
while True:
- #
if 'default' in arg:
break
argname = arg['name']
argval_set = False
if argname in kwargs:
+ # if this positional arg is specified by a keyword arg and there are remaining positional args that
+ # have not yet been matched, then it is undetermined what those positional args match to. thus, raise
+ # an error
+ if argsi < len(args):
+ type_errors.append("got multiple values for argument '%s'" % argname)
argval = kwargs.get(argname)
extras.pop(argname, None)
argval_set = True
@@ -171,36 +187,35 @@ def __parse_args(validator, args, kwargs, enforce_type=True, enforce_shape=True,
if not argval_set:
type_errors.append("missing argument '%s'" % argname)
else:
- if argname in ret:
- type_errors.append("'got multiple arguments for '%s" % argname)
- else:
- if enforce_type:
- if not __type_okay(argval, arg['type']):
- if argval is None:
- fmt_val = (argname, __format_type(arg['type']))
- type_errors.append("None is not allowed for '%s' (expected '%s', not None)" % fmt_val)
- else:
- fmt_val = (argname, type(argval).__name__, __format_type(arg['type']))
- type_errors.append("incorrect type for '%s' (got '%s', expected '%s')" % fmt_val)
- if enforce_shape and 'shape' in arg:
+ if enforce_type:
+ if not __type_okay(argval, arg['type']):
+ if argval is None:
+ fmt_val = (argname, __format_type(arg['type']))
+ type_errors.append("None is not allowed for '%s' (expected '%s', not None)" % fmt_val)
+ else:
+ fmt_val = (argname, type(argval).__name__, __format_type(arg['type']))
+ type_errors.append("incorrect type for '%s' (got '%s', expected '%s')" % fmt_val)
+ if enforce_shape and 'shape' in arg:
+ valshape = get_data_shape(argval)
+ while valshape is None:
+ if argval is None:
+ break
+ if not hasattr(argval, argname):
+ fmt_val = (argval, argname, arg['shape'])
+ value_errors.append("cannot check shape of object '%s' for argument '%s' "
+ "(expected shape '%s')" % fmt_val)
+ break
+ # unpack, e.g. if TimeSeries is passed for arg 'data', then TimeSeries.data is checked
+ argval = getattr(argval, argname)
valshape = get_data_shape(argval)
- while valshape is None:
- if argval is None:
- break
- if not hasattr(argval, argname):
- fmt_val = (argval, argname, arg['shape'])
- value_errors.append("cannot check shape of object '%s' for argument '%s' "
- "(expected shape '%s')" % fmt_val)
- break
- # unpack, e.g. if TimeSeries is passed for arg 'data', then TimeSeries.data is checked
- argval = getattr(argval, argname)
- valshape = get_data_shape(argval)
- if valshape is not None and not __shape_okay_multi(argval, arg['shape']):
- fmt_val = (argname, valshape, arg['shape'])
- value_errors.append("incorrect shape for '%s' (got '%s', expected '%s')" % fmt_val)
- ret[argname] = argval
+ if valshape is not None and not __shape_okay_multi(argval, arg['shape']):
+ fmt_val = (argname, valshape, arg['shape'])
+ value_errors.append("incorrect shape for '%s' (got '%s', expected '%s')" % fmt_val)
+ ret[argname] = argval
argsi += 1
arg = next(it)
+
+ # process arguments of the docval specification with a default value
while True:
argname = arg['name']
if argname in kwargs:
| Extra/duplicate args allowed in docval
1. If docval specifies three arguments and the user provides four positional arguments, docval should throw an error.
2. In Python, the below code throws an error:
```python
>>> def fn(a, b):
... print(a, b)
...
>>> fn(1, 2, a=3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: fn() got multiple values for argument 'a'
```
Here, a passed keyword argument captures a positional argument, thus making the other values uninterpretable. docval should throw a similar error but doesn't. The analogous docval definition of the above function would set a = 3, b = 2 and ignore the first argument (1).
This lack of errors is relevant for when function headers of an API are changed and users are unaware that their API calls no longer work as intended. | hdmf-dev/hdmf | diff --git a/tests/unit/test_io_hdf5_h5tools.py b/tests/unit/test_io_hdf5_h5tools.py
index cd52436..ad37fbb 100644
--- a/tests/unit/test_io_hdf5_h5tools.py
+++ b/tests/unit/test_io_hdf5_h5tools.py
@@ -1034,7 +1034,7 @@ class HDF5IOReadData(unittest.TestCase):
self.path = get_temp_filepath()
foo1 = Foo('foo1', [0, 1, 2, 3, 4], "I am foo1", 17, 3.14)
bucket1 = FooBucket('test_bucket1', [foo1])
- self.foofile1 = FooFile('test_foofile1', buckets=[bucket1])
+ self.foofile1 = FooFile(buckets=[bucket1])
with HDF5IO(self.path, manager=_get_manager(), mode='w') as temp_io:
temp_io.write(self.foofile1)
@@ -1069,7 +1069,7 @@ class HDF5IOWriteNoFile(unittest.TestCase):
def setUp(self):
foo1 = Foo('foo1', [0, 1, 2, 3, 4], "I am foo1", 17, 3.14)
bucket1 = FooBucket('test_bucket1', [foo1])
- self.foofile1 = FooFile('test_foofile1', buckets=[bucket1])
+ self.foofile1 = FooFile(buckets=[bucket1])
self.path = 'test_write_nofile.h5'
def tearDown(self):
diff --git a/tests/unit/utils_test/test_docval.py b/tests/unit/utils_test/test_docval.py
index 62d19fa..ebe23d1 100644
--- a/tests/unit/utils_test/test_docval.py
+++ b/tests/unit/utils_test/test_docval.py
@@ -31,6 +31,13 @@ class MyTestClass(object):
def basic_only_kw(self, **kwargs):
return kwargs
+ @docval({'name': 'arg1', 'type': str, 'doc': 'argument1 is a str'},
+ {'name': 'arg2', 'type': 'int', 'doc': 'argument2 is a int'},
+ {'name': 'arg3', 'type': bool, 'doc': 'argument3 is a bool. it defaults to False', 'default': False},
+ allow_extra=True)
+ def basic_add2_kw_allow_extra(self, **kwargs):
+ return kwargs
+
class MyTestSubclass(MyTestClass):
@@ -350,6 +357,57 @@ class TestDocValidator(unittest.TestCase):
with self.assertRaises(TypeError):
self.test_obj.basic_add2_kw('a string', 100, bar=1000)
+ def test_extra_args_pos_only(self):
+ """Test that docval raises an error if too many positional
+ arguments are specified
+ """
+ with self.assertRaisesRegex(TypeError, r'Expected at most 3 arguments, got 4'):
+ self.test_obj.basic_add2_kw('a string', 100, True, 'extra')
+
+ def test_extra_args_pos_kw(self):
+ """Test that docval raises an error if too many positional
+ arguments are specified and a keyword arg is specified
+ """
+ with self.assertRaisesRegex(TypeError, r'Expected at most 3 arguments, got 4'):
+ self.test_obj.basic_add2_kw('a string', 'extra', 100, arg3=True)
+
+ def test_extra_kwargs_pos_kw(self):
+ """Test that docval raises an error if extra keyword
+ arguments are specified
+ """
+ with self.assertRaisesRegex(TypeError, r'Expected at most 3 arguments, got 4'):
+ self.test_obj.basic_add2_kw('a string', 100, extra='extra', arg3=True)
+
+ def test_extra_args_pos_only_ok(self):
+ """Test that docval raises an error if too many positional
+ arguments are specified even if allow_extra is True
+ """
+ with self.assertRaisesRegex(TypeError, r'Expected at most 3 arguments, got 4'):
+ self.test_obj.basic_add2_kw_allow_extra('a string', 100, True, 'extra', extra='extra')
+
+ def test_extra_args_pos_kw_ok(self):
+ """Test that docval does not raise an error if too many
+ keyword arguments are specified and allow_extra is True
+ """
+ kwargs = self.test_obj.basic_add2_kw_allow_extra('a string', 100, True, extra='extra')
+ self.assertDictEqual(kwargs, {'arg1': 'a string', 'arg2': 100, 'arg3': True, 'extra': 'extra'})
+
+ def test_dup_kw(self):
+ """Test that docval raises an error if a keyword argument
+ captures a positional argument before all positional
+ arguments have been resolved
+ """
+ with self.assertRaisesRegex(TypeError, r"got multiple values for argument 'arg1'"):
+ self.test_obj.basic_add2_kw('a string', 100, arg1='extra')
+
+ def test_extra_args_dup_kw(self):
+ """Test that docval raises an error if a keyword argument
+ captures a positional argument before all positional
+ arguments have been resolved and allow_extra is True
+ """
+ with self.assertRaisesRegex(TypeError, r"got multiple values for argument 'arg1'"):
+ self.test_obj.basic_add2_kw_allow_extra('a string', 100, True, arg1='extra')
+
def test_unsupported_docval_term(self):
"""Test that docval does not allow setting of arguments
marked as unsupported
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 2
},
"num_modified_files": 5
} | 1.3 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"coverage"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.7",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | certifi @ file:///croot/certifi_1671487769961/work/certifi
chardet==3.0.4
coverage==7.2.7
exceptiongroup==1.2.2
h5py==2.10.0
-e git+https://github.com/hdmf-dev/hdmf.git@9f9edf89ebae063fd27e0c8e2da42b0e46adcc2a#egg=hdmf
importlib-metadata==6.7.0
iniconfig==2.0.0
numpy==1.17.2
packaging==24.0
pandas==0.25.1
pluggy==1.2.0
pytest==7.4.4
python-dateutil==2.9.0.post0
pytz==2025.2
ruamel.yaml==0.16.5
ruamel.yaml.clib==0.2.8
scipy==1.3.1
six==1.12.0
tomli==2.0.1
typing_extensions==4.7.1
zipp==3.15.0
| name: hdmf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- chardet==3.0.4
- coverage==7.2.7
- exceptiongroup==1.2.2
- h5py==2.10.0
- importlib-metadata==6.7.0
- iniconfig==2.0.0
- numpy==1.17.2
- packaging==24.0
- pandas==0.25.1
- pluggy==1.2.0
- pytest==7.4.4
- python-dateutil==2.9.0.post0
- pytz==2025.2
- ruamel-yaml==0.16.5
- ruamel-yaml-clib==0.2.8
- scipy==1.3.1
- six==1.12.0
- tomli==2.0.1
- typing-extensions==4.7.1
- zipp==3.15.0
prefix: /opt/conda/envs/hdmf
| [
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_dup_kw",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_extra_args_dup_kw",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_extra_args_pos_kw",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_extra_args_pos_only",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_extra_args_pos_only_ok",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_extra_kwargs_pos_kw"
] | [] | [
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test__chunked_iter_fill",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_copy_h5py_dataset_h5dataio_input",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_copy_h5py_dataset_input",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_dci_h5dataset",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_dci_h5dataset_scalar",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_dci_h5dataset_sparse_matched",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_dci_h5dataset_sparse_unmatched",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_error_on_unsupported_compression_filter",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_h5dataio_array_conversion_datachunkiterator",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_h5dataio_array_conversion_list",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_h5dataio_array_conversion_numpy",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_link_h5py_dataset_h5dataio_input",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_link_h5py_dataset_input",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_list_fill_empty",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_list_fill_empty_no_dtype",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_pass_through_of_recommended_chunks",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_value_error_on_incompatible_compression_opts",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_warning_on_linking_of_regular_array",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_warning_on_non_gzip_compression",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_warning_on_setting_io_options_on_h5dataset_input",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_data_chunk_iterator",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_data_chunk_iterator_with_compression",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_iterable",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_iterable_multidimensional_array",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_iterable_multidimensional_array_compression",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_list",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_list_chunked",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_list_compress_gzip",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_list_compress_lzf",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_list_compress_szip",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_list_disable_default_compress",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_list_enable_default_compress",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_list_fillvalue",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_scalar",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_dataset_string",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_multi_dci_conc",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_multi_dci_oaat",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_table",
"tests/unit/test_io_hdf5_h5tools.py::H5IOTest::test_write_table_nested",
"tests/unit/test_io_hdf5_h5tools.py::TestRoundTrip::test_roundtrip_basic",
"tests/unit/test_io_hdf5_h5tools.py::TestRoundTrip::test_roundtrip_empty_dataset",
"tests/unit/test_io_hdf5_h5tools.py::TestRoundTrip::test_roundtrip_empty_group",
"tests/unit/test_io_hdf5_h5tools.py::TestHDF5IO::test_constructor",
"tests/unit/test_io_hdf5_h5tools.py::TestHDF5IO::test_set_file_mismatch",
"tests/unit/test_io_hdf5_h5tools.py::TestCacheSpec::test_cache_spec",
"tests/unit/test_io_hdf5_h5tools.py::TestCacheSpec::test_double_cache_spec",
"tests/unit/test_io_hdf5_h5tools.py::TestNoCacheSpec::test_no_cache_spec",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOMultiFileTest::test_copy_file_with_external_links",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOInitNoFileTest::test_init_no_file_ok",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOInitNoFileTest::test_init_no_file_r",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOInitNoFileTest::test_init_no_file_rplus",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOInitFileExistsTest::test_init_file_exists_ok",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOInitFileExistsTest::test_init_wminus_file_exists",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOInitFileExistsTest::test_init_x_file_exists",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOReadNoDataTest::test_read_no_data_a",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOReadNoDataTest::test_read_no_data_r",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOReadNoDataTest::test_read_no_data_rplus",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOReadData::test_read_file_ok",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOReadData::test_read_file_w",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOWriteNoFile::test_write_no_file_a_ok",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOWriteNoFile::test_write_no_file_w_ok",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOWriteNoFile::test_write_no_file_wminus_ok",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOWriteNoFile::test_write_no_file_x_ok",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOWriteFileExists::test_write_a",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOWriteFileExists::test_write_r",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOWriteFileExists::test_write_rplus",
"tests/unit/test_io_hdf5_h5tools.py::HDF5IOWriteFileExists::test_write_w",
"tests/unit/test_io_hdf5_h5tools.py::H5DataIOValid::test_link",
"tests/unit/test_io_hdf5_h5tools.py::H5DataIOValid::test_read_valid",
"tests/unit/test_io_hdf5_h5tools.py::H5DataIOValid::test_valid",
"tests/unit/test_io_hdf5_h5tools.py::TestReadLink::test_link_to_link",
"tests/unit/test_io_hdf5_h5tools.py::TestReadLink::test_set_link_loc",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_bad_shape",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_bad_type",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_catch_duplicate_names",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_all_kw_syntax",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_default",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_default_sub",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_default_sub_missing_args",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_kw_syntax",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_kwsyntax_sub",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_kwsyntax_sub_missing_args",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_kwsyntax_sub_nonetype_arg",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_pos_syntax",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_kw_pos_syntax_missing_args",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_pos_as_kw",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_text_type_w_str",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add2_text_type_w_unicode",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add_kw",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add_missing_args",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_docval_add_sub",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_extra_args_pos_kw_ok",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_extra_kwarg",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_fmt_docval_args",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_get_docval_all",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_get_docval_missing_arg",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_get_docval_missing_arg_of_many_ok",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_get_docval_missing_args",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_get_docval_none",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_get_docval_none_arg",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_get_docval_one_arg",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_get_docval_two_args",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_multi_shape",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_only_kw_arg1_arg2",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_only_kw_arg1_arg2_pos",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_only_kw_arg1_no_arg2",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_only_kw_arg1_pos_no_arg2",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_only_kw_arg2_no_arg1",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_only_kw_no_args",
"tests/unit/utils_test/test_docval.py::TestDocValidator::test_unsupported_docval_term",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_shape_invalid_unpack",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_shape_invalid_unpack_default",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_shape_none_unpack",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_shape_none_unpack_default",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_shape_other_unpack",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_shape_other_unpack_default",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_shape_valid_unpack",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_shape_valid_unpack_default",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_type_arg",
"tests/unit/utils_test/test_docval.py::TestDocValidatorChain::test_type_arg_wrong_type"
] | [] | BSD-3-Clause | 5,769 | 2,572 | [
"src/hdmf/build/builders.py",
"src/hdmf/build/map.py",
"src/hdmf/container.py",
"src/hdmf/monitor.py",
"src/hdmf/utils.py"
] |
jriddy__zerial-36 | 30867c7d8f6565c180bb34560a5f3b2393896c8a | 2019-11-08 20:08:59 | 30867c7d8f6565c180bb34560a5f3b2393896c8a | diff --git a/src/zerial/numpy.py b/src/zerial/numpy.py
index e5f936e..5431318 100644
--- a/src/zerial/numpy.py
+++ b/src/zerial/numpy.py
@@ -33,8 +33,6 @@ if sys.version_info >= (3, 0):
return x.decode()
else:
return str(x)
-
- buffer_to_ndarray = np.frombuffer
else:
numpy_required_passthrus = DEFAULT_PASSTHRUS
@@ -47,14 +45,14 @@ else:
num_to_bytes = lambda obj: memoryview(obj.data)
- buffer_to_ndarray = lambda v, *xs, **kw: np.frombuffer(
- v.tobytes() if isinstance(v, memoryview) else v,
- *xs,
- **kw
- )
+ tostr = lambda x: x
- def tostr(x):
- return x
+
+def buffer_to_ndarray(v, *xs, **kw):
+ v = v.tobytes() if isinstance(v, memoryview) else v
+ # We have to copy to get an "owned", writable array
+ # TODO: find a better way around this that isn't as expensive
+ return np.frombuffer(v, *xs, **kw).copy()
@attr.s
| Numpy arrays coming from serialization are not writable
Looks like `np.frombuffer` keeps the `owndata` flag as `False`, which prevents the array from becoming writable. | jriddy/zerial | diff --git a/tests/test_numpy.py b/tests/test_numpy.py
new file mode 100644
index 0000000..9f46692
--- /dev/null
+++ b/tests/test_numpy.py
@@ -0,0 +1,82 @@
+import pytest
+
+import attr
+import numpy as np
+
+from zerial.numpy import NdarrayEncoder, the_numpy_structurer
+
+
[email protected]
+def encoder():
+ return NdarrayEncoder()
+
+
[email protected]
+def npztr():
+ return the_numpy_structurer
+
+
+def test_encoder_barfs_on_refcounted_python_objects_in_array(encoder, npztr):
+ arr = np.array([object()])
+ with pytest.raises(TypeError) as ce:
+ encoder.encode(arr, npztr)
+ assert str(ce.value).startswith("cowardly refusing")
+
+
[email protected]('arr', [
+ np.array([]),
+ np.empty((1024, 256, 16), dtype=np.float64),
+ np.random.random((10, 10, 10)),
+ np.random.random((5, 15, 2)).T,
+ # TODO: add more tests here
+])
+def test_encode_decode_stability(arr, encoder, npztr):
+ encoded = encoder.encode(arr, npztr)
+ if isinstance(encoded['%data'], memoryview):
+ # force memory read
+ encoded['%data'] = encoded['%data'].tobytes()
+ decoded = encoder.decode(encoded, npztr)
+ assert (arr == decoded).all()
+
+
+def test_encodes_toplevel_when_given_as_type(npztr, encoder):
+ arr = np.array(range(10), dtype=np.int_)
+ des = npztr.destructure(arr, np.ndarray)
+ enc = encoder.encode(arr, npztr)
+ assert des == enc
+
+
+def test_encodes_toplevel_fails_when_not_given_as_type(npztr, encoder):
+ arr = np.linspace(0, 1)
+ with pytest.raises(attr.exceptions.NotAnAttrsClassError):
+ npztr.destructure(arr)
+
+
[email protected](eq=False)
+class Example(object):
+ arr = attr.ib(type=np.ndarray)
+ other = attr.ib(type=str, default='')
+
+ def __eq__(self, other):
+ return (
+ self.__class__ == other.__class__ and
+ self.other == other.other and
+ (self.arr == other.arr).all()
+ )
+
+
+def test_can_encode_structured_ndarray(npztr):
+ example = Example(np.array([1, 19, 18]))
+ destructed = npztr.destructure(example)
+ restructed = npztr.restructure(Example, destructed)
+ assert example == restructed
+
+
+def test_restructured_array_is_writable(npztr):
+ arr = np.array([1, 2], dtype=np.int_)
+ example = Example(arr)
+ destructured = npztr.destructure(example)
+ restructured = npztr.restructure(Example, destructured)
+ new_arr = restructured.arr
+ new_arr += 2
+ assert ((arr + 2) == new_arr).all()
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 1
} | 0.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs @ file:///opt/conda/conda-bld/attrs_1642510447205/work
Babel==2.11.0
bump2version==1.0.1
certifi==2021.5.30
charset-normalizer==2.0.12
docutils==0.17.1
flake8==5.0.4
idna==3.10
imagesize==1.4.1
importlib-metadata==4.2.0
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
Jinja2==3.0.3
MarkupSafe==2.0.1
mccabe==0.7.0
more-itertools @ file:///tmp/build/80754af9/more-itertools_1637733554872/work
numpy==1.19.5
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
pluggy @ file:///tmp/build/80754af9/pluggy_1615976315926/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pycodestyle==2.9.1
pyflakes==2.5.0
Pygments==2.14.0
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
pytest==6.2.4
pytest-mock==3.6.1
pytz==2025.2
requests==2.27.1
snowballstemmer==2.2.0
Sphinx==4.3.2
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
typing_extensions @ file:///opt/conda/conda-bld/typing_extensions_1647553014482/work
urllib3==1.26.20
-e git+https://github.com/jriddy/zerial.git@30867c7d8f6565c180bb34560a5f3b2393896c8a#egg=zerial
zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work
| name: zerial
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=21.4.0=pyhd3eb1b0_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- more-itertools=8.12.0=pyhd3eb1b0_0
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.2=py36h06a4308_0
- pluggy=0.13.1=py36h06a4308_0
- py=1.11.0=pyhd3eb1b0_0
- pyparsing=3.0.4=pyhd3eb1b0_0
- pytest=6.2.4=py36h06a4308_2
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- toml=0.10.2=pyhd3eb1b0_0
- typing_extensions=4.1.1=pyh06a4308_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- babel==2.11.0
- bump2version==1.0.1
- charset-normalizer==2.0.12
- docutils==0.17.1
- flake8==5.0.4
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.2.0
- jinja2==3.0.3
- markupsafe==2.0.1
- mccabe==0.7.0
- numpy==1.19.5
- pycodestyle==2.9.1
- pyflakes==2.5.0
- pygments==2.14.0
- pytest-mock==3.6.1
- pytz==2025.2
- requests==2.27.1
- snowballstemmer==2.2.0
- sphinx==4.3.2
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- urllib3==1.26.20
prefix: /opt/conda/envs/zerial
| [
"tests/test_numpy.py::test_restructured_array_is_writable"
] | [] | [
"tests/test_numpy.py::test_encoder_barfs_on_refcounted_python_objects_in_array",
"tests/test_numpy.py::test_encode_decode_stability[arr0]",
"tests/test_numpy.py::test_encode_decode_stability[arr1]",
"tests/test_numpy.py::test_encode_decode_stability[arr2]",
"tests/test_numpy.py::test_encode_decode_stability[arr3]",
"tests/test_numpy.py::test_encodes_toplevel_when_given_as_type",
"tests/test_numpy.py::test_encodes_toplevel_fails_when_not_given_as_type",
"tests/test_numpy.py::test_can_encode_structured_ndarray"
] | [] | MIT License | 5,776 | 316 | [
"src/zerial/numpy.py"
] |
|
joke2k__faker-1046 | c284aa604b0fb931bdb7533714fe1ee386b8cedc | 2019-11-09 18:31:05 | 4b7a1898c3d86cafeca18d08ad612da475bdbe26 | fcurella: Thank you so much ✨ | diff --git a/faker/providers/isbn/__init__.py b/faker/providers/isbn/__init__.py
index ada84898..2d6ace9c 100644
--- a/faker/providers/isbn/__init__.py
+++ b/faker/providers/isbn/__init__.py
@@ -52,7 +52,7 @@ class Provider(BaseProvider):
:returns: A (registrant, publication) tuple of strings.
"""
for rule in rules:
- if rule.min <= reg_pub <= rule.max:
+ if rule.min <= reg_pub[:-1] <= rule.max:
reg_len = rule.registrant_length
break
else:
| Fake ISBN10 causes "Registrant/Publication not found"
In rare cases the `fake.isbn10` method throws an exception with the following message: `Exception: Registrant/Publication not found in registrant rule list. `
A full exception message:
```
/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:70: in isbn10
ean, group, registrant, publication = self._body()
/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:41: in _body
registrant, publication = self._registrant_publication(reg_pub, rules)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
reg_pub = '64799998'
rules = [RegistrantRule(min='0000000', max='1999999', registrant_length=2), RegistrantRule(min='2000000', max='2279999', regis...'6480000', max='6489999', registrant_length=7), RegistrantRule(min='6490000', max='6999999', registrant_length=3), ...]
@staticmethod
def _registrant_publication(reg_pub, rules):
""" Separate the registration from the publication in a given
string.
:param reg_pub: A string of digits representing a registration
and publication.
:param rules: A list of RegistrantRules which designate where
to separate the values in the string.
:returns: A (registrant, publication) tuple of strings.
"""
for rule in rules:
if rule.min <= reg_pub <= rule.max:
reg_len = rule.registrant_length
break
else:
> raise Exception('Registrant/Publication not found in registrant '
'rule list.')
E Exception: Registrant/Publication not found in registrant rule list.
/usr/local/lib/python3.6/site-packages/faker/providers/isbn/__init__.py:59: Exception
```
### Steps to reproduce
Call `faker.providers.isbn.Provider._registrant_publication` with any of the following values for the `reg_pub` param: `64799998`, `39999999`. These values are valid randomly generated strings from [L34](https://github.com/joke2k/faker/blob/master/faker/providers/isbn/__init__.py#L37).
Code:
```python
from faker.providers.isbn import Provider
from faker.providers.isbn.rules import RULES
# Fails; throws an exception
Provider._registrant_publication('64799998', RULES['978']['0'])
Provider._registrant_publication('39999999', RULES['978']['1'])
# Works; but may be invalid
Provider._registrant_publication('64799998', RULES['978']['1'])
Provider._registrant_publication('39999999', RULES['978']['0'])
```
### Expected behavior
The `faker.providers.isbn.Provider._body` should generate valid `reg_pub` values.
### Actual behavior
It generates values for `reg_pub` that are not accepted by the rules defined in `faker.providers.isbn.rules`.
| joke2k/faker | diff --git a/tests/providers/test_isbn.py b/tests/providers/test_isbn.py
index 4d6fab06..fe01c4aa 100644
--- a/tests/providers/test_isbn.py
+++ b/tests/providers/test_isbn.py
@@ -43,8 +43,13 @@ class TestProvider(unittest.TestCase):
def test_reg_pub_separation(self):
r1 = RegistrantRule('0000000', '0000001', 1)
r2 = RegistrantRule('0000002', '0000003', 2)
- assert self.prov._registrant_publication('0000000', [r1, r2]) == ('0', '000000')
- assert self.prov._registrant_publication('0000002', [r1, r2]) == ('00', '00002')
+ assert self.prov._registrant_publication('00000000', [r1, r2]) == ('0', '0000000')
+ assert self.prov._registrant_publication('00000010', [r1, r2]) == ('0', '0000010')
+ assert self.prov._registrant_publication('00000019', [r1, r2]) == ('0', '0000019')
+ assert self.prov._registrant_publication('00000020', [r1, r2]) == ('00', '000020')
+ assert self.prov._registrant_publication('00000030', [r1, r2]) == ('00', '000030')
+ assert self.prov._registrant_publication('00000031', [r1, r2]) == ('00', '000031')
+ assert self.prov._registrant_publication('00000039', [r1, r2]) == ('00', '000039')
def test_rule_not_found(self):
with pytest.raises(Exception):
| {
"commit_name": "merge_commit",
"failed_lite_validators": [],
"has_test_patch": true,
"is_lite": true,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 1
},
"num_modified_files": 1
} | 2.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"coverage"
],
"pre_install": null,
"python": "3.9",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | coverage==7.8.0
exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
-e git+https://github.com/joke2k/faker.git@c284aa604b0fb931bdb7533714fe1ee386b8cedc#egg=Faker
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
packaging @ file:///croot/packaging_1734472117206/work
pluggy @ file:///croot/pluggy_1733169602837/work
pytest @ file:///croot/pytest_1738938843180/work
python-dateutil==2.9.0.post0
six==1.17.0
text-unidecode==1.3
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
| name: faker
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- exceptiongroup=1.2.0=py39h06a4308_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- packaging=24.2=py39h06a4308_0
- pip=25.0=py39h06a4308_0
- pluggy=1.5.0=py39h06a4308_0
- pytest=8.3.4=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py39h06a4308_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.8.0
- python-dateutil==2.9.0.post0
- six==1.17.0
- text-unidecode==1.3
prefix: /opt/conda/envs/faker
| [
"tests/providers/test_isbn.py::TestProvider::test_reg_pub_separation"
] | [] | [
"tests/providers/test_isbn.py::TestISBN10::test_check_digit_is_correct",
"tests/providers/test_isbn.py::TestISBN10::test_format_length",
"tests/providers/test_isbn.py::TestISBN13::test_check_digit_is_correct",
"tests/providers/test_isbn.py::TestISBN13::test_format_length",
"tests/providers/test_isbn.py::TestProvider::test_rule_not_found"
] | [] | MIT License | 5,781 | 153 | [
"faker/providers/isbn/__init__.py"
] |
joke2k__faker-1047 | c284aa604b0fb931bdb7533714fe1ee386b8cedc | 2019-11-11 12:24:39 | 4b7a1898c3d86cafeca18d08ad612da475bdbe26 | diff --git a/faker/generator.py b/faker/generator.py
index 40135a1b..b66b89bb 100644
--- a/faker/generator.py
+++ b/faker/generator.py
@@ -4,6 +4,7 @@ from __future__ import unicode_literals
import random as random_module
import re
+import six
_re_token = re.compile(r'\{\{(\s?)(\w+)(\s?)\}\}')
random = random_module.Random()
@@ -108,5 +109,5 @@ class Generator(object):
def __format_token(self, matches):
formatter = list(matches.groups())
- formatter[1] = self.format(formatter[1])
+ formatter[1] = six.text_type(self.format(formatter[1]))
return ''.join(formatter)
diff --git a/faker/providers/isbn/__init__.py b/faker/providers/isbn/__init__.py
index ada84898..2d6ace9c 100644
--- a/faker/providers/isbn/__init__.py
+++ b/faker/providers/isbn/__init__.py
@@ -52,7 +52,7 @@ class Provider(BaseProvider):
:returns: A (registrant, publication) tuple of strings.
"""
for rule in rules:
- if rule.min <= reg_pub <= rule.max:
+ if rule.min <= reg_pub[:-1] <= rule.max:
reg_len = rule.registrant_length
break
else:
diff --git a/faker/providers/phone_number/fa_IR/__init__.py b/faker/providers/phone_number/fa_IR/__init__.py
index 07f60d0b..ed6a1acb 100644
--- a/faker/providers/phone_number/fa_IR/__init__.py
+++ b/faker/providers/phone_number/fa_IR/__init__.py
@@ -5,14 +5,34 @@ from .. import Provider as PhoneNumberProvider
class Provider(PhoneNumberProvider):
formats = (
# Mobile
+ # Mci
'+98 91# ### ####',
'091# ### ####',
+ '+98 990 ### ####',
+ '0990 ### ####',
+ '+98 991 ### ####',
+ '0991 ### ####',
+ # Rightel Mobile prefixes
'+98 920 ### ####',
'0920 ### ####',
'+98 921 ### ####',
'0921 ### ####',
+ '+98 922 ### ####',
+ '0922 ### ####',
+ # Samantel Mobile prefixes
+ '+98 999 ### ####',
+ '0999 ### ####',
+ # Mtn and Talia
'+98 93# ### ####',
'093# ### ####',
+ '+98 901 ### ####',
+ '0901 ### ####',
+ '+98 902 ### ####',
+ '902 ### ####',
+ '+98 903 ### ####',
+ '0903 ### ####',
+ '+98 905 ### ####',
+ '0905 ### ####',
# Land lines,
# https://en.wikipedia.org/wiki/List_of_dialling_codes_in_Iran
'+98 21 #### ####',
diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py
index eb8a70a6..36317627 100644
--- a/faker/providers/python/__init__.py
+++ b/faker/providers/python/__init__.py
@@ -3,6 +3,7 @@
from __future__ import unicode_literals
from decimal import Decimal
+import string
import sys
import six
@@ -32,6 +33,9 @@ class Provider(BaseProvider):
),
)
+ def pystr_format(self, string_format='?#-###{{random_int}}{{random_letter}}', letters=string.ascii_letters):
+ return self.bothify(self.generator.parse(string_format), letters=letters)
+
def pyfloat(self, left_digits=None, right_digits=None, positive=False,
min_value=None, max_value=None):
| Would a faker.format() method make sense?
It could enable the syntax:
`print(faker.format('{name} considers a {catch_phrase} in {country}'))`
A sample implementation show how simple it would be:
``` python
def fake_fmt(fmt="{first_name}'s favorite number: {random_int}", fake=None):
fake = fake or Faker()
fields = [fld.split('}')[0].split(':')[0] for fld in fmt.split('{')[1:]]
return fmt.format(**{field: getattr(fake, field)() for field in fields})
```
More test cases at: https://github.com/cclauss/Ten-lines-or-less/blob/master/fake_format.py
| joke2k/faker | diff --git a/tests/providers/test_isbn.py b/tests/providers/test_isbn.py
index 4d6fab06..fe01c4aa 100644
--- a/tests/providers/test_isbn.py
+++ b/tests/providers/test_isbn.py
@@ -43,8 +43,13 @@ class TestProvider(unittest.TestCase):
def test_reg_pub_separation(self):
r1 = RegistrantRule('0000000', '0000001', 1)
r2 = RegistrantRule('0000002', '0000003', 2)
- assert self.prov._registrant_publication('0000000', [r1, r2]) == ('0', '000000')
- assert self.prov._registrant_publication('0000002', [r1, r2]) == ('00', '00002')
+ assert self.prov._registrant_publication('00000000', [r1, r2]) == ('0', '0000000')
+ assert self.prov._registrant_publication('00000010', [r1, r2]) == ('0', '0000010')
+ assert self.prov._registrant_publication('00000019', [r1, r2]) == ('0', '0000019')
+ assert self.prov._registrant_publication('00000020', [r1, r2]) == ('00', '000020')
+ assert self.prov._registrant_publication('00000030', [r1, r2]) == ('00', '000030')
+ assert self.prov._registrant_publication('00000031', [r1, r2]) == ('00', '000031')
+ assert self.prov._registrant_publication('00000039', [r1, r2]) == ('00', '000039')
def test_rule_not_found(self):
with pytest.raises(Exception):
diff --git a/tests/providers/test_python.py b/tests/providers/test_python.py
index 61f5ca02..ccb93c79 100644
--- a/tests/providers/test_python.py
+++ b/tests/providers/test_python.py
@@ -1,6 +1,10 @@
# -*- coding: utf-8 -*-
import unittest
+try:
+ from unittest.mock import patch
+except ImportError:
+ from mock import patch
from faker import Faker
@@ -87,3 +91,19 @@ class TestPyfloat(unittest.TestCase):
message = str(raises.exception)
self.assertEqual(message, expected_message)
+
+
+class TestPystrFormat(unittest.TestCase):
+
+ def setUp(self):
+ self.factory = Faker(includes=['tests.mymodule.en_US'])
+
+ def test_formatter_invocation(self):
+ with patch.object(self.factory, 'foo') as mock_foo:
+ with patch('faker.providers.BaseProvider.bothify',
+ wraps=self.factory.bothify) as mock_bothify:
+ mock_foo.return_value = 'barbar'
+ value = self.factory.pystr_format('{{foo}}?#?{{foo}}?#?{{foo}}', letters='abcde')
+ assert value.count('barbar') == 3
+ assert mock_foo.call_count == 3
+ mock_bothify.assert_called_once_with('barbar?#?barbar?#?barbar', letters='abcde')
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 3
},
"num_modified_files": 4
} | 2.0 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-xdist",
"pytest-mock",
"pytest-asyncio"
],
"pre_install": null,
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///croot/attrs_1668696182826/work
certifi @ file:///croot/certifi_1671487769961/work/certifi
coverage==7.2.7
execnet==2.0.2
-e git+https://github.com/joke2k/faker.git@c284aa604b0fb931bdb7533714fe1ee386b8cedc#egg=Faker
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1648562407465/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
packaging @ file:///croot/packaging_1671697413597/work
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pytest==7.1.2
pytest-asyncio==0.21.2
pytest-cov==4.1.0
pytest-mock==3.11.1
pytest-xdist==3.5.0
python-dateutil==2.9.0.post0
six==1.17.0
text-unidecode==1.3
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
zipp @ file:///croot/zipp_1672387121353/work
| name: faker
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib-metadata=4.11.3=py37h06a4308_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
- pip:
- coverage==7.2.7
- execnet==2.0.2
- pytest-asyncio==0.21.2
- pytest-cov==4.1.0
- pytest-mock==3.11.1
- pytest-xdist==3.5.0
- python-dateutil==2.9.0.post0
- six==1.17.0
- text-unidecode==1.3
prefix: /opt/conda/envs/faker
| [
"tests/providers/test_isbn.py::TestProvider::test_reg_pub_separation",
"tests/providers/test_python.py::TestPystrFormat::test_formatter_invocation"
] | [] | [
"tests/providers/test_isbn.py::TestISBN10::test_check_digit_is_correct",
"tests/providers/test_isbn.py::TestISBN10::test_format_length",
"tests/providers/test_isbn.py::TestISBN13::test_check_digit_is_correct",
"tests/providers/test_isbn.py::TestISBN13::test_format_length",
"tests/providers/test_isbn.py::TestProvider::test_rule_not_found",
"tests/providers/test_python.py::TestPyint::test_pyint",
"tests/providers/test_python.py::TestPyint::test_pyint_bound_0",
"tests/providers/test_python.py::TestPyint::test_pyint_bound_negative",
"tests/providers/test_python.py::TestPyint::test_pyint_bound_positive",
"tests/providers/test_python.py::TestPyint::test_pyint_bounds",
"tests/providers/test_python.py::TestPyint::test_pyint_range",
"tests/providers/test_python.py::TestPyint::test_pyint_step",
"tests/providers/test_python.py::TestPyfloat::test_left_digits",
"tests/providers/test_python.py::TestPyfloat::test_max_value",
"tests/providers/test_python.py::TestPyfloat::test_max_value_should_be_greater_than_min_value",
"tests/providers/test_python.py::TestPyfloat::test_min_value",
"tests/providers/test_python.py::TestPyfloat::test_positive",
"tests/providers/test_python.py::TestPyfloat::test_pyfloat",
"tests/providers/test_python.py::TestPyfloat::test_right_digits"
] | [] | MIT License | 5,788 | 978 | [
"faker/generator.py",
"faker/providers/isbn/__init__.py",
"faker/providers/phone_number/fa_IR/__init__.py",
"faker/providers/python/__init__.py"
] |
|
mikedh__trimesh-609 | db0f8e7c6a3c0a3b027a93c15a33a8cc25c7f5b2 | 2019-11-11 18:13:35 | 592d8476a232bac8a8b4e63f3d4dbf3b71476192 | diff --git a/trimesh/geometry.py b/trimesh/geometry.py
index 10d10d29..6db8533d 100644
--- a/trimesh/geometry.py
+++ b/trimesh/geometry.py
@@ -1,7 +1,7 @@
import numpy as np
from . import util
-from .constants import tol, log
+from .constants import log
try:
import scipy.sparse
@@ -72,34 +72,32 @@ def align_vectors(a, b, return_angle=False):
# projection of a onto b
dot = np.dot(a, b)
-
- # are vectors just reversed
- if dot < (tol.merge - 1):
+ # resolution to compare floating point numbers
+ epsilon = 1e-12
+ if dot < (epsilon - 1):
# a reversed vector is 180 degrees
angle = np.pi
-
- # https://github.com/mikedh/trimesh/issues/540
- svd_a = np.linalg.svd(a[:, np.newaxis])[0]
- svd_b = np.linalg.svd(b[:, np.newaxis])[0]
- rotation = svd_b.dot(svd_a.T)
-
- # are vectors already the same
- elif dot > (1 - tol.merge):
+ # get an arbitrary perpendicular vector
+ # note that we are using both a and b
+ # so small values will be halved
+ perp = util.generate_basis(a - b)[0]
+ # compose the rotation matrix around our
+ # perpendicular vector with a simplification since
+ # cos(pi)=-1 and sin(pi)=0
+ rotation = np.outer(perp, perp) * 2.0 - np.eye(3)
+ elif dot > (1 - epsilon):
+ # are vectors already the same
angle = 0.0
# no rotation
rotation = np.eye(3)
-
# vectors are at some angle to each other
else:
# we already handled values out of the range [-1.0, 1.0]
angle = np.arccos(dot)
-
# (3,) vector perpendicular to both a and b
w = np.cross(a, b)
-
# a float between 0.5 and 1.0
c = 1.0 / (1.0 + dot)
-
# (3, 3) skew- symmetric matrix from the (3,) vector w
# the matrix has the property: wx == -wx.T
wx = np.array([[0, -w[2], w[1]],
diff --git a/trimesh/util.py b/trimesh/util.py
index eedbe178..48ddaf05 100644
--- a/trimesh/util.py
+++ b/trimesh/util.py
@@ -60,6 +60,8 @@ log = logging.getLogger('trimesh')
TOL_ZERO = np.finfo(np.float64).resolution * 100
# how close to merge vertices
TOL_MERGE = 1e-8
+# enable additional potentially slow checks
+_STRICT = False
def unitize(vectors,
@@ -2000,18 +2002,28 @@ def generate_basis(z):
# X as arbitrary perpendicular vector
x = np.array([-z[1], z[0], 0.0])
# avoid degenerate case
- if np.isclose(np.linalg.norm(x), 0.0):
- # Z is already along Z [0, 0, 1]
+ x_norm = np.linalg.norm(x)
+ if x_norm < 1e-12:
+ # this means that
# so a perpendicular X is just X
- x = np.array([1.0, 0.0, 0.0])
+ x = np.array([-z[2], z[1], 0.0])
+ x /= np.linalg.norm(x)
else:
         # otherwise normalize X in-place
- x /= np.linalg.norm(x)
+ x /= x_norm
# get perpendicular Y with cross product
y = np.cross(z, x)
- # append result values into vector
+ # append result values into (3, 3) vector
result = np.array([x, y, z], dtype=np.float64)
+ if _STRICT:
+ # run checks to make sure axis are perpendicular
+ assert np.abs(np.dot(x, z)) < 1e-8
+ assert np.abs(np.dot(y, z)) < 1e-8
+ assert np.abs(np.dot(x, y)) < 1e-8
+ # all vectors should be unit vector
+ assert np.allclose(np.linalg.norm(result, axis=1), 1.0)
+
return result
@@ -2039,6 +2051,7 @@ def isclose(a, b, atol):
"""
diff = a - b
close = np.logical_and(diff > -atol, diff < atol)
+
return close
diff --git a/trimesh/version.py b/trimesh/version.py
index f6310076..da4564dd 100644
--- a/trimesh/version.py
+++ b/trimesh/version.py
@@ -1,1 +1,1 @@
-__version__ = '3.4.0'
+__version__ = '3.4.1'
| trimesh.geometry.align_vectors returns invalid rigid transform
Related to #540, it seems like there is still an issue where the method returns an invalid rigid transform when passed input that is close to the reverse vector. An example:
```python
In [1]: import trimesh
In [2]: import numpy as np
In [3]: a = trimesh.geometry.align_vectors([0,0,-1], [0,0,1])
In [4]: b = trimesh.geometry.align_vectors([0,0,-1], [-1e-17, 1e-17, 1])
In [5]: np.linalg.det(a)
Out[5]: 1.0
In [6]: np.linalg.det(b)
Out[6]: -1.0
```
Note that this still occurs with inputs that aren't quite as small (like 1e-4):
```python
In [7]: b = trimesh.geometry.align_vectors([0,0,-1], [-1e-4, 1e-4, 1])
In [8]: np.linalg.det(b)
Out[8]: -0.9999999999999999
```
It does work for the case in #540 though (the same one in the unit tests):
```python
In [9]: v1 = np.array([ 7.12106798e-07, -7.43194705e-08, 1.00000000e+00])
In [10]: c = trimesh.geometry.align_vectors([0,0,-1], v1)
In [11]: np.linalg.det(c)
Out[11]: 1.0000000000000002
```
Any thoughts on this? I'm not sure that I 100% understand the SVD trick. | mikedh/trimesh | diff --git a/tests/generic.py b/tests/generic.py
index b5d79d2b..091e0f2c 100644
--- a/tests/generic.py
+++ b/tests/generic.py
@@ -45,8 +45,9 @@ from collections import deque
from trimesh.constants import tol, tol_path
from trimesh.base import Trimesh
-# make sure everyone knows they should run additional
-# validation checks and raise exceptions
+# make sure functions know they should run additional
+# potentially slow validation checks and raise exceptions
+trimesh.util._STRICT = True
trimesh.constants.tol.strict = True
trimesh.constants.tol_path.strict = True
diff --git a/tests/test_align.py b/tests/test_align.py
new file mode 100644
index 00000000..0b0e3639
--- /dev/null
+++ b/tests/test_align.py
@@ -0,0 +1,95 @@
+try:
+ from . import generic as g
+except BaseException:
+ import generic as g
+
+tol_norm = 1e-6
+
+
+class AlignTests(g.unittest.TestCase):
+
+ def test_align(self):
+ """
+ Test aligning two 3D vectors
+ """
+
+ # function we're testing
+ align = g.trimesh.geometry.align_vectors
+ is_rigid = g.trimesh.transformations.is_rigid
+
+ # start with some edge cases and make sure the transform works
+ target = g.np.array([0, 0, -1], dtype=g.np.float64)
+ vectors = g.np.vstack((
+ g.trimesh.unitize(g.np.random.random((1000, 3)) - .5),
+ [-target, target],
+ g.trimesh.util.generate_basis(target),
+ [[7.12106798e-07, -7.43194705e-08, 1.00000000e+00],
+ [0, 0, -1],
+ [1e-4, 1e-4, -1]]))
+
+ for vector in vectors:
+ T, a = align(vector, target, return_angle=True)
+ assert is_rigid(T)
+ assert g.np.isclose(g.np.linalg.det(T), 1.0)
+ # rotate vector with transform
+ check = g.np.dot(T[:3, :3], vector)
+ # compare to target vector
+ norm = g.np.linalg.norm(check - target)
+ assert norm < tol_norm
+
+ # these vectors should be perpendicular and zero
+ angles = [align(i, target, return_angle=True)[1]
+ for i in g.trimesh.util.generate_basis(target)]
+ assert g.np.allclose(
+ angles, [g.np.pi / 2, g.np.pi / 2, 0.0])
+
+ def test_range(self):
+ # function we're testing
+ align = g.trimesh.geometry.align_vectors
+ is_rigid = g.trimesh.transformations.is_rigid
+
+ # generate angles from 0 to 180 degrees
+ angles = g.np.linspace(0.0, g.np.pi / 1e7, 10000)
+        # generate on-plane vectors
+ vectors = g.np.column_stack((g.np.cos(angles),
+ g.np.sin(angles),
+ g.np.zeros(len(angles))))
+
+ # rotate them arbitrarily off the plane just for funsies
+ vectors = g.trimesh.transform_points(
+ vectors, g.transforms[20])
+
+ for angle, vector in zip(angles, vectors):
+ g.trimesh.util.generate_basis(vector)
+ # check alignment to first vector
+ # which was created with zero angle
+ T, a = align(vector, vectors[0], return_angle=True)
+ assert is_rigid(T)
+ # check to make sure returned angle corresponds with truth
+
+ assert g.np.isclose(a, angle, atol=1e-6)
+
+ # check to make sure returned transform is correct
+ check = g.np.dot(T[:3, :3], vector)
+ norm = g.np.linalg.norm(check - vectors[0])
+
+ assert norm < tol_norm
+
+ def test_rigid(self):
+ # check issues with near-reversed vectors not returning rigid
+ align = g.trimesh.geometry.align_vectors
+ T = align([0, 0, -1], [-1e-17, 1e-17, 1])
+ assert g.np.isclose(g.np.linalg.det(T), 1.0)
+
+ T = align([0, 0, -1], [-1e-4, 1e-4, 1])
+ assert g.np.isclose(g.np.linalg.det(T), 1.0)
+
+ vector_1 = g.np.array([7.12106798e-07, -7.43194705e-08, 1.00000000e+00])
+ vector_2 = g.np.array([0, 0, -1])
+ T, angle = align(vector_1, vector_2, return_angle=True)
+ assert g.np.isclose(g.np.linalg.det(T), 1.0)
+
+
+if __name__ == '__main__':
+ g.trimesh.util.attach_to_log()
+ g.unittest.main()
diff --git a/tests/test_vector.py b/tests/test_vector.py
index 87594b33..ff356adc 100644
--- a/tests/test_vector.py
+++ b/tests/test_vector.py
@@ -46,64 +46,6 @@ class HemisphereTests(g.unittest.TestCase):
assert g.np.allclose(v, a * s.reshape((-1, 1)))
-class AlignTests(g.unittest.TestCase):
-
- def test_align(self):
- """
- Test aligning two 3D vectors
- """
-
- # function we're testing
- align = g.trimesh.geometry.align_vectors
- is_rigid = g.trimesh.transformations.is_rigid
-
- # start with some edge cases and make sure the transform works
- target = g.np.array([0, 0, -1], dtype=g.np.float64)
- vectors = g.np.vstack((g.trimesh.unitize(g.np.random.random((1000, 3)) - .5),
- [-target, target],
- g.trimesh.util.generate_basis(target),
- [[7.12106798e-07, -7.43194705e-08, 1.00000000e+00],
- [0, 0, -1]]))
- for vector in vectors:
- T, a = align(vector, target, return_angle=True)
- assert is_rigid(T)
- # rotate vector with transform
- check = g.np.dot(T[:3, :3], vector)
- # compare to target vector
- norm = g.np.linalg.norm(check - target)
- assert norm < 1e-8
-
- # these vectors should be perpendicular and zero
- angles = [align(i, target, return_angle=True)[1]
- for i in g.trimesh.util.generate_basis(target)]
- assert g.np.allclose(angles, [g.np.pi / 2, g.np.pi / 2, 0.0])
-
- # generate angles from 0 to 180 degrees
- angles = g.np.linspace(0.0, g.np.pi, 1000)
-        # generate on-plane vectors
- vectors = g.np.column_stack((g.np.cos(angles),
- g.np.sin(angles),
- g.np.zeros(len(angles))))
-
- # rotate them arbitrarily off the plane just for funsies
- vectors = g.trimesh.transform_points(vectors,
- g.transforms[20])
-
- for angle, vector in zip(angles, vectors):
- # check alignment to first vector
- # which was created with zero angle
- T, a = align(vector, vectors[0], return_angle=True)
- assert is_rigid(T)
- # check to make sure returned angle corresponds with truth
- assert g.np.isclose(a, angle)
-
- # check to make sure returned transform is correct
- check = g.np.dot(T, g.np.append(vector, 1))[:3]
- norm = g.np.linalg.norm(check - vectors[0])
-
- assert norm < 1e-8
-
-
if __name__ == '__main__':
g.trimesh.util.attach_to_log()
g.unittest.main()
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_issue_reference",
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 3
} | 3.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[all]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov"
],
"pre_install": [
"apt-get update",
"apt-get install -y openscad blender"
],
"python": "3.6",
"reqs_path": [
"requirements/base.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==22.2.0
autoprop==4.1.0
backports.cached-property==1.0.2
certifi==2021.5.30
chardet==5.0.0
charset-normalizer==2.0.12
colorlog==6.9.0
coverage==6.2
cycler==0.11.0
Cython==3.0.12
decorator==4.4.2
glooey==0.3.6
idna==3.10
imageio==2.15.0
importlib-metadata==4.8.3
iniconfig==1.1.1
jsonschema==3.2.0
kiwisolver==1.3.1
lxml==5.3.1
matplotlib==3.3.4
more-itertools==8.14.0
mpmath==1.3.0
msgpack==1.0.5
networkx==2.5.1
numpy==1.19.5
packaging==21.3
Pillow==8.4.0
pluggy==1.0.0
psutil==7.0.0
py==1.11.0
pycollada==0.8
pyglet==2.0.10
pyparsing==3.1.4
pyrsistent==0.18.0
pytest==7.0.1
pytest-cov==4.0.0
python-dateutil==2.9.0.post0
python-fcl==0.7.0.5
PyWavelets==1.1.1
PyYAML==6.0.1
requests==2.27.1
Rtree==0.9.7
scikit-image==0.17.2
scipy==1.5.4
Shapely==1.8.5.post1
signature_dispatch==1.0.0
six==1.17.0
svg.path==6.2
sympy==1.9
tifffile==2020.9.3
tomli==1.2.3
triangle==20220202
-e git+https://github.com/mikedh/trimesh.git@db0f8e7c6a3c0a3b027a93c15a33a8cc25c7f5b2#egg=trimesh
typeguard==2.13.3
typing==3.7.4.3
typing_extensions==4.1.1
urllib3==1.26.20
vecrec==0.3.1
xxhash==3.2.0
zipp==3.6.0
| name: trimesh
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==22.2.0
- autoprop==4.1.0
- backports-cached-property==1.0.2
- chardet==5.0.0
- charset-normalizer==2.0.12
- colorlog==6.9.0
- coverage==6.2
- cycler==0.11.0
- cython==3.0.12
- decorator==4.4.2
- glooey==0.3.6
- idna==3.10
- imageio==2.15.0
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jsonschema==3.2.0
- kiwisolver==1.3.1
- lxml==5.3.1
- matplotlib==3.3.4
- more-itertools==8.14.0
- mpmath==1.3.0
- msgpack==1.0.5
- networkx==2.5.1
- numpy==1.19.5
- packaging==21.3
- pillow==8.4.0
- pluggy==1.0.0
- psutil==7.0.0
- py==1.11.0
- pycollada==0.8
- pyglet==2.0.10
- pyparsing==3.1.4
- pyrsistent==0.18.0
- pytest==7.0.1
- pytest-cov==4.0.0
- python-dateutil==2.9.0.post0
- python-fcl==0.7.0.5
- pywavelets==1.1.1
- pyyaml==6.0.1
- requests==2.27.1
- rtree==0.9.7
- scikit-image==0.17.2
- scipy==1.5.4
- shapely==1.8.5.post1
- signature-dispatch==1.0.0
- six==1.17.0
- svg-path==6.2
- sympy==1.9
- tifffile==2020.9.3
- tomli==1.2.3
- triangle==20220202
- typeguard==2.13.3
- typing==3.7.4.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- vecrec==0.3.1
- xxhash==3.2.0
- zipp==3.6.0
prefix: /opt/conda/envs/trimesh
| [
"tests/test_align.py::AlignTests::test_align",
"tests/test_align.py::AlignTests::test_rigid"
] | [] | [
"tests/test_align.py::AlignTests::test_range",
"tests/test_vector.py::SphericalTests::test_spherical",
"tests/test_vector.py::HemisphereTests::test_hemisphere"
] | [] | MIT License | 5,790 | 1,257 | [
"trimesh/geometry.py",
"trimesh/util.py",
"trimesh/version.py"
] |
|
J-CPelletier__webcomix-20 | 9476ff995ba90d3b810c4f38e5337b868ca09c97 | 2019-11-13 17:53:40 | a256a9f48f035e927b8f015493581f0fc568dc25 | diff --git a/webcomix/comic.py b/webcomix/comic.py
index 54bb578..e135208 100644
--- a/webcomix/comic.py
+++ b/webcomix/comic.py
@@ -1,5 +1,6 @@
import os
from typing import List, Mapping
+from urllib.parse import urlparse
from zipfile import ZipFile, BadZipFile
import click
@@ -153,10 +154,13 @@ class Comic:
if url.count(".") <= 1:
# No file extension (only dot in url is domain name)
return str(page)
- elif title_present:
- return "{}-{}{}".format(comic_name, page, url[url.rindex(".") :])
+
+ parsed_filepath = urlparse(url).path
+ file_extension = parsed_filepath[parsed_filepath.rindex(".") :]
+ if title_present:
+ return "{}-{}{}".format(comic_name, page, file_extension)
else:
- return "{}{}".format(page, url[url.rindex(".") :])
+ return "{}{}".format(page, file_extension)
@staticmethod
def save_alt_text_location(page: int, directory_name: str = "") -> str:
diff --git a/webcomix/supported_comics.py b/webcomix/supported_comics.py
index b4e76e7..8c62ded 100644
--- a/webcomix/supported_comics.py
+++ b/webcomix/supported_comics.py
@@ -75,4 +75,9 @@ supported_comics = {
"//img[@class='comicnormal']/@src",
"//a[img[contains(@src, 'next')]]/@href",
),
+ "SchlockMercenary": (
+ "https://www.schlockmercenary.com/2000-06-12",
+ "//div[@class='strip-images']//img/@src",
+ "//a[@class='next-strip']/@href",
+ ),
}
| Enhancement Schlock Mercenary
Is it possible to add https://www.schlockmercenary.com/ to the list of supported comics? | J-CPelletier/webcomix | diff --git a/webcomix/tests/test_comic.py b/webcomix/tests/test_comic.py
index f11a01c..7fdb46c 100644
--- a/webcomix/tests/test_comic.py
+++ b/webcomix/tests/test_comic.py
@@ -66,6 +66,12 @@ def test_save_image_location():
== "foo/1.jpg"
)
assert Comic.save_image_location("", 1, "bar") == "bar/1"
+ assert (
+ Comic.save_image_location(
+ "http://imgs.xkcd.com/comics/barrel_cropped_(1).jpg?q=123", 1, "foo"
+ )
+ == "foo/1.jpg"
+ )
def test_save_image_filename_with_title_present():
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_short_problem_statement",
"has_hyperlinks",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 0
},
"num_modified_files": 2
} | 3.2 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest",
"pytest-cov",
"pytest-mock"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.9",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs==25.3.0
Automat==24.8.1
certifi==2025.1.31
cffi==1.17.1
cfgv==3.4.0
charset-normalizer==3.4.1
click==8.1.8
constantly==23.10.4
coverage==7.8.0
coveralls==4.0.1
cryptography==44.0.2
cssselect==1.3.0
defusedxml==0.7.1
distlib==0.3.9
docopt==0.6.2
exceptiongroup==1.2.2
fake-useragent==2.1.0
filelock==3.18.0
hyperlink==21.0.0
identify==2.6.9
idna==3.10
importlib_resources==6.5.2
incremental==24.7.2
iniconfig==2.1.0
itemadapter==0.11.0
itemloaders==1.3.2
jmespath==1.0.1
lxml==5.3.1
mypy==1.15.0
mypy-extensions==1.0.0
nodeenv==1.9.1
packaging==24.2
parsel==1.10.0
platformdirs==4.3.7
pluggy==1.5.0
pre_commit==4.2.0
Protego==0.4.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
PyDispatcher==2.0.7
pyOpenSSL==25.0.0
pytest==8.3.5
pytest-cov==2.5.1
pytest-mock==3.14.0
PyYAML==6.0.2
queuelib==1.8.0
requests==2.32.3
requests-file==2.1.0
Scrapy==2.12.0
scrapy-splash==0.11.1
service-identity==24.2.0
six==1.17.0
tldextract==5.1.3
tomli==2.2.1
tqdm==4.67.1
Twisted==24.11.0
typing_extensions==4.13.0
urllib3==2.3.0
virtualenv==20.29.3
w3lib==2.3.1
-e git+https://github.com/J-CPelletier/webcomix.git@9476ff995ba90d3b810c4f38e5337b868ca09c97#egg=webcomix
zipp==3.21.0
zope.interface==7.2
| name: webcomix
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=25.0=py39h06a4308_0
- python=3.9.21=he870216_1
- readline=8.2=h5eee18b_0
- setuptools=75.8.0=py39h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tzdata=2025a=h04d1e81_0
- wheel=0.45.1=py39h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- attrs==25.3.0
- automat==24.8.1
- certifi==2025.1.31
- cffi==1.17.1
- cfgv==3.4.0
- charset-normalizer==3.4.1
- click==8.1.8
- constantly==23.10.4
- coverage==7.8.0
- coveralls==4.0.1
- cryptography==44.0.2
- cssselect==1.3.0
- defusedxml==0.7.1
- distlib==0.3.9
- docopt==0.6.2
- exceptiongroup==1.2.2
- fake-useragent==2.1.0
- filelock==3.18.0
- hyperlink==21.0.0
- identify==2.6.9
- idna==3.10
- importlib-resources==6.5.2
- incremental==24.7.2
- iniconfig==2.1.0
- itemadapter==0.11.0
- itemloaders==1.3.2
- jmespath==1.0.1
- lxml==5.3.1
- mypy==1.15.0
- mypy-extensions==1.0.0
- nodeenv==1.9.1
- packaging==24.2
- parsel==1.10.0
- platformdirs==4.3.7
- pluggy==1.5.0
- pre-commit==4.2.0
- protego==0.4.0
- pyasn1==0.6.1
- pyasn1-modules==0.4.2
- pycparser==2.22
- pydispatcher==2.0.7
- pyopenssl==25.0.0
- pytest==8.3.5
- pytest-cov==2.5.1
- pytest-mock==3.14.0
- pyyaml==6.0.2
- queuelib==1.8.0
- requests==2.32.3
- requests-file==2.1.0
- scrapy==2.12.0
- scrapy-splash==0.11.1
- service-identity==24.2.0
- six==1.17.0
- tldextract==5.1.3
- tomli==2.2.1
- tqdm==4.67.1
- twisted==24.11.0
- typing-extensions==4.13.0
- urllib3==2.3.0
- virtualenv==20.29.3
- w3lib==2.3.1
- zipp==3.21.0
- zope-interface==7.2
prefix: /opt/conda/envs/webcomix
| [
"webcomix/tests/test_comic.py::test_save_image_location"
] | [
"webcomix/tests/test_comic.py::test_download_saves_the_files",
"webcomix/tests/test_comic.py::test_download_with_alt_text_saves_the_text",
"webcomix/tests/test_comic.py::test_convert_to_cbz_adds_all_files_to_cbz",
"webcomix/tests/test_comic.py::test_verify_xpath",
"webcomix/tests/test_comic.py::test_verify_xpath_with_alt_text"
] | [
"webcomix/tests/test_comic.py::test_save_image_filename_with_title_present",
"webcomix/tests/test_comic.py::test_make_cbz",
"webcomix/tests/test_comic.py::test_make_cbz_corrupted_archive",
"webcomix/tests/test_comic.py::test_download_runs_the_worker",
"webcomix/tests/test_comic.py::test_download_does_not_add_crawlers_in_main_process",
"webcomix/tests/test_comic.py::test_verify_xpath_only_verifies_one_page_with_single_page",
"webcomix/tests/test_comic.py::test_download_will_run_javascript_settings_if_javascript",
"webcomix/tests/test_comic.py::test_download_will_not_run_javascript_settings_if_not_javascript",
"webcomix/tests/test_comic.py::test_verify_xpath_will_run_javascript_settings_if_javascript",
"webcomix/tests/test_comic.py::test_verify_xpath_will_not_run_javascript_settings_if_not_javascript"
] | [] | MIT License | 5,805 | 453 | [
"webcomix/comic.py",
"webcomix/supported_comics.py"
] |
|
omry__omegaconf-83 | 3c1859c62891924d5f95b266eb8d377b996df276 | 2019-11-14 08:12:49 | 3c1859c62891924d5f95b266eb8d377b996df276 | coveralls: ## Pull Request Test Coverage Report for [Build 244](https://coveralls.io/builds/26983060)
* **9** of **9** **(100.0%)** changed or added relevant lines in **2** files are covered.
* No unchanged relevant lines lost coverage.
* Overall coverage increased (+**0.02%**) to **97.834%**
---
| Totals | [](https://coveralls.io/builds/26983060) |
| :-- | --: |
| Change from base [Build 235](https://coveralls.io/builds/26939851): | 0.02% |
| Covered Lines: | 858 |
| Relevant Lines: | 877 |
---
##### 💛 - [Coveralls](https://coveralls.io)
| diff --git a/omegaconf/dictconfig.py b/omegaconf/dictconfig.py
index 7c180f8..808a687 100644
--- a/omegaconf/dictconfig.py
+++ b/omegaconf/dictconfig.py
@@ -1,4 +1,5 @@
from .config import Config
+import copy
from .errors import (
ReadonlyConfigError,
MissingMandatoryValue,
@@ -21,6 +22,15 @@ class DictConfig(Config):
self._deepcopy_impl(res)
return res
+ def __copy__(self):
+ res = DictConfig({})
+ res.__dict__["content"] = copy.copy(self.__dict__["content"])
+ res.__dict__["parent"] = self.__dict__["parent"]
+ return res
+
+ def copy(self):
+ return copy.copy(self)
+
def __setitem__(self, key, value):
assert isinstance(key, str)
diff --git a/omegaconf/listconfig.py b/omegaconf/listconfig.py
index 993ba31..3d56753 100644
--- a/omegaconf/listconfig.py
+++ b/omegaconf/listconfig.py
@@ -121,7 +121,7 @@ class ListConfig(Config):
return c
def copy(self):
- return self[:]
+ return copy.copy(self)
if six.PY2:
| OmegaConf object doesn't support a .copy method
A dict is supposed to have a .copy() method (https://docs.python.org/3/library/stdtypes.html#dict.copy). For an OmegaConf object to be compatible with a dict's API, it should also support the copy() method -
```
>>> from omegaconf import OmegaConf
>>> conf = OmegaConf.create({"a": 2, "b": [1,2,3]})
>>> conf
{'a': 2, 'b': [1, 2, 3]}
>>> conf.copy()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not callable
``` | omry/omegaconf | diff --git a/tests/test_base_config.py b/tests/test_base_config.py
index a35f714..747b4e4 100644
--- a/tests/test_base_config.py
+++ b/tests/test_base_config.py
@@ -332,3 +332,45 @@ def test_open_dict_restore(flag_name, ctx):
cfg.foo.bar = 20
assert cfg._get_node_flag(flag_name)
assert not cfg.foo._get_node_flag(flag_name)
+
+
[email protected](
+ "copy_method", [lambda x: copy.copy(x), lambda x: x.copy()],
+)
+class TestCopy:
+ @pytest.mark.parametrize(
+ "src", [[], [1, 2], ["a", "b", "c"], {}, {"a": "b"}, {"a": {"b": []}}],
+ )
+ def test_copy(self, copy_method, src):
+ src = OmegaConf.create(src)
+ cp = copy_method(src)
+ assert id(src) != id(cp)
+ assert src == cp
+
+ @pytest.mark.parametrize(
+ "src,interpolating_key,interpolated_key",
+ [([1, 2, "${0}"], 2, 0), ({"a": 10, "b": "${a}"}, "b", "a")],
+ )
+ def test_copy_with_interpolation(
+ self, copy_method, src, interpolating_key, interpolated_key
+ ):
+ cfg = OmegaConf.create(src)
+ assert cfg[interpolated_key] == cfg[interpolating_key]
+ cp = copy_method(cfg)
+ assert id(cfg) != id(cp)
+ assert cp[interpolated_key] == cp[interpolating_key]
+ assert cfg[interpolated_key] == cp[interpolating_key]
+
+ # Interpolation is preserved in original
+ cfg[interpolated_key] = "XXX"
+ assert cfg[interpolated_key] == cfg[interpolating_key]
+
+ # Test interpolation is preserved in copy
+ cp[interpolated_key] = "XXX"
+ assert cp[interpolated_key] == cp[interpolating_key]
+
+ def test_list_copy_is_shallow(self, copy_method):
+ cfg = OmegaConf.create([[10, 20]])
+ cp = copy_method(cfg)
+ assert id(cfg) != id(cp)
+ assert id(cfg[0]) == id(cp[0])
diff --git a/tests/test_basic_ops_list.py b/tests/test_basic_ops_list.py
index 9060228..32ec4db 100644
--- a/tests/test_basic_ops_list.py
+++ b/tests/test_basic_ops_list.py
@@ -252,14 +252,6 @@ def test_count(src, item, count):
assert src.count(item) == count
[email protected]("src", [[], [1, 2], ["a", "b", "c"]])
-def test_copy(src):
- src = OmegaConf.create(src)
- cp = src.copy()
- assert id(src) != id(cp)
- assert src == cp
-
-
def test_sort():
c = OmegaConf.create(["bbb", "aa", "c"])
c.sort()
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_hyperlinks",
"has_many_modified_files"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 2
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | argcomplete==3.1.2
black==23.3.0
bleach==6.0.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi==1.15.1
cfgv==3.3.1
charset-normalizer==3.4.1
click==8.1.8
click-default-group==1.2.4
colorlog==6.9.0
coverage==6.5.0
coveralls==3.3.1
cryptography==44.0.2
distlib==0.3.9
docopt==0.6.2
docutils==0.20.1
exceptiongroup==1.2.2
filelock==3.12.2
flake8==5.0.4
identify==2.5.24
idna==3.10
importlib-metadata==4.2.0
importlib-resources==5.12.0
incremental==22.10.0
iniconfig==2.0.0
jaraco.classes==3.2.3
jeepney==0.9.0
Jinja2==3.1.6
keyring==23.9.3
markdown-it-py==2.2.0
MarkupSafe==2.1.5
mccabe==0.7.0
mdurl==0.1.2
more-itertools==9.1.0
mypy-extensions==1.0.0
nodeenv==1.9.1
nox==2024.4.15
-e git+https://github.com/omry/omegaconf.git@3c1859c62891924d5f95b266eb8d377b996df276#egg=omegaconf
packaging==24.0
pathspec==0.11.2
pkginfo==1.10.0
platformdirs==2.6.2
pluggy==1.2.0
pre-commit==2.21.0
pycodestyle==2.9.1
pycparser==2.21
pyflakes==2.5.0
Pygments==2.17.2
pytest==7.4.4
PyYAML==6.0.1
readme-renderer==37.3
requests==2.31.0
requests-toolbelt==1.0.0
rfc3986==2.0.0
rich==13.8.1
SecretStorage==3.3.3
six==1.17.0
tomli==2.0.1
towncrier==23.6.0
twine==4.0.2
typed-ast==1.5.5
typing_extensions==4.7.1
urllib3==2.0.7
virtualenv==20.16.2
webencodings==0.5.1
zipp==3.15.0
| name: omegaconf
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=22.3.1=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- argcomplete==3.1.2
- black==23.3.0
- bleach==6.0.0
- cffi==1.15.1
- cfgv==3.3.1
- charset-normalizer==3.4.1
- click==8.1.8
- click-default-group==1.2.4
- colorlog==6.9.0
- coverage==6.5.0
- coveralls==3.3.1
- cryptography==44.0.2
- distlib==0.3.9
- docopt==0.6.2
- docutils==0.20.1
- exceptiongroup==1.2.2
- filelock==3.12.2
- flake8==5.0.4
- identify==2.5.24
- idna==3.10
- importlib-metadata==4.2.0
- importlib-resources==5.12.0
- incremental==22.10.0
- iniconfig==2.0.0
- jaraco-classes==3.2.3
- jeepney==0.9.0
- jinja2==3.1.6
- keyring==23.9.3
- markdown-it-py==2.2.0
- markupsafe==2.1.5
- mccabe==0.7.0
- mdurl==0.1.2
- more-itertools==9.1.0
- mypy-extensions==1.0.0
- nodeenv==1.9.1
- nox==2024.4.15
- omegaconf==1.4.0rc3
- packaging==24.0
- pathspec==0.11.2
- pkginfo==1.10.0
- platformdirs==2.6.2
- pluggy==1.2.0
- pre-commit==2.21.0
- pycodestyle==2.9.1
- pycparser==2.21
- pyflakes==2.5.0
- pygments==2.17.2
- pytest==7.4.4
- pyyaml==6.0.1
- readme-renderer==37.3
- requests==2.31.0
- requests-toolbelt==1.0.0
- rfc3986==2.0.0
- rich==13.8.1
- secretstorage==3.3.3
- six==1.17.0
- tomli==2.0.1
- towncrier==23.6.0
- twine==4.0.2
- typed-ast==1.5.5
- typing-extensions==4.7.1
- urllib3==2.0.7
- virtualenv==20.16.2
- webencodings==0.5.1
- zipp==3.15.0
prefix: /opt/conda/envs/omegaconf
| [
"tests/test_base_config.py::TestCopy::test_copy[src3-<lambda>1]",
"tests/test_base_config.py::TestCopy::test_copy[src4-<lambda>1]",
"tests/test_base_config.py::TestCopy::test_copy[src5-<lambda>1]",
"tests/test_base_config.py::TestCopy::test_copy_with_interpolation[src0-2-0-<lambda>1]",
"tests/test_base_config.py::TestCopy::test_copy_with_interpolation[src1-b-a-<lambda>1]"
] | [] | [
"tests/test_base_config.py::test_set_value[input_0-foo-10-expected0]",
"tests/test_base_config.py::test_set_value[input_1-foo-value1-expected1]",
"tests/test_base_config.py::test_set_value[input_2-foo-value2-expected2]",
"tests/test_base_config.py::test_set_value[input_3-foo-value3-expected3]",
"tests/test_base_config.py::test_set_value[input_4-0-10-expected4]",
"tests/test_base_config.py::test_set_value[input_5-1-10-expected5]",
"tests/test_base_config.py::test_set_value[input_6-1-value6-expected6]",
"tests/test_base_config.py::test_set_value[input_7-1-value7-expected7]",
"tests/test_base_config.py::test_set_value[input_8-1-value8-expected8]",
"tests/test_base_config.py::test_set_value_validation_fail[input_0-foo-str]",
"tests/test_base_config.py::test_set_value_validation_fail[input_1-1-str]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_0]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_1]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_2]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_3]",
"tests/test_base_config.py::test_to_container_returns_primitives[input_4]",
"tests/test_base_config.py::test_empty[input_0-True]",
"tests/test_base_config.py::test_empty[input_1-True]",
"tests/test_base_config.py::test_empty[input_2-False]",
"tests/test_base_config.py::test_empty[input_3-False]",
"tests/test_base_config.py::test_repr[input_0]",
"tests/test_base_config.py::test_repr[input_1]",
"tests/test_base_config.py::test_repr[input_2]",
"tests/test_base_config.py::test_repr[input_3]",
"tests/test_base_config.py::test_repr[input_4]",
"tests/test_base_config.py::test_repr[input_5]",
"tests/test_base_config.py::test_repr[input_6]",
"tests/test_base_config.py::test_str[input_0]",
"tests/test_base_config.py::test_str[input_1]",
"tests/test_base_config.py::test_str[input_2]",
"tests/test_base_config.py::test_str[input_3]",
"tests/test_base_config.py::test_str[input_4]",
"tests/test_base_config.py::test_str[input_5]",
"tests/test_base_config.py::test_str[input_6]",
"tests/test_base_config.py::test_flag_dict[readonly]",
"tests/test_base_config.py::test_flag_dict[struct]",
"tests/test_base_config.py::test_freeze_nested_dict[readonly]",
"tests/test_base_config.py::test_freeze_nested_dict[struct]",
"tests/test_base_config.py::test_deepcopy[src0]",
"tests/test_base_config.py::test_deepcopy[src1]",
"tests/test_base_config.py::test_deepcopy[src2]",
"tests/test_base_config.py::test_deepcopy[src3]",
"tests/test_base_config.py::test_deepcopy_readonly[src0]",
"tests/test_base_config.py::test_deepcopy_readonly[src1]",
"tests/test_base_config.py::test_deepcopy_readonly[src2]",
"tests/test_base_config.py::test_deepcopy_readonly[src3]",
"tests/test_base_config.py::test_deepcopy_struct[src0]",
"tests/test_base_config.py::test_deepcopy_struct[src1]",
"tests/test_base_config.py::test_deepcopy_struct[src2]",
"tests/test_base_config.py::test_deepcopy_struct[src3]",
"tests/test_base_config.py::test_deepcopy_after_del",
"tests/test_base_config.py::test_deepcopy_with_interpolation",
"tests/test_base_config.py::test_deepcopy_and_merge_and_flags",
"tests/test_base_config.py::test_flag_override[src0-struct-False-<lambda>-expectation0]",
"tests/test_base_config.py::test_flag_override[src1-readonly-False-<lambda>-expectation1]",
"tests/test_base_config.py::test_read_write_override[src0-<lambda>-expectation0]",
"tests/test_base_config.py::test_read_write_override[src1-<lambda>-expectation1]",
"tests/test_base_config.py::test_tokenize_with_escapes[dog,cat-tokenized0]",
"tests/test_base_config.py::test_tokenize_with_escapes[dog\\\\,cat\\\\",
"tests/test_base_config.py::test_tokenize_with_escapes[dog,\\\\",
"tests/test_base_config.py::test_tokenize_with_escapes[\\\\",
"tests/test_base_config.py::test_tokenize_with_escapes[dog,",
"tests/test_base_config.py::test_tokenize_with_escapes[whitespace\\\\",
"tests/test_base_config.py::test_tokenize_with_escapes[None-tokenized8]",
"tests/test_base_config.py::test_tokenize_with_escapes[-tokenized9]",
"tests/test_base_config.py::test_tokenize_with_escapes[no",
"tests/test_base_config.py::test_struct_override[src0-<lambda>-expectation0]",
"tests/test_base_config.py::test_open_dict_restore[struct-open_dict]",
"tests/test_base_config.py::test_open_dict_restore[readonly-read_write]",
"tests/test_base_config.py::TestCopy::test_copy[src0-<lambda>0]",
"tests/test_base_config.py::TestCopy::test_copy[src0-<lambda>1]",
"tests/test_base_config.py::TestCopy::test_copy[src1-<lambda>0]",
"tests/test_base_config.py::TestCopy::test_copy[src1-<lambda>1]",
"tests/test_base_config.py::TestCopy::test_copy[src2-<lambda>0]",
"tests/test_base_config.py::TestCopy::test_copy[src2-<lambda>1]",
"tests/test_base_config.py::TestCopy::test_copy[src3-<lambda>0]",
"tests/test_base_config.py::TestCopy::test_copy[src4-<lambda>0]",
"tests/test_base_config.py::TestCopy::test_copy[src5-<lambda>0]",
"tests/test_base_config.py::TestCopy::test_copy_with_interpolation[src0-2-0-<lambda>0]",
"tests/test_base_config.py::TestCopy::test_copy_with_interpolation[src1-b-a-<lambda>0]",
"tests/test_base_config.py::TestCopy::test_list_copy_is_shallow[<lambda>0]",
"tests/test_base_config.py::TestCopy::test_list_copy_is_shallow[<lambda>1]",
"tests/test_basic_ops_list.py::test_list_value",
"tests/test_basic_ops_list.py::test_list_of_dicts",
"tests/test_basic_ops_list.py::test_pretty_list",
"tests/test_basic_ops_list.py::test_list_get_with_default",
"tests/test_basic_ops_list.py::test_iterate_list",
"tests/test_basic_ops_list.py::test_items_with_interpolation",
"tests/test_basic_ops_list.py::test_list_pop",
"tests/test_basic_ops_list.py::test_in_list",
"tests/test_basic_ops_list.py::test_list_config_with_list",
"tests/test_basic_ops_list.py::test_list_config_with_tuple",
"tests/test_basic_ops_list.py::test_items_on_list",
"tests/test_basic_ops_list.py::test_list_enumerate",
"tests/test_basic_ops_list.py::test_list_delitem",
"tests/test_basic_ops_list.py::test_list_len",
"tests/test_basic_ops_list.py::test_assign[parent0-0-value0-expected0]",
"tests/test_basic_ops_list.py::test_assign[parent1-0-value1-expected1]",
"tests/test_basic_ops_list.py::test_assign[parent2-foo-value2-expected2]",
"tests/test_basic_ops_list.py::test_assign[parent3-foo-value3-expected3]",
"tests/test_basic_ops_list.py::test_nested_list_assign_illegal_value",
"tests/test_basic_ops_list.py::test_list_append",
"tests/test_basic_ops_list.py::test_pretty_without_resolve",
"tests/test_basic_ops_list.py::test_pretty_with_resolve",
"tests/test_basic_ops_list.py::test_index_slice",
"tests/test_basic_ops_list.py::test_index_slice2",
"tests/test_basic_ops_list.py::test_negative_index",
"tests/test_basic_ops_list.py::test_list_dir",
"tests/test_basic_ops_list.py::test_getattr",
"tests/test_basic_ops_list.py::test_insert",
"tests/test_basic_ops_list.py::test_extend[src0-append0-result0]",
"tests/test_basic_ops_list.py::test_extend[src1-append1-result1]",
"tests/test_basic_ops_list.py::test_extend[src2-append2-result2]",
"tests/test_basic_ops_list.py::test_remove[src0-10-result0-expectation0]",
"tests/test_basic_ops_list.py::test_remove[src1-oops-None-expectation1]",
"tests/test_basic_ops_list.py::test_remove[src2-remove2-result2-expectation2]",
"tests/test_basic_ops_list.py::test_remove[src3-2-result3-expectation3]",
"tests/test_basic_ops_list.py::test_clear[1-src0]",
"tests/test_basic_ops_list.py::test_clear[1-src1]",
"tests/test_basic_ops_list.py::test_clear[1-src2]",
"tests/test_basic_ops_list.py::test_clear[2-src0]",
"tests/test_basic_ops_list.py::test_clear[2-src1]",
"tests/test_basic_ops_list.py::test_clear[2-src2]",
"tests/test_basic_ops_list.py::test_index[src0-20--1-expectation0]",
"tests/test_basic_ops_list.py::test_index[src1-10-0-expectation1]",
"tests/test_basic_ops_list.py::test_index[src2-20-1-expectation2]",
"tests/test_basic_ops_list.py::test_count[src0-10-0]",
"tests/test_basic_ops_list.py::test_count[src1-10-1]",
"tests/test_basic_ops_list.py::test_count[src2-10-2]",
"tests/test_basic_ops_list.py::test_count[src3-None-0]",
"tests/test_basic_ops_list.py::test_sort",
"tests/test_basic_ops_list.py::test_list_eq[l10-l20]",
"tests/test_basic_ops_list.py::test_list_eq[l11-l21]",
"tests/test_basic_ops_list.py::test_list_eq[l12-l22]",
"tests/test_basic_ops_list.py::test_list_eq[l13-l23]",
"tests/test_basic_ops_list.py::test_list_eq[l14-l24]",
"tests/test_basic_ops_list.py::test_list_eq[l15-l25]",
"tests/test_basic_ops_list.py::test_list_eq[l16-l26]",
"tests/test_basic_ops_list.py::test_list_eq_with_interpolation[l10-l20]",
"tests/test_basic_ops_list.py::test_list_not_eq[input10-input20]",
"tests/test_basic_ops_list.py::test_list_not_eq[input11-input21]",
"tests/test_basic_ops_list.py::test_list_not_eq[input12-input22]",
"tests/test_basic_ops_list.py::test_list_not_eq[input13-input23]",
"tests/test_basic_ops_list.py::test_list_not_eq[input14-input24]",
"tests/test_basic_ops_list.py::test_list_not_eq[input15-input25]",
"tests/test_basic_ops_list.py::test_list_not_eq[input16-input26]",
"tests/test_basic_ops_list.py::test_insert_throws_not_changing_list",
"tests/test_basic_ops_list.py::test_append_throws_not_changing_list",
"tests/test_basic_ops_list.py::test_hash",
"tests/test_basic_ops_list.py::TestListAdd::test_list_plus[list10-list20-expected0]",
"tests/test_basic_ops_list.py::TestListAdd::test_list_plus[list11-list21-expected1]",
"tests/test_basic_ops_list.py::TestListAdd::test_list_plus[list12-list22-expected2]",
"tests/test_basic_ops_list.py::TestListAdd::test_list_plus_eq[list10-list20-expected0]",
"tests/test_basic_ops_list.py::TestListAdd::test_list_plus_eq[list11-list21-expected1]",
"tests/test_basic_ops_list.py::TestListAdd::test_list_plus_eq[list12-list22-expected2]"
] | [] | BSD 3-Clause "New" or "Revised" License | 5,811 | 327 | [
"omegaconf/dictconfig.py",
"omegaconf/listconfig.py"
] |
cgarwood__python-openzwave-mqtt-10 | 248ce41a6cf3d536a44282376ed88e2e6dd9e995 | 2019-11-14 15:44:30 | 248ce41a6cf3d536a44282376ed88e2e6dd9e995 | diff --git a/openzwavemqtt/base.py b/openzwavemqtt/base.py
index bc529ea..a524c2b 100644
--- a/openzwavemqtt/base.py
+++ b/openzwavemqtt/base.py
@@ -1,5 +1,5 @@
from abc import ABC
-from typing import Callable, Deque
+from typing import Callable, Deque, Optional
from .const import EVENT_PLACEHOLDER, EMPTY, LOGGER
from .options import OZWOptions
@@ -7,9 +7,13 @@ from .options import OZWOptions
class ItemCollection:
def __init__(
- self, options: OZWOptions, item_class: Callable[[OZWOptions, str], "ZwaveBase"]
+ self,
+ options: OZWOptions,
+ parent: "ZWaveBase",
+ item_class: Callable[[OZWOptions, str], "ZwaveBase"],
):
self.options = options
+ self.parent = parent
self.item_class = item_class
self.collection = {}
@@ -30,7 +34,9 @@ class ItemCollection:
return
elif item is None:
- item = self.collection[item_id] = self.item_class(self.options, item_id)
+ item = self.collection[item_id] = self.item_class(
+ self.options, self.parent, item_id
+ )
added = True
if len(topic) == 0 and message is EMPTY:
@@ -67,8 +73,11 @@ class ZWaveBase(ABC):
EVENT_CHANGED = EVENT_PLACEHOLDER
EVENT_REMOVED = EVENT_PLACEHOLDER
- def __init__(self, options: OZWOptions, item_id: str):
+ def __init__(
+ self, options: OZWOptions, parent: Optional["ZWaveBase"], item_id: str
+ ):
self.options = options
+ self.parent = parent
self.id = item_id
self.collections = self.create_collections()
self.data = EMPTY
diff --git a/openzwavemqtt/manager.py b/openzwavemqtt/manager.py
index d3bfe23..f94f3ed 100644
--- a/openzwavemqtt/manager.py
+++ b/openzwavemqtt/manager.py
@@ -12,11 +12,11 @@ class OZWManager(ZWaveBase):
EVENT_CHANGED = None
def __init__(self, options: OZWOptions):
- super().__init__(options, None)
+ super().__init__(options, None, None)
def create_collections(self):
"""Create collections that the manager supports."""
- return {"instance": ItemCollection(self.options, OZWInstance)}
+ return {"instance": ItemCollection(self.options, self, OZWInstance)}
def receive_message(self, topic: str, message: dict):
"""Receive an MQTT message."""
diff --git a/openzwavemqtt/models/instance.py b/openzwavemqtt/models/instance.py
index e05045c..1ba983f 100644
--- a/openzwavemqtt/models/instance.py
+++ b/openzwavemqtt/models/instance.py
@@ -45,7 +45,7 @@ class OZWInstance(ZWaveBase):
def create_collections(self):
"""Create collections that Node supports."""
return {
- "node": ItemCollection(self.options, OZWNode),
- "statistics": OZWInstanceStatistics(self.options, None),
- "status": OZWInstanceStatus(self.options, None),
+ "node": ItemCollection(self.options, self, OZWNode),
+ "statistics": OZWInstanceStatistics(self.options, self, None),
+ "status": OZWInstanceStatus(self.options, self, None),
}
diff --git a/openzwavemqtt/models/node.py b/openzwavemqtt/models/node.py
index 2d32d89..7270b47 100644
--- a/openzwavemqtt/models/node.py
+++ b/openzwavemqtt/models/node.py
@@ -14,6 +14,6 @@ class OZWNode(ZWaveBase):
def create_collections(self):
"""Create collections that Node supports."""
return {
- "value": ItemCollection(self.options, OZWValue),
- "statistics": OZWNodeStatistics(self.options, OZWNodeStatistics),
+ "value": ItemCollection(self.options, self, OZWValue),
+ "statistics": OZWNodeStatistics(self.options, self, OZWNodeStatistics),
}
| Pass "parent" objects to children
Making issues for some items on the todo list in case anyone else wants to jump in with some code.
---
If we have an OZWValue object, we need an easy way to get to the OZWNode and OZWInstance objects that the value belongs to. Right now it's easy to go from Instance > Node > Value, but not the other way around. | cgarwood/python-openzwave-mqtt | diff --git a/test/models/test_node.py b/test/models/test_node_statistics.py
similarity index 97%
rename from test/models/test_node.py
rename to test/models/test_node_statistics.py
index 4dbdd2c..ac6a27c 100644
--- a/test/models/test_node.py
+++ b/test/models/test_node_statistics.py
@@ -41,3 +41,4 @@ def test_statistics(mgr):
assert statistics.average_response_rtt == 47
assert statistics.average_request_rtt == 31
assert statistics.send_count == 10
+ assert statistics.parent.id == "2"
diff --git a/test/models/test_value.py b/test/models/test_value.py
index 92fc5c4..2d06403 100644
--- a/test/models/test_value.py
+++ b/test/models/test_value.py
@@ -14,6 +14,7 @@ def test_value_events(mgr):
assert len(events) == 1
assert events[0].id == "3"
assert events[0].value == "yo"
+ assert events[0].parent.id == "2"
# Listen for value changed
mgr.options.listen(EVENT_VALUE_CHANGED, events.append)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 0
},
"num_modified_files": 4
} | unknown | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[test]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "pytest",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.7",
"reqs_path": null,
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | attrs @ file:///croot/attrs_1668696182826/work
certifi @ file:///croot/certifi_1671487769961/work/certifi
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1648562407465/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
packaging @ file:///croot/packaging_1671697413597/work
pluggy @ file:///tmp/build/80754af9/pluggy_1648042572264/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pytest==7.1.2
-e git+https://github.com/cgarwood/python-openzwave-mqtt.git@248ce41a6cf3d536a44282376ed88e2e6dd9e995#egg=python_openzwave_mqtt
tomli @ file:///opt/conda/conda-bld/tomli_1657175507142/work
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
zipp @ file:///croot/zipp_1672387121353/work
| name: python-openzwave-mqtt
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- attrs=22.1.0=py37h06a4308_0
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2022.12.7=py37h06a4308_0
- flit-core=3.6.0=pyhd3eb1b0_0
- importlib-metadata=4.11.3=py37h06a4308_0
- importlib_metadata=4.11.3=hd3eb1b0_0
- iniconfig=1.1.1=pyhd3eb1b0_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- packaging=22.0=py37h06a4308_0
- pip=22.3.1=py37h06a4308_0
- pluggy=1.0.0=py37h06a4308_1
- py=1.11.0=pyhd3eb1b0_0
- pytest=7.1.2=py37h06a4308_0
- python=3.7.16=h7a1cb2a_0
- readline=8.2=h5eee18b_0
- setuptools=65.6.3=py37h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- tomli=2.0.1=py37h06a4308_0
- typing_extensions=4.4.0=py37h06a4308_0
- wheel=0.38.4=py37h06a4308_0
- xz=5.6.4=h5eee18b_1
- zipp=3.11.0=py37h06a4308_0
- zlib=1.2.13=h5eee18b_1
prefix: /opt/conda/envs/python-openzwave-mqtt
| [
"test/models/test_node_statistics.py::test_statistics",
"test/models/test_value.py::test_value_events"
] | [] | [] | [] | Apache License 2.0 | 5,813 | 1,023 | [
"openzwavemqtt/base.py",
"openzwavemqtt/manager.py",
"openzwavemqtt/models/instance.py",
"openzwavemqtt/models/node.py"
] |
|
Materials-Consortia__optimade-python-tools-83 | 7aa23880f41492f21f2eb2b6379cf78d807f60a3 | 2019-11-14 15:52:33 | 54d17743a5c8f99358a2386e6edf01b48032343e | diff --git a/optimade/models/jsonapi.py b/optimade/models/jsonapi.py
index a6bbd197..80189d34 100644
--- a/optimade/models/jsonapi.py
+++ b/optimade/models/jsonapi.py
@@ -1,5 +1,5 @@
"""This module should reproduce JSON API v1.0 https://jsonapi.org/format/1.0/"""
-from typing import Optional, Set, Union
+from typing import Optional, Set, Union, Any
from pydantic import BaseModel, UrlStr, Schema, validator
@@ -180,6 +180,13 @@ class Relationships(BaseModel):
id
"""
+ id: Optional[Any] = Schema(..., description="Not allowed key")
+ type: Optional[Any] = Schema(..., description="Not allowed key")
+
+ @validator("id", "type")
+ def check_illegal_relationships_fields(cls, v):
+ raise AssertionError('"id", "type" MUST NOT be fields under relationships')
+
class ResourceLinks(BaseModel):
"""A Resource Links object"""
@@ -193,16 +200,27 @@ class ResourceLinks(BaseModel):
class Attributes(BaseModel):
"""
Members of the attributes object ("attributes\") represent information about the resource object in which it's defined.
- The keys for Attributes must NOT be:
+ The keys for Attributes MUST NOT be:
relationships
links
id
type
"""
+ relationships: Optional[Any] = Schema(..., description="Not allowed key")
+ links: Optional[Any] = Schema(..., description="Not allowed key")
+ id: Optional[Any] = Schema(..., description="Not allowed key")
+ type: Optional[Any] = Schema(..., description="Not allowed key")
+
class Config:
extra = "allow"
+ @validator("relationships", "links", "id", "type")
+ def check_illegal_attributes_fields(cls, v):
+ raise ValueError(
+ '"relationships", "links", "id", "type" MUST NOT be fields under attributes'
+ )
+
class Resource(BaseResource):
"""Resource objects appear in a JSON:API document to represent resources."""
diff --git a/optimade/models/structures.py b/optimade/models/structures.py
index c24d2429..31f23b3f 100644
--- a/optimade/models/structures.py
+++ b/optimade/models/structures.py
@@ -621,7 +621,7 @@ class StructureResourceAttributes(EntryResourceAttributes):
@validator("chemical_formula_reduced", "chemical_formula_hill")
def no_spaces_in_reduced(cls, v):
- if " " in v:
+ if v and " " in v:
raise ValueError(f"Spaces are not allowed, you passed: {v}")
return v
| server.jsonapi has no patternProperties
In the current version of pydantic, patternProperties can only be added inside sub-elements of properties. If that is fixed, then remove items from optimade.server.models.jsonapi.Attributes and optimade.server.models.jsonapi.Relationships and make it match that new formalism.
Somewhere in the creation of the openapi.json file the patternProperties is lost (likely something is using an old version of pydantic that converts a Dict object to object if the key is not a string). | Materials-Consortia/optimade-python-tools | diff --git a/optimade/models/tests/test_bad_structures.json b/optimade/models/tests/test_bad_structures.json
index 99bcce98..fd9bf7ea 100644
--- a/optimade/models/tests/test_bad_structures.json
+++ b/optimade/models/tests/test_bad_structures.json
@@ -1494,5 +1494,73 @@
],
"structure_features": ["assemblies"],
"task_id": "mpf_276"
+ },
+ {
+ "_id": {
+ "$oid": "5cfb441f053b174410700d02"
+ },
+ "type": "structure",
+ "links": ["C-H"],
+ "relationships": "single",
+ "id": "some_id",
+ "_mp_chemsys": "Ac",
+ "cartesian_site_positions": [
+ [
+ 0.17570227444196573,
+ 0.17570227444196573,
+ 0.17570227444196573
+ ]
+ ],
+ "dimension_types": [
+ 1,
+ 1,
+ 1
+ ],
+ "elements": [
+ "Ac"
+ ],
+ "elements_ratios": [
+ 1.0
+ ],
+ "formula_anonymous": "A",
+ "last_modified": {
+ "$date": "2019-06-08T05:13:37.331Z"
+ },
+ "lattice_vectors": [
+ [
+ 1.2503264826932692,
+ 0,
+ 0
+ ],
+ [
+ 0,
+ 9.888509716321765,
+ 0
+ ],
+ [
+ 0,
+ 0,
+ 0.2972637673241818
+ ]
+ ],
+ "nelements": 1,
+ "nsites": 1,
+ "pretty_formula": "Ac",
+ "species": [
+ {
+ "chemical_symbols": [
+ "Ac"
+ ],
+ "concentration": [
+ 1.0
+ ],
+ "name": "Ac"
+ }
+ ],
+ "species_at_sites": [
+ "Ac"
+ ],
+ "structure_features": [],
+ "task_id": "mpf_1"
}
]
| {
"commit_name": "head_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 2
} | 0.1 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .[dev]",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.8",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | aiofiles==24.1.0
backports.tarfile==1.2.0
black==24.8.0
certifi==2025.1.31
cffi==1.17.1
cfgv==3.4.0
charset-normalizer==3.4.1
click==8.1.8
cryptography==44.0.2
distlib==0.3.9
dnspython==2.6.1
docutils==0.20.1
email_validator==2.2.0
exceptiongroup==1.2.2
fastapi==0.28.0
filelock==3.16.1
graphene==3.4.3
graphql-core==3.2.6
graphql-relay==3.2.0
h11==0.14.0
id==1.5.0
identify==2.6.1
idna==3.10
importlib_metadata==8.5.0
importlib_resources==6.4.5
iniconfig==2.1.0
invoke==2.2.0
itsdangerous==2.2.0
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.1.0
jeepney==0.9.0
Jinja2==3.1.6
keyring==25.5.0
lark-parser==0.7.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
mongomock==3.16.0
more-itertools==10.5.0
mypy-extensions==1.0.0
nh3==0.2.21
nodeenv==1.9.1
-e git+https://github.com/Materials-Consortia/optimade-python-tools.git@7aa23880f41492f21f2eb2b6379cf78d807f60a3#egg=optimade
packaging==24.2
pathspec==0.12.1
platformdirs==4.3.6
pluggy==1.5.0
pre-commit==3.5.0
pycparser==2.22
pydantic==0.26
Pygments==2.19.1
pymongo==3.8.0
pytest==8.3.5
python-dateutil==2.9.0.post0
python-multipart==0.0.20
PyYAML==6.0.2
readme_renderer==43.0
requests==2.32.3
requests-toolbelt==1.0.0
rfc3986==2.0.0
rich==14.0.0
SecretStorage==3.3.3
sentinels==1.0.0
six==1.17.0
starlette==0.12.0
tomli==2.2.1
twine==6.1.0
typing_extensions==4.13.0
ujson==5.10.0
urllib3==2.2.3
uvicorn==0.33.0
virtualenv==20.29.3
zipp==3.20.2
| name: optimade-python-tools
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.4.4=h6a678d5_1
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=3.0.16=h5eee18b_0
- pip=24.2=py38h06a4308_0
- python=3.8.20=he870216_0
- readline=8.2=h5eee18b_0
- setuptools=75.1.0=py38h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.44.0=py38h06a4308_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- aiofiles==24.1.0
- backports-tarfile==1.2.0
- black==24.8.0
- certifi==2025.1.31
- cffi==1.17.1
- cfgv==3.4.0
- charset-normalizer==3.4.1
- click==8.1.8
- cryptography==44.0.2
- distlib==0.3.9
- dnspython==2.6.1
- docutils==0.20.1
- email-validator==2.2.0
- exceptiongroup==1.2.2
- fastapi==0.28.0
- filelock==3.16.1
- graphene==3.4.3
- graphql-core==3.2.6
- graphql-relay==3.2.0
- h11==0.14.0
- id==1.5.0
- identify==2.6.1
- idna==3.10
- importlib-metadata==8.5.0
- importlib-resources==6.4.5
- iniconfig==2.1.0
- invoke==2.2.0
- itsdangerous==2.2.0
- jaraco-classes==3.4.0
- jaraco-context==6.0.1
- jaraco-functools==4.1.0
- jeepney==0.9.0
- jinja2==3.1.6
- keyring==25.5.0
- lark-parser==0.7.7
- markdown-it-py==3.0.0
- markupsafe==2.1.5
- mdurl==0.1.2
- mongomock==3.16.0
- more-itertools==10.5.0
- mypy-extensions==1.0.0
- nh3==0.2.21
- nodeenv==1.9.1
- packaging==24.2
- pathspec==0.12.1
- platformdirs==4.3.6
- pluggy==1.5.0
- pre-commit==3.5.0
- pycparser==2.22
- pydantic==0.26
- pygments==2.19.1
- pymongo==3.8.0
- pytest==8.3.5
- python-dateutil==2.9.0.post0
- python-multipart==0.0.20
- pyyaml==6.0.2
- readme-renderer==43.0
- requests==2.32.3
- requests-toolbelt==1.0.0
- rfc3986==2.0.0
- rich==14.0.0
- secretstorage==3.3.3
- sentinels==1.0.0
- six==1.17.0
- starlette==0.12.0
- tomli==2.2.1
- twine==6.1.0
- typing-extensions==4.13.0
- ujson==5.10.0
- urllib3==2.2.3
- uvicorn==0.33.0
- virtualenv==20.29.3
- zipp==3.20.2
prefix: /opt/conda/envs/optimade-python-tools
| [
"optimade/models/tests/test_models.py::TestPydanticValidation::test_bad_structures"
] | [] | [
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_9_5::test_inputs",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_9_5::test_parser_version",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_9_5::test_repr",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_empty",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_list_properties",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_number_values",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_operators",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_parser_version",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_precedence",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_properties",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_property_names",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_repr",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_special_cases",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_string_operations",
"optimade/filterparser/tests/test_filterparser.py::ParserTestV0_10_0::test_string_values",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_empty",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_number_values",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_operators",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_precedence",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_properties",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_property_names",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_simple_comparisons",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_special_cases",
"optimade/filtertransformers/tests/test_mongo.py::TestMongoTransformer::test_string_values",
"optimade/filtertransformers/tests/test_transformer.py::TestTransformer::test_conjunctions",
"optimade/filtertransformers/tests/test_transformer.py::TestTransformer::test_not",
"optimade/filtertransformers/tests/test_transformer.py::TestTransformer::test_simple_comparisons",
"optimade/models/tests/test_models.py::TestPydanticValidation::test_good_structures",
"optimade/models/tests/test_models.py::test_constrained_list",
"optimade/server/tests/test_server.py::InfoEndpointTests::test_info_endpoint_attributes",
"optimade/server/tests/test_server.py::InfoEndpointTests::test_meta_response",
"optimade/server/tests/test_server.py::InfoEndpointTests::test_serialize_response",
"optimade/server/tests/test_server.py::InfoStructuresEndpointTests::test_info_structures_endpoint_data",
"optimade/server/tests/test_server.py::InfoStructuresEndpointTests::test_meta_response",
"optimade/server/tests/test_server.py::InfoStructuresEndpointTests::test_serialize_response",
"optimade/server/tests/test_server.py::StructuresEndpointTests::test_get_next_responses",
"optimade/server/tests/test_server.py::StructuresEndpointTests::test_meta_response",
"optimade/server/tests/test_server.py::StructuresEndpointTests::test_serialize_response",
"optimade/server/tests/test_server.py::StructuresEndpointTests::test_structures_endpoint_data",
"optimade/server/tests/test_server.py::SingleStructureEndpointTests::test_meta_response",
"optimade/server/tests/test_server.py::SingleStructureEndpointTests::test_serialize_response",
"optimade/server/tests/test_server.py::SingleStructureEndpointTests::test_structures_endpoint_data",
"optimade/server/tests/test_server.py::ServerTestWithValidator::test_with_validator",
"optimade/server/tests/test_server.py::SingleStructureEndpointEmptyTest::test_meta_response",
"optimade/server/tests/test_server.py::SingleStructureEndpointEmptyTest::test_serialize_response",
"optimade/server/tests/test_server.py::SingleStructureEndpointEmptyTest::test_structures_endpoint_data",
"optimade/server/tests/test_server.py::FilterTests::test_aliased_fields",
"optimade/server/tests/test_server.py::FilterTests::test_aliased_is_known",
"optimade/server/tests/test_server.py::FilterTests::test_brackets",
"optimade/server/tests/test_server.py::FilterTests::test_custom_field",
"optimade/server/tests/test_server.py::FilterTests::test_geq",
"optimade/server/tests/test_server.py::FilterTests::test_gt",
"optimade/server/tests/test_server.py::FilterTests::test_gt_none",
"optimade/server/tests/test_server.py::FilterTests::test_id",
"optimade/server/tests/test_server.py::FilterTests::test_is_known",
"optimade/server/tests/test_server.py::FilterTests::test_list_has",
"optimade/server/tests/test_server.py::FilterTests::test_list_has_and",
"optimade/server/tests/test_server.py::FilterTests::test_list_length_basic",
"optimade/server/tests/test_server.py::FilterTests::test_not_or_and_precedence",
"optimade/server/tests/test_server.py::FilterTests::test_page_limit",
"optimade/server/tests/test_server.py::FilterTests::test_string_contains",
"optimade/server/tests/test_server.py::FilterTests::test_string_end",
"optimade/server/tests/test_server.py::FilterTests::test_string_start"
] | [] | MIT License | 5,814 | 648 | [
"optimade/models/jsonapi.py",
"optimade/models/structures.py"
] |
|
blue-yonder__tsfresh-586 | 6309f82c104bf4e20648d8d2d2759b3f3162ae36 | 2019-11-16 16:12:23 | d81c57dc826c522484f0fd268144d189517df6cb | coveralls:
[](https://coveralls.io/builds/27032390)
Coverage remained the same at ?% when pulling **a1410bf77a2d95a98c046002c307cf74b1e5539e on feature/548-check-for-double-underscores** into **e6f5d3eb0cdc8c307a8781a839b5c05da29e8f06 on master**.
| diff --git a/setup.py b/setup.py
index be4e01e..9022dc0 100644
--- a/setup.py
+++ b/setup.py
@@ -25,6 +25,6 @@ setup(
long_description=long_description,
long_description_content_type="text/markdown",
setup_requires=["six", "setuptools_scm"] + sphinx,
- packages=find_packages(),
+ packages=find_packages(exclude=["tests.*", "tests"]),
install_requires=requirements,
)
diff --git a/tsfresh/utilities/dataframe_functions.py b/tsfresh/utilities/dataframe_functions.py
index 8d9a59c..8f507da 100644
--- a/tsfresh/utilities/dataframe_functions.py
+++ b/tsfresh/utilities/dataframe_functions.py
@@ -354,6 +354,14 @@ def _normalize_input_to_internal_representation(timeseries_container, column_id,
# The kind columns should always be of type "str" to make the inference of feature settings later in `from_columns`
# work
timeseries_container[column_kind] = timeseries_container[column_kind].astype(str)
+
+ # Make sure we have only parsable names
+ for kind in timeseries_container[column_kind].unique():
+ if kind.endswith("_"):
+ raise ValueError("The kind {kind} is not allowed to end with '_'".format(kind=kind))
+ if "__" in kind:
+ raise ValueError("The kind {kind} is not allowed to contain '__'".format(kind=kind))
+
return timeseries_container, column_id, column_kind, column_value
| underscore not allowed in time series kind names
There is a bug in `tsfresh.feature_extraction.settings.from_columns` which causes it to raise an error.
If the name of a time series column ends with `_`, e.g. `temperature_valve_`, you get an error like this:
```
ValueError Traceback (most recent call last)
<ipython-input-79-9e22830f8581> in <module>
---> 12 kind_to_fc_parameters = tsfresh.feature_extraction.settings.from_columns(nan_in_cols.transpose().columns)
/opt/anaconda/anaconda3/envs/MLEP/lib/python3.6/site-packages/tsfresh/feature_extraction/settings.py in from_columns(columns, columns_to_ignore)
65
66 if not hasattr(feature_calculators, feature_name):
---> 67 raise ValueError("Unknown feature name {}".format(feature_name))
68
69 config = get_config_from_string(parts)
ValueError: Unknown feature name _agg_linear_trend
``` | blue-yonder/tsfresh | diff --git a/tests/units/utilities/test_dataframe_functions.py b/tests/units/utilities/test_dataframe_functions.py
index 8181a97..2d0f8bb 100644
--- a/tests/units/utilities/test_dataframe_functions.py
+++ b/tests/units/utilities/test_dataframe_functions.py
@@ -138,6 +138,12 @@ class NormalizeTestCase(TestCase):
self.assertRaises(ValueError, dataframe_functions._normalize_input_to_internal_representation, test_df,
None, None, None, "value")
+ test_df = pd.DataFrame([{"id": 0, "a_": 3, "b": 5, "sort": 1}])
+ self.assertRaises(ValueError, dataframe_functions._normalize_input_to_internal_representation, test_df, "id", "sort", None, None)
+
+ test_df = pd.DataFrame([{"id": 0, "a__c": 3, "b": 5, "sort": 1}])
+ self.assertRaises(ValueError, dataframe_functions._normalize_input_to_internal_representation, test_df, "id", "sort", None, None)
+
test_df = pd.DataFrame([{"id": 0}])
self.assertRaises(ValueError, dataframe_functions._normalize_input_to_internal_representation, test_df,
"id", None, None, None)
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_modified_files",
"has_pytest_match_arg"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 0,
"test_score": 2
},
"num_modified_files": 2
} | 0.12 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest>=4.4.0",
"pytest-cov>=2.6.1",
"pytest-xdist>=1.26.1",
"mock>=2.0.0",
"matplotlib>=2.0.0",
"seaborn>=0.7.1",
"ipython>=5.3.0",
"notebook>=4.4.1",
"pandas-datareader>=0.5.0",
"coveralls"
],
"pre_install": [
"apt-get update",
"apt-get install -y gcc"
],
"python": "3.5.2",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
async-generator==1.10
attrs==22.2.0
backcall==0.2.0
bleach==4.1.0
certifi==2021.5.30
cffi==1.15.1
charset-normalizer==2.0.12
click==8.0.4
cloudpickle==2.2.1
contextvars==2.4
coverage==6.2
coveralls==3.3.1
cycler==0.11.0
dask==2021.3.0
dataclasses==0.8
decorator==5.1.1
defusedxml==0.7.1
distributed==2021.3.0
docopt==0.6.2
entrypoints==0.4
execnet==1.9.0
future==1.0.0
HeapDict==1.0.1
idna==3.10
immutables==0.19
importlib-metadata==4.8.3
importlib-resources==5.4.0
iniconfig==1.1.1
ipykernel==5.5.6
ipython==7.16.3
ipython-genutils==0.2.0
jedi==0.17.2
Jinja2==3.0.3
joblib==1.1.1
jsonschema==3.2.0
jupyter-client==7.1.2
jupyter-core==4.9.2
jupyterlab-pygments==0.1.2
kiwisolver==1.3.1
lxml==5.3.1
MarkupSafe==2.0.1
matplotlib==3.3.4
mistune==0.8.4
mock==5.2.0
msgpack==1.0.5
nbclient==0.5.9
nbconvert==6.0.7
nbformat==5.1.3
nest-asyncio==1.6.0
notebook==6.4.10
numpy==1.19.5
packaging==21.3
pandas==0.23.4
pandas-datareader==0.10.0
pandocfilters==1.5.1
parso==0.7.1
patsy==1.0.1
pexpect==4.9.0
pickleshare==0.7.5
Pillow==8.4.0
pluggy==1.0.0
prometheus-client==0.17.1
prompt-toolkit==3.0.36
psutil==7.0.0
ptyprocess==0.7.0
py==1.11.0
pycparser==2.21
Pygments==2.14.0
pyparsing==3.1.4
pyrsistent==0.18.0
pytest==7.0.1
pytest-cov==4.0.0
pytest-xdist==3.0.2
python-dateutil==2.9.0.post0
pytz==2025.2
PyYAML==6.0.1
pyzmq==25.1.2
requests==2.27.1
scikit-learn==0.24.2
scipy==1.5.4
seaborn==0.11.2
Send2Trash==1.8.3
six==1.17.0
sortedcontainers==2.4.0
statsmodels==0.12.2
tblib==1.7.0
terminado==0.12.1
testpath==0.6.0
threadpoolctl==3.1.0
tomli==1.2.3
toolz==0.12.0
tornado==6.1
tqdm==4.64.1
traitlets==4.3.3
-e git+https://github.com/blue-yonder/tsfresh.git@6309f82c104bf4e20648d8d2d2759b3f3162ae36#egg=tsfresh
typing_extensions==4.1.1
urllib3==1.26.20
wcwidth==0.2.13
webencodings==0.5.1
zict==2.1.0
zipp==3.6.0
| name: tsfresh
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- argon2-cffi==21.3.0
- argon2-cffi-bindings==21.2.0
- async-generator==1.10
- attrs==22.2.0
- backcall==0.2.0
- bleach==4.1.0
- cffi==1.15.1
- charset-normalizer==2.0.12
- click==8.0.4
- cloudpickle==2.2.1
- contextvars==2.4
- coverage==6.2
- coveralls==3.3.1
- cycler==0.11.0
- dask==2021.3.0
- dataclasses==0.8
- decorator==5.1.1
- defusedxml==0.7.1
- distributed==2021.3.0
- docopt==0.6.2
- entrypoints==0.4
- execnet==1.9.0
- future==1.0.0
- heapdict==1.0.1
- idna==3.10
- immutables==0.19
- importlib-metadata==4.8.3
- importlib-resources==5.4.0
- iniconfig==1.1.1
- ipykernel==5.5.6
- ipython==7.16.3
- ipython-genutils==0.2.0
- jedi==0.17.2
- jinja2==3.0.3
- joblib==1.1.1
- jsonschema==3.2.0
- jupyter-client==7.1.2
- jupyter-core==4.9.2
- jupyterlab-pygments==0.1.2
- kiwisolver==1.3.1
- lxml==5.3.1
- markupsafe==2.0.1
- matplotlib==3.3.4
- mistune==0.8.4
- mock==5.2.0
- msgpack==1.0.5
- nbclient==0.5.9
- nbconvert==6.0.7
- nbformat==5.1.3
- nest-asyncio==1.6.0
- notebook==6.4.10
- numpy==1.19.5
- packaging==21.3
- pandas==0.23.4
- pandas-datareader==0.10.0
- pandocfilters==1.5.1
- parso==0.7.1
- patsy==1.0.1
- pexpect==4.9.0
- pickleshare==0.7.5
- pillow==8.4.0
- pluggy==1.0.0
- prometheus-client==0.17.1
- prompt-toolkit==3.0.36
- psutil==7.0.0
- ptyprocess==0.7.0
- py==1.11.0
- pycparser==2.21
- pygments==2.14.0
- pyparsing==3.1.4
- pyrsistent==0.18.0
- pytest==7.0.1
- pytest-cov==4.0.0
- pytest-xdist==3.0.2
- python-dateutil==2.9.0.post0
- pytz==2025.2
- pyyaml==6.0.1
- pyzmq==25.1.2
- requests==2.27.1
- scikit-learn==0.24.2
- scipy==1.5.4
- seaborn==0.11.2
- send2trash==1.8.3
- six==1.17.0
- sortedcontainers==2.4.0
- statsmodels==0.12.2
- tblib==1.7.0
- terminado==0.12.1
- testpath==0.6.0
- threadpoolctl==3.1.0
- tomli==1.2.3
- toolz==0.12.0
- tornado==6.1
- tqdm==4.64.1
- traitlets==4.3.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- wcwidth==0.2.13
- webencodings==0.5.1
- zict==2.1.0
- zipp==3.6.0
prefix: /opt/conda/envs/tsfresh
| [
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_with_wrong_input"
] | [] | [
"tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_assert_single_row",
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_with_df_1",
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_with_df_2",
"tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_dict_rolling_maxshift_1",
"tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_stacked_rolling",
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_with_df_3",
"tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_with_wrong_input",
"tests/units/utilities/test_dataframe_functions.py::CheckForNanTestCase::test_all_columns",
"tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_warning_on_non_uniform_time_steps",
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_wide_dataframe_order_preserved",
"tests/units/utilities/test_dataframe_functions.py::GetRangeValuesPerColumnTestCase::test_ignores_non_finite_values",
"tests/units/utilities/test_dataframe_functions.py::RestrictTestCase::test_restrict_wrong",
"tests/units/utilities/test_dataframe_functions.py::GetRangeValuesPerColumnTestCase::test_range_values_correct_with_uneven_length",
"tests/units/utilities/test_dataframe_functions.py::GetRangeValuesPerColumnTestCase::test_range_values_correct_with_even_length",
"tests/units/utilities/test_dataframe_functions.py::GetRangeValuesPerColumnTestCase::test_no_finite_values_yields_0",
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_with_dictionaries_one_row",
"tests/units/utilities/test_dataframe_functions.py::RestrictTestCase::test_restrict_dataframe",
"tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_dict_rolling",
"tests/units/utilities/test_dataframe_functions.py::CheckForNanTestCase::test_not_all_columns",
"tests/units/utilities/test_dataframe_functions.py::RestrictTestCase::test_restrict_dict",
"tests/units/utilities/test_dataframe_functions.py::GetIDsTestCase::test_get_id__correct_dict",
"tests/units/utilities/test_dataframe_functions.py::MakeForecastingFrameTestCase::test_make_forecasting_frame_list",
"tests/units/utilities/test_dataframe_functions.py::GetIDsTestCase::test_get_id__correct_DataFrame",
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_wide_dataframe_order_preserved_with_sort_column",
"tests/units/utilities/test_dataframe_functions.py::ImputeTestCase::test_impute_zero",
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_with_dictionaries_two_rows_sorted",
"tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_positive_rolling",
"tests/units/utilities/test_dataframe_functions.py::ImputeTestCase::test_impute_range",
"tests/units/utilities/test_dataframe_functions.py::MakeForecastingFrameTestCase::test_make_forecasting_frame_range",
"tests/units/utilities/test_dataframe_functions.py::ImputeTestCase::test_toplevel_impute",
"tests/units/utilities/test_dataframe_functions.py::MakeForecastingFrameTestCase::test_make_forecasting_frame_pdSeries",
"tests/units/utilities/test_dataframe_functions.py::RollingTestCase::test_negative_rolling",
"tests/units/utilities/test_dataframe_functions.py::NormalizeTestCase::test_with_dictionaries_two_rows"
] | [] | MIT License | 5,821 | 364 | [
"setup.py",
"tsfresh/utilities/dataframe_functions.py"
] |
mhe__pynrrd-103 | b500fbce20b4c7126587714b4a3bcefe9fa1d645 | 2019-11-16 17:18:04 | 4a26224b1b6fbd0acfbe6daa14d8e7b3d20328a1 | codecov-io: # [Codecov](https://codecov.io/gh/mhe/pynrrd/pull/103?src=pr&el=h1) Report
> Merging [#103](https://codecov.io/gh/mhe/pynrrd/pull/103?src=pr&el=desc) into [master](https://codecov.io/gh/mhe/pynrrd/commit/b500fbce20b4c7126587714b4a3bcefe9fa1d645?src=pr&el=desc) will **increase** coverage by `0.26%`.
> The diff coverage is `75%`.
[](https://codecov.io/gh/mhe/pynrrd/pull/103?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #103 +/- ##
==========================================
+ Coverage 99.47% 99.73% +0.26%
==========================================
Files 6 6
Lines 381 383 +2
Branches 123 124 +1
==========================================
+ Hits 379 382 +3
+ Misses 1 0 -1
Partials 1 1
```
| [Impacted Files](https://codecov.io/gh/mhe/pynrrd/pull/103?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [nrrd/writer.py](https://codecov.io/gh/mhe/pynrrd/pull/103/diff?src=pr&el=tree#diff-bnJyZC93cml0ZXIucHk=) | `99.19% <75%> (+0.83%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/mhe/pynrrd/pull/103?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/mhe/pynrrd/pull/103?src=pr&el=footer). Last update [b500fbc...abaad66](https://codecov.io/gh/mhe/pynrrd/pull/103?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/nrrd/writer.py b/nrrd/writer.py
index b9187f7..24d78b0 100644
--- a/nrrd/writer.py
+++ b/nrrd/writer.py
@@ -180,6 +180,11 @@ def write(filename, data, header=None, detached_header=False, relative_data_path
if 'encoding' not in header:
header['encoding'] = 'gzip'
+ # If 'datafile' is specified, then we rename to 'data file'
+ # The standard seems to advocate for 'data file' OVER 'datafile'
+ if 'datafile' in header:
+ header['data file'] = header.pop('datafile')
+
# A bit of magic in handling options here.
# If *.nhdr filename provided, this overrides `detached_header=False`
# If *.nrrd filename provided AND detached_header=True, separate header and data files written.
@@ -188,7 +193,9 @@ def write(filename, data, header=None, detached_header=False, relative_data_path
if filename.endswith('.nhdr'):
detached_header = True
- if 'data file' not in header:
+ # TODO This will cause issues for relative data files because it will not save in the correct spot
+ data_filename = header.get('datafile', None)
+ if not data_filename:
# Get the base filename without the extension
base_filename = os.path.splitext(filename)[0]
@@ -207,9 +214,6 @@ def write(filename, data, header=None, detached_header=False, relative_data_path
header['data file'] = os.path.basename(data_filename) \
if relative_data_path else os.path.abspath(data_filename)
- else:
- # TODO This will cause issues for relative data files because it will not save in the correct spot
- data_filename = header['data file']
elif filename.endswith('.nrrd') and detached_header:
data_filename = filename
header['data file'] = os.path.basename(data_filename) \
| Need to account for both 'datafile' and 'data file'
We should account for both 'data file' and 'datafile' in [Ln 190](https://github.com/mhe/pynrrd/blob/master/nrrd/writer.py#L190) and [Ln 211](https://github.com/mhe/pynrrd/blob/master/nrrd/writer.py#L211) just like:
https://github.com/mhe/pynrrd/blob/master/nrrd/reader.py#L354 | mhe/pynrrd | diff --git a/nrrd/tests/test_writing.py b/nrrd/tests/test_writing.py
index 535b75d..7c0c1c7 100644
--- a/nrrd/tests/test_writing.py
+++ b/nrrd/tests/test_writing.py
@@ -310,6 +310,26 @@ class TestWritingFunctions(object):
self.assertTrue('space units: "mm" "cm" "in"' in lines)
self.assertTrue('labels: "X" "Y" "f(log(X, 10), Y)"' in lines)
+ def test_write_detached_datafile_check(self):
+ output_filename = os.path.join(self.temp_write_dir, 'testfile_detached.nhdr')
+
+ nrrd.write(output_filename, self.data_input, {'datafile': 'testfile_detached.gz'}, detached_header=True,
+ index_order=self.index_order)
+
+ # Read back the same file
+ data, header = nrrd.read(output_filename, index_order=self.index_order)
+ self.assertEqual(header['data file'], 'testfile_detached.raw.gz')
+
+ def test_write_detached_datafile_check2(self):
+ output_filename = os.path.join(self.temp_write_dir, 'testfile_detached.nhdr')
+
+ nrrd.write(output_filename, self.data_input, {'data file': 'testfile_detached.gz'}, detached_header=True,
+ index_order=self.index_order)
+
+ # Read back the same file
+ data, header = nrrd.read(output_filename, index_order=self.index_order)
+ self.assertEqual(header['data file'], 'testfile_detached.raw.gz')
+
class TestWritingFunctionsFortran(TestWritingFunctions, unittest.TestCase):
index_order = 'F'
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_short_problem_statement"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 1,
"test_score": 0
},
"num_modified_files": 1
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.6",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs==22.2.0
Babel==2.11.0
certifi==2021.5.30
charset-normalizer==2.0.12
docutils==0.18.1
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==3.0.3
MarkupSafe==2.0.1
numpy==1.19.5
numpydoc==1.1.0
packaging==21.3
pluggy==1.0.0
py==1.11.0
Pygments==2.14.0
-e git+https://github.com/mhe/pynrrd.git@b500fbce20b4c7126587714b4a3bcefe9fa1d645#egg=pynrrd
pyparsing==3.1.4
pytest==7.0.1
pytz==2025.2
requests==2.27.1
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: pynrrd
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- attrs==22.2.0
- babel==2.11.0
- charset-normalizer==2.0.12
- docutils==0.18.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==3.0.3
- markupsafe==2.0.1
- numpy==1.19.5
- numpydoc==1.1.0
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytz==2025.2
- requests==2.27.1
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/pynrrd
| [
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_datafile_check",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_datafile_check2",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_datafile_check",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_datafile_check2"
] | [
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_default_header",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_raw",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_gz",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_bzip2",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_gz_level1",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_bzip2_level1",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_ascii_1d",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_ascii_2d",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_ascii_3d",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_custom_fields_without_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_custom_fields_with_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_raw_as_nrrd",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_raw_odd_extension",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_fake_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_raw",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_gz",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_bz2",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_ascii",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_invalid_custom_field",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_remove_endianness",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_unsupported_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_invalid_index_order",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_quoted_string_list_header",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_datafile_check",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_datafile_check2"
] | [
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_invalid_custom_field",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_invalid_index_order",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_quoted_string_list_header",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_remove_endianness",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_unsupported_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_ascii_1d",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_ascii_2d",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_ascii_3d",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_bzip2",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_bzip2_level1",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_custom_fields_with_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_custom_fields_without_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_default_header",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_ascii",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_bz2",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_gz",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_raw",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_raw_as_nrrd",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_raw_odd_extension",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_fake_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_gz",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_gz_level1",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_raw",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_invalid_custom_field",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_invalid_index_order",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_quoted_string_list_header",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_remove_endianness",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_unsupported_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_ascii_1d",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_ascii_2d",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_ascii_3d",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_bzip2",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_bzip2_level1",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_custom_fields_with_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_custom_fields_without_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_default_header",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_ascii",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_bz2",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_gz",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_raw",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_raw_as_nrrd",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_raw_odd_extension",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_fake_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_gz",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_gz_level1",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_raw"
] | [] | MIT License | 5,822 | 468 | [
"nrrd/writer.py"
] |
mhe__pynrrd-104 | 6faf838d36094955adafc6c60c8e115dff2f2939 | 2019-11-16 18:06:48 | 4a26224b1b6fbd0acfbe6daa14d8e7b3d20328a1 | codecov-io: # [Codecov](https://codecov.io/gh/mhe/pynrrd/pull/104?src=pr&el=h1) Report
> Merging [#104](https://codecov.io/gh/mhe/pynrrd/pull/104?src=pr&el=desc) into [master](https://codecov.io/gh/mhe/pynrrd/commit/6faf838d36094955adafc6c60c8e115dff2f2939?src=pr&el=desc) will **increase** coverage by `0.26%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/mhe/pynrrd/pull/104?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #104 +/- ##
========================================
+ Coverage 99.73% 100% +0.26%
========================================
Files 6 6
Lines 383 385 +2
Branches 124 125 +1
========================================
+ Hits 382 385 +3
+ Partials 1 0 -1
```
| [Impacted Files](https://codecov.io/gh/mhe/pynrrd/pull/104?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [nrrd/writer.py](https://codecov.io/gh/mhe/pynrrd/pull/104/diff?src=pr&el=tree#diff-bnJyZC93cml0ZXIucHk=) | `100% <100%> (+0.8%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/mhe/pynrrd/pull/104?src=pr&el=continue).

> Powered by [Codecov](https://codecov.io/gh/mhe/pynrrd/pull/104?src=pr&el=footer). Last update [6faf838...855aec4](https://codecov.io/gh/mhe/pynrrd/pull/104?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
| diff --git a/nrrd/writer.py b/nrrd/writer.py
index 24d78b0..f459451 100644
--- a/nrrd/writer.py
+++ b/nrrd/writer.py
@@ -109,9 +109,9 @@ def write(filename, data, header=None, detached_header=False, relative_data_path
.. note::
The following fields are automatically generated based on the :obj:`data` parameter ignoring these values
- in the :obj:`header`: 'type', 'endian', 'dimension', 'sizes'. In addition, the generated fields will be
- added to the given :obj:`header`. Thus, one can check the generated fields by viewing the passed
- :obj:`header`.
+ in the :obj:`header`: 'type', 'endian', 'dimension', 'sizes', and 'data file'. In addition, the generated
+ fields will be added to the given :obj:`header`. Thus, one can check the generated fields by viewing the
+ passed :obj:`header`.
.. note::
The default encoding field used if not specified in :obj:`header` is 'gzip'.
@@ -129,8 +129,10 @@ def write(filename, data, header=None, detached_header=False, relative_data_path
Filename of the NRRD file
data : :class:`numpy.ndarray`
Data to save to the NRRD file
- detached_header : :obj:`bool`, optional
- Whether the header and data should be saved in separate files. Defaults to :obj:`False`
+ detached_header : :obj:`bool` or :obj:`str`, optional
+ Whether the header and data should be saved in separate files. Defaults to :obj:`False`. If a :obj:`str` is
+ given this specifies the path to the datafile. This path will ONLY be used if the given filename ends with nhdr
+ (i.e. the file is a header)
relative_data_path : :class:`bool`
Whether the data filename in detached header is saved with a relative path or absolute path.
This parameter is ignored if there is no detached header. Defaults to :obj:`True`
@@ -180,22 +182,24 @@ def write(filename, data, header=None, detached_header=False, relative_data_path
if 'encoding' not in header:
header['encoding'] = 'gzip'
- # If 'datafile' is specified, then we rename to 'data file'
- # The standard seems to advocate for 'data file' OVER 'datafile'
+ # Remove detached data filename from the header
if 'datafile' in header:
- header['data file'] = header.pop('datafile')
+ header.pop('datafile')
+
+ if 'data file' in header:
+ header.pop('data file')
# A bit of magic in handling options here.
# If *.nhdr filename provided, this overrides `detached_header=False`
# If *.nrrd filename provided AND detached_header=True, separate header and data files written.
- # If detached_header=True and data file is present, then write the files separately
# For all other cases, header & data written to same file.
if filename.endswith('.nhdr'):
- detached_header = True
-
- # TODO This will cause issues for relative data files because it will not save in the correct spot
- data_filename = header.get('datafile', None)
- if not data_filename:
+ if isinstance(detached_header, str):
+ # Utilize the detached_header if a string was given as the path
+ # Note: An absolute path is obtained and assumed to be relative to the current path of the running Python
+ # program
+ data_filename = os.path.abspath(detached_header)
+ else:
# Get the base filename without the extension
base_filename = os.path.splitext(filename)[0]
@@ -212,13 +216,17 @@ def write(filename, data, header=None, detached_header=False, relative_data_path
else:
raise NRRDError('Invalid encoding specification while writing NRRD file: %s' % header['encoding'])
- header['data file'] = os.path.basename(data_filename) \
- if relative_data_path else os.path.abspath(data_filename)
+ # Update the data file field in the header with the path of the detached data
+ # TODO This will cause problems when the user specifies a relative data path and gives a custom path OUTSIDE
+ # of the current directory.
+ header['data file'] = os.path.basename(data_filename) \
+ if relative_data_path else os.path.abspath(data_filename)
+ detached_header = True
elif filename.endswith('.nrrd') and detached_header:
data_filename = filename
+ filename = '%s.nhdr' % os.path.splitext(filename)[0]
header['data file'] = os.path.basename(data_filename) \
if relative_data_path else os.path.abspath(data_filename)
- filename = '%s.nhdr' % os.path.splitext(filename)[0]
else:
# Write header & data as one file
data_filename = filename
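The patched branch above resolves the detached data filename either from an explicit `detached_header` path string or from the header filename plus encoding. Below is a minimal, stdlib-only sketch of that resolution logic; the function name is illustrative (it is not the pynrrd API), and the extension mapping is inferred from the test expectations in this record (e.g. `testfile_detached.raw.gz` for gzip):

```python
import os

def resolve_data_filename(filename, detached_header, encoding="gzip"):
    """Sketch of the patched filename resolution for detached NRRD headers."""
    if isinstance(detached_header, str):
        # A string detached_header is treated as an explicit data-file path,
        # resolved relative to the current working directory (as in the patch).
        return os.path.abspath(detached_header)
    # Otherwise derive the data filename from the header filename + encoding.
    base_filename = os.path.splitext(filename)[0]
    if encoding == "raw":
        return base_filename + ".raw"
    elif encoding in ("ASCII", "ascii", "text", "txt"):
        return base_filename + ".txt"
    elif encoding in ("gzip", "gz"):
        return base_filename + ".raw.gz"
    elif encoding in ("bzip2", "bz2"):
        return base_filename + ".raw.bz2"
    raise ValueError("Invalid encoding specification: %s" % encoding)

print(resolve_data_filename("testfile_detached.nhdr", True))
# -> testfile_detached.raw.gz
print(os.path.basename(resolve_data_filename("testfile_detached.nhdr", "custom.gz")))
# -> custom.gz
```

Note that the patch then writes either the basename or the absolute path of this result into `header['data file']`, depending on `relative_data_path`; the TODO in the diff flags that a relative path combined with a custom path outside the current directory is still problematic.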
| Modify 'data file' whenever detached_header=True
We should omit the if-else below:
https://github.com/mhe/pynrrd/blob/master/nrrd/writer.py#L190
Omitting the condition should also address the TODO below:
https://github.com/mhe/pynrrd/blob/master/nrrd/writer.py#L210
I experienced this while modifying an NHDR and saving it as another NHDR. In this case, the modified detached data is not saved in the same location as my modified NHDR.
Correction:
Detached data is saved in the working directory. | mhe/pynrrd | diff --git a/nrrd/tests/test_writing.py b/nrrd/tests/test_writing.py
index 7c0c1c7..7da1019 100644
--- a/nrrd/tests/test_writing.py
+++ b/nrrd/tests/test_writing.py
@@ -313,7 +313,7 @@ class TestWritingFunctions(object):
def test_write_detached_datafile_check(self):
output_filename = os.path.join(self.temp_write_dir, 'testfile_detached.nhdr')
- nrrd.write(output_filename, self.data_input, {'datafile': 'testfile_detached.gz'}, detached_header=True,
+ nrrd.write(output_filename, self.data_input, {'datafile': 'testfile_detachedWRONG.gz'}, detached_header=True,
index_order=self.index_order)
# Read back the same file
@@ -323,13 +323,36 @@ class TestWritingFunctions(object):
def test_write_detached_datafile_check2(self):
output_filename = os.path.join(self.temp_write_dir, 'testfile_detached.nhdr')
- nrrd.write(output_filename, self.data_input, {'data file': 'testfile_detached.gz'}, detached_header=True,
+ nrrd.write(output_filename, self.data_input, {'data file': 'testfile_detachedWRONG.gz'}, detached_header=True,
index_order=self.index_order)
# Read back the same file
data, header = nrrd.read(output_filename, index_order=self.index_order)
self.assertEqual(header['data file'], 'testfile_detached.raw.gz')
+ def test_write_detached_datafile_custom_name(self):
+ output_filename = os.path.join(self.temp_write_dir, 'testfile_detached.nhdr')
+ # Specify a custom path to write the
+ output_header_filename = os.path.join(self.temp_write_dir, 'testfile_detachedDifferent.gz')
+
+ nrrd.write(output_filename, self.data_input, detached_header=output_header_filename,
+ index_order=self.index_order)
+
+ # Read back the same file
+ data, header = nrrd.read(output_filename, index_order=self.index_order)
+ self.assertEqual(header['data file'], 'testfile_detachedDifferent.gz')
+
+ def test_write_check_remove_datafile(self):
+ output_filename = os.path.join(self.temp_write_dir, 'testfile.nrrd')
+
+ nrrd.write(output_filename, self.data_input, {'data file': 'testfile_detached.gz'}, detached_header=False,
+ index_order=self.index_order)
+
+ # Read back the same file
+ # The 'data file' parameter should be missing since this is NOT a detached file
+ data, header = nrrd.read(output_filename, index_order=self.index_order)
+ self.assertFalse('data file' in header)
+
class TestWritingFunctionsFortran(TestWritingFunctions, unittest.TestCase):
index_order = 'F'
| {
"commit_name": "merge_commit",
"failed_lite_validators": [
"has_many_hunks"
],
"has_test_patch": true,
"is_lite": false,
"llm_score": {
"difficulty_score": 1,
"issue_text_score": 2,
"test_score": 2
},
"num_modified_files": 1
} | 0.4 | {
"env_vars": null,
"env_yml_path": null,
"install": "pip install -e .",
"log_parser": "parse_log_pytest",
"no_use_env": null,
"packages": "requirements.txt",
"pip_packages": [
"pytest"
],
"pre_install": null,
"python": "3.4",
"reqs_path": [
"requirements.txt"
],
"test_cmd": "pytest --no-header -rA --tb=line --color=no -p no:cacheprovider -W ignore::DeprecationWarning"
} | alabaster==0.7.13
attrs==22.2.0
Babel==2.11.0
certifi==2021.5.30
charset-normalizer==2.0.12
docutils==0.18.1
idna==3.10
imagesize==1.4.1
importlib-metadata==4.8.3
iniconfig==1.1.1
Jinja2==3.0.3
MarkupSafe==2.0.1
numpy==1.19.5
numpydoc==1.1.0
packaging==21.3
pluggy==1.0.0
py==1.11.0
Pygments==2.14.0
-e git+https://github.com/mhe/pynrrd.git@6faf838d36094955adafc6c60c8e115dff2f2939#egg=pynrrd
pyparsing==3.1.4
pytest==7.0.1
pytz==2025.2
requests==2.27.1
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
tomli==1.2.3
typing_extensions==4.1.1
urllib3==1.26.20
zipp==3.6.0
| name: pynrrd
channels:
- defaults
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
- conda-forge
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- ca-certificates=2025.2.25=h06a4308_0
- certifi=2021.5.30=py36h06a4308_0
- ld_impl_linux-64=2.40=h12ee557_0
- libffi=3.3=he6710b0_2
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- ncurses=6.4=h6a678d5_0
- openssl=1.1.1w=h7f8727e_0
- pip=21.2.2=py36h06a4308_0
- python=3.6.13=h12debd9_1
- readline=8.2=h5eee18b_0
- setuptools=58.0.4=py36h06a4308_0
- sqlite=3.45.3=h5eee18b_0
- tk=8.6.14=h39e8969_0
- wheel=0.37.1=pyhd3eb1b0_0
- xz=5.6.4=h5eee18b_1
- zlib=1.2.13=h5eee18b_1
- pip:
- alabaster==0.7.13
- attrs==22.2.0
- babel==2.11.0
- charset-normalizer==2.0.12
- docutils==0.18.1
- idna==3.10
- imagesize==1.4.1
- importlib-metadata==4.8.3
- iniconfig==1.1.1
- jinja2==3.0.3
- markupsafe==2.0.1
- numpy==1.19.5
- numpydoc==1.1.0
- packaging==21.3
- pluggy==1.0.0
- py==1.11.0
- pygments==2.14.0
- pyparsing==3.1.4
- pytest==7.0.1
- pytz==2025.2
- requests==2.27.1
- snowballstemmer==2.2.0
- sphinx==5.3.0
- sphinxcontrib-applehelp==1.0.2
- sphinxcontrib-devhelp==1.0.2
- sphinxcontrib-htmlhelp==2.0.0
- sphinxcontrib-jsmath==1.0.1
- sphinxcontrib-qthelp==1.0.3
- sphinxcontrib-serializinghtml==1.1.5
- tomli==1.2.3
- typing-extensions==4.1.1
- urllib3==1.26.20
- zipp==3.6.0
prefix: /opt/conda/envs/pynrrd
| [
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_check_remove_datafile",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_datafile_custom_name",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_check_remove_datafile",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_datafile_custom_name"
] | [
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_default_header",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_raw",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_gz",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_bzip2",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_gz_level1",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_bzip2_level1",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_ascii_1d",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_ascii_2d",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_ascii_3d",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_custom_fields_without_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_custom_fields_with_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_raw_as_nrrd",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_raw_odd_extension",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_fake_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_raw",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_gz",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_bz2",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_ascii",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_invalid_custom_field",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_remove_endianness",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_unsupported_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_invalid_index_order",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_quoted_string_list_header",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_datafile_check",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_datafile_check2",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_detached_datafile_custom_name",
"nrrd/tests/test_writing.py::TestWritingFunctions::test_write_check_remove_datafile"
] | [
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_invalid_custom_field",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_invalid_index_order",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_quoted_string_list_header",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_remove_endianness",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_unsupported_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_ascii_1d",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_ascii_2d",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_ascii_3d",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_bzip2",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_bzip2_level1",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_custom_fields_with_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_custom_fields_without_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_default_header",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_ascii",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_bz2",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_datafile_check",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_datafile_check2",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_gz",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_raw",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_raw_as_nrrd",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_detached_raw_odd_extension",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_fake_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_gz",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_gz_level1",
"nrrd/tests/test_writing.py::TestWritingFunctionsFortran::test_write_raw",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_invalid_custom_field",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_invalid_index_order",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_quoted_string_list_header",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_remove_endianness",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_unsupported_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_ascii_1d",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_ascii_2d",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_ascii_3d",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_bzip2",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_bzip2_level1",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_custom_fields_with_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_custom_fields_without_custom_field_map",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_default_header",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_ascii",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_bz2",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_datafile_check",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_datafile_check2",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_gz",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_raw",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_raw_as_nrrd",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_detached_raw_odd_extension",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_fake_encoding",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_gz",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_gz_level1",
"nrrd/tests/test_writing.py::TestWritingFunctionsC::test_write_raw"
] | [] | MIT License | 5,823 | 1,158 | [
"nrrd/writer.py"
] |