The dataset schema (one row per issue/updated-file pair):

| column | type | notes |
|---|---|---|
| status | string (1 class) | |
| repo_name | string (31 classes) | |
| repo_url | string (31 classes) | |
| issue_id | int64 | 1 to 104k |
| title | string | length 4 to 369 |
| body | string | length 0 to 254k, nullable (⌀) |
| issue_url | string | length 37 to 56 |
| pull_url | string | length 37 to 54 |
| before_fix_sha | string | length 40 |
| after_fix_sha | string | length 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | string (5 classes) | |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | string | length 4 to 188 |
| file_content | string | length 0 to 5.12M |
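Assuming this dump comes from a Hugging Face dataset (the dataset id below is a placeholder, not the real name), a row with this schema could be inspected like so:

```python
from datasets import load_dataset

# "org/bug-fix-dataset" is a placeholder id; substitute the actual dataset name.
ds = load_dataset("org/bug-fix-dataset", split="train")

row = ds[0]
print(row["status"], row["repo_name"], row["issue_id"])
print(row["updated_file"])        # path of the file changed by the fix
print(row["file_content"][:200])  # file contents can run to 5.12M characters
```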
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 74135
title: helpers contains deprecated call to be removed in 2.12
body:
##### SUMMARY
helpers contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
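To illustrate, the pattern this check flags looks roughly like the sketch below; it is a hypothetical `Display.deprecated` call, not the exact code in `helpers.py`. The sanity test fails once the release under development reaches the `version` named in the call, since the deprecated code path should have been removed by then.

```python
from ansible.utils.display import Display

display = Display()

# Hypothetical example of the kind of call the sanity check flags: when the
# Ansible version under development reaches 2.12, this line is reported
# because the deprecated behaviour is due for removal.
display.deprecated(
    "'static' is deprecated; use 'import_tasks' or 'include_tasks' instead",
    version='2.12',
)
```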
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
issue_url: https://github.com/ansible/ansible/issues/74135
pull_url: https://github.com/ansible/ansible/pull/74809
before_fix_sha: 27f61db86b69743181529dd6ee34951b244e075e
after_fix_sha: d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
report_datetime: 2021-04-05T20:33:57Z
language: python
commit_datetime: 2021-05-25T15:35:17Z
updated_file: test/sanity/ignore.txt
file_content:

```
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/play.yml shebang
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/my_test.py shebang # example module but not in a normal module location
examples/scripts/my_test_facts.py shebang # example module but not in a normal module location
examples/scripts/my_test_info.py shebang # example module but not in a normal module location
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py pylint:ansible-deprecated-version
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name
lib/ansible/module_utils/compat/selinux.py import-2.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.5!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:blacklisted-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:blacklisted-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:blacklisted-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:blacklisted-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/uri.py pylint:blacklisted-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:blacklisted-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:ansible-deprecated-version
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/plugins/action/__init__.py pylint:ansible-deprecated-version
lib/ansible/plugins/action/async_status.py pylint:ansible-deprecated-version
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/inventory/script.py pylint:ansible-deprecated-version
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/json_cleanup/library/bad_json shebang
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/aws/core.py pylint:property-with-parameters
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/cloud.py pylint:isinstance-second-argument-not-valid-type
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/database.py future-import-boilerplate
test/support/integration/plugins/module_utils/database.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate
test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name
test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/modules/test_apt.py pylint:blacklisted-name
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
```

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 74135
title: helpers contains deprecated call to be removed in 2.12
body:
##### SUMMARY
helpers contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
issue_url: https://github.com/ansible/ansible/issues/74135
pull_url: https://github.com/ansible/ansible/pull/74809
before_fix_sha: 27f61db86b69743181529dd6ee34951b244e075e
after_fix_sha: d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
report_datetime: 2021-04-05T20:33:57Z
language: python
commit_datetime: 2021-05-25T15:35:17Z
updated_file: test/units/playbook/test_helpers.py
file_content:

```python
# (c) 2016, Adrian Likins <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from units.compat import unittest
from units.compat.mock import MagicMock
from units.mock.loader import DictDataLoader
from ansible import errors
from ansible.playbook.block import Block
from ansible.playbook.handler import Handler
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role.include import RoleInclude
from ansible.playbook import helpers
class MixinForMocks(object):
def _setup(self):
# This is not a very good mixin, lots of side effects
self.fake_loader = DictDataLoader({'include_test.yml': "",
'other_include_test.yml': ""})
self.mock_tqm = MagicMock(name='MockTaskQueueManager')
self.mock_play = MagicMock(name='MockPlay')
self.mock_play._attributes = []
self.mock_play.collections = None
self.mock_iterator = MagicMock(name='MockIterator')
self.mock_iterator._play = self.mock_play
self.mock_inventory = MagicMock(name='MockInventory')
self.mock_inventory._hosts_cache = dict()
def _get_host(host_name):
return None
self.mock_inventory.get_host.side_effect = _get_host
# TODO: can we use a real VariableManager?
self.mock_variable_manager = MagicMock(name='MockVariableManager')
self.mock_variable_manager.get_vars.return_value = dict()
self.mock_block = MagicMock(name='MockBlock')
# On macOS /etc is actually /private/etc, tests fail when performing literal /etc checks
self.fake_role_loader = DictDataLoader({os.path.join(os.path.realpath("/etc"), "ansible/roles/bogus_role/tasks/main.yml"): """
- shell: echo 'hello world'
"""})
self._test_data_path = os.path.dirname(__file__)
self.fake_include_loader = DictDataLoader({"/dev/null/includes/test_include.yml": """
- include: other_test_include.yml
- shell: echo 'hello world'
""",
"/dev/null/includes/static_test_include.yml": """
- include: other_test_include.yml
- shell: echo 'hello static world'
""",
"/dev/null/includes/other_test_include.yml": """
- debug:
msg: other_test_include_debug
"""})
class TestLoadListOfTasks(unittest.TestCase, MixinForMocks):
def setUp(self):
self._setup()
def _assert_is_task_list(self, results):
for result in results:
self.assertIsInstance(result, Task)
def _assert_is_task_list_or_blocks(self, results):
self.assertIsInstance(results, list)
for result in results:
self.assertIsInstance(result, (Task, Block))
def test_ds_not_list(self):
ds = {}
self.assertRaises(AssertionError, helpers.load_list_of_tasks,
ds, self.mock_play, block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None)
def test_ds_not_dict(self):
ds = [[]]
self.assertRaises(AssertionError, helpers.load_list_of_tasks,
ds, self.mock_play, block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None)
def test_empty_task(self):
ds = [{}]
self.assertRaisesRegexp(errors.AnsibleParserError,
"no module/action detected in task",
helpers.load_list_of_tasks,
ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
def test_empty_task_use_handlers(self):
ds = [{}]
self.assertRaisesRegexp(errors.AnsibleParserError,
"no module/action detected in task.",
helpers.load_list_of_tasks,
ds,
use_handlers=True,
play=self.mock_play,
variable_manager=self.mock_variable_manager,
loader=self.fake_loader)
def test_one_bogus_block(self):
ds = [{'block': None}]
self.assertRaisesRegexp(errors.AnsibleParserError,
"A malformed block was encountered",
helpers.load_list_of_tasks,
ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
def test_unknown_action(self):
action_name = 'foo_test_unknown_action'
ds = [{'action': action_name}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self._assert_is_task_list_or_blocks(res)
self.assertEqual(res[0].action, action_name)
def test_block_unknown_action(self):
action_name = 'foo_test_block_unknown_action'
ds = [{
'block': [{'action': action_name}]
}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self._assert_default_block(res[0])
def _assert_default_block(self, block):
# the expected defaults
self.assertIsInstance(block.block, list)
self.assertEqual(len(block.block), 1)
self.assertIsInstance(block.rescue, list)
self.assertEqual(len(block.rescue), 0)
self.assertIsInstance(block.always, list)
self.assertEqual(len(block.always), 0)
def test_block_unknown_action_use_handlers(self):
ds = [{
'block': [{'action': 'foo_test_block_unknown_action'}]
}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play, use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self._assert_default_block(res[0])
def test_one_bogus_block_use_handlers(self):
ds = [{'block': True}]
self.assertRaisesRegexp(errors.AnsibleParserError,
"A malformed block was encountered",
helpers.load_list_of_tasks,
ds, play=self.mock_play, use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
def test_one_bogus_include(self):
ds = [{'include': 'somefile.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self.assertIsInstance(res, list)
self.assertEqual(len(res), 0)
def test_one_bogus_include_use_handlers(self):
ds = [{'include': 'somefile.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play, use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self.assertIsInstance(res, list)
self.assertEqual(len(res), 0)
def test_one_bogus_include_static(self):
ds = [{'include': 'somefile.yml',
'static': 'true'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self.assertIsInstance(res, list)
self.assertEqual(len(res), 0)
def test_one_include(self):
ds = [{'include': '/dev/null/includes/other_test_include.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self.assertEqual(len(res), 1)
self._assert_is_task_list_or_blocks(res)
def test_one_parent_include(self):
ds = [{'include': '/dev/null/includes/test_include.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self.assertIsInstance(res[0]._parent, TaskInclude)
    # TODO/FIXME: do this the non-deprecated way
def test_one_include_tags(self):
ds = [{'include': '/dev/null/includes/other_test_include.yml',
'tags': ['test_one_include_tags_tag1', 'and_another_tagB']
}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self.assertIn('test_one_include_tags_tag1', res[0].tags)
self.assertIn('and_another_tagB', res[0].tags)
    # TODO/FIXME: do this the non-deprecated way
def test_one_parent_include_tags(self):
ds = [{'include': '/dev/null/includes/test_include.yml',
# 'vars': {'tags': ['test_one_parent_include_tags_tag1', 'and_another_tag2']}
'tags': ['test_one_parent_include_tags_tag1', 'and_another_tag2']
}
]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self.assertIn('test_one_parent_include_tags_tag1', res[0].tags)
self.assertIn('and_another_tag2', res[0].tags)
# It would be useful to be able to tell what kind of deprecation we encountered and where we encountered it.
def test_one_include_tags_deprecated_mixed(self):
ds = [{'include': "/dev/null/includes/other_test_include.yml",
'vars': {'tags': "['tag_on_include1', 'tag_on_include2']"},
'tags': 'mixed_tag1, mixed_tag2'
}]
self.assertRaisesRegexp(errors.AnsibleParserError, 'Mixing styles',
helpers.load_list_of_tasks,
ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
def test_one_include_tags_deprecated_include(self):
ds = [{'include': '/dev/null/includes/other_test_include.yml',
'vars': {'tags': ['include_tag1_deprecated', 'and_another_tagB_deprecated']}
}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self.assertIn('include_tag1_deprecated', res[0].tags)
self.assertIn('and_another_tagB_deprecated', res[0].tags)
def test_one_include_use_handlers(self):
ds = [{'include': '/dev/null/includes/other_test_include.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Handler)
def test_one_parent_include_use_handlers(self):
ds = [{'include': '/dev/null/includes/test_include.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Handler)
# default for Handler
self.assertEqual(res[0].listen, [])
    # TODO/FIXME: this doesn't seem right
# figure out how to get the non-static errors to be raised, this seems to just ignore everything
def test_one_include_not_static(self):
ds = [{
'include': '/dev/null/includes/static_test_include.yml',
'static': False
}]
# a_block = Block()
ti_ds = {'include': '/dev/null/includes/ssdftatic_test_include.yml'}
a_task_include = TaskInclude()
ti = a_task_include.load(ti_ds)
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
block=ti,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Task)
self.assertEqual(res[0].args['_raw_params'], '/dev/null/includes/static_test_include.yml')
    # TODO/FIXME: These two get stuck trying to make a mock_block into a TaskInclude
# def test_one_include(self):
# ds = [{'include': 'other_test_include.yml'}]
# res = helpers.load_list_of_tasks(ds, play=self.mock_play,
# block=self.mock_block,
# variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
# print(res)
# def test_one_parent_include(self):
# ds = [{'include': 'test_include.yml'}]
# res = helpers.load_list_of_tasks(ds, play=self.mock_play,
# block=self.mock_block,
# variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
# print(res)
def test_one_bogus_include_role(self):
ds = [{'include_role': {'name': 'bogus_role'}, 'collections': []}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
block=self.mock_block,
variable_manager=self.mock_variable_manager, loader=self.fake_role_loader)
self.assertEqual(len(res), 1)
self._assert_is_task_list_or_blocks(res)
def test_one_bogus_include_role_use_handlers(self):
ds = [{'include_role': {'name': 'bogus_role'}, 'collections': []}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play, use_handlers=True,
block=self.mock_block,
variable_manager=self.mock_variable_manager,
loader=self.fake_role_loader)
self.assertEqual(len(res), 1)
self._assert_is_task_list_or_blocks(res)
class TestLoadListOfRoles(unittest.TestCase, MixinForMocks):
def setUp(self):
self._setup()
def test_ds_not_list(self):
ds = {}
self.assertRaises(AssertionError, helpers.load_list_of_roles,
ds, self.mock_play)
def test_empty_role(self):
ds = [{}]
self.assertRaisesRegexp(errors.AnsibleError,
"role definitions must contain a role name",
helpers.load_list_of_roles,
ds, self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_role_loader)
def test_empty_role_just_name(self):
ds = [{'name': 'bogus_role'}]
res = helpers.load_list_of_roles(ds, self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_role_loader)
self.assertIsInstance(res, list)
for r in res:
self.assertIsInstance(r, RoleInclude)
def test_block_unknown_action(self):
ds = [{
'block': [{'action': 'foo_test_block_unknown_action'}]
}]
ds = [{'name': 'bogus_role'}]
res = helpers.load_list_of_roles(ds, self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_role_loader)
self.assertIsInstance(res, list)
for r in res:
self.assertIsInstance(r, RoleInclude)
class TestLoadListOfBlocks(unittest.TestCase, MixinForMocks):
def setUp(self):
self._setup()
def test_ds_not_list(self):
ds = {}
mock_play = MagicMock(name='MockPlay')
self.assertRaises(AssertionError, helpers.load_list_of_blocks,
ds, mock_play, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None)
def test_empty_block(self):
ds = [{}]
mock_play = MagicMock(name='MockPlay')
self.assertRaisesRegexp(errors.AnsibleParserError,
"no module/action detected in task",
helpers.load_list_of_blocks,
ds, mock_play,
parent_block=None,
role=None,
task_include=None,
use_handlers=False,
variable_manager=None,
loader=None)
def test_block_unknown_action(self):
ds = [{'action': 'foo', 'collections': []}]
mock_play = MagicMock(name='MockPlay')
res = helpers.load_list_of_blocks(ds, mock_play, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None,
loader=None)
self.assertIsInstance(res, list)
for block in res:
self.assertIsInstance(block, Block)
```

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 74143
title: script contains deprecated call to be removed in 2.12
body:
##### SUMMARY
script contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/plugins/inventory/script.py:97:12: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/plugins/inventory/script.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
issue_url: https://github.com/ansible/ansible/issues/74143
pull_url: https://github.com/ansible/ansible/pull/74813
before_fix_sha: d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
after_fix_sha: df5ce3e6720445bc567fd9bc41bf9f5114288369
report_datetime: 2021-04-05T20:34:06Z
language: python
commit_datetime: 2021-05-25T15:35:46Z
updated_file: changelogs/fragments/74143-remove-script-cache.yml
file_content: (empty)

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 74143
title: script contains deprecated call to be removed in 2.12
body:
##### SUMMARY
script contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/plugins/inventory/script.py:97:12: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/plugins/inventory/script.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
issue_url: https://github.com/ansible/ansible/issues/74143
pull_url: https://github.com/ansible/ansible/pull/74813
before_fix_sha: d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
after_fix_sha: df5ce3e6720445bc567fd9bc41bf9f5114288369
report_datetime: 2021-04-05T20:34:06Z
language: python
commit_datetime: 2021-05-25T15:35:46Z
updated_file: lib/ansible/plugins/inventory/script.py
file_content:

```python
# Copyright (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: script
version_added: "2.4"
short_description: Executes an inventory script that returns JSON
options:
cache:
deprecated:
why: This option has never been in use. External scripts must implement their own caching.
version: "2.12"
description:
- This option has no effect. The plugin will not cache results because external inventory scripts
are responsible for their own caching. This option will be removed in 2.12.
ini:
- section: inventory_plugin_script
key: cache
env:
- name: ANSIBLE_INVENTORY_PLUGIN_SCRIPT_CACHE
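            # Hypothetical illustration of how this (now inert) option used to be
            # set -- in ansible.cfg:
            #   [inventory_plugin_script]
            #   cache = yes
            # or via ANSIBLE_INVENTORY_PLUGIN_SCRIPT_CACHE=yes; either merely
            # triggers the deprecation warning emitted in parse() below.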
always_show_stderr:
description: Toggle display of stderr even when script was successful
version_added: "2.5.1"
default: True
type: boolean
ini:
- section: inventory_plugin_script
key: always_show_stderr
env:
- name: ANSIBLE_INVENTORY_PLUGIN_SCRIPT_STDERR
description:
- The source provided must be an executable that returns Ansible inventory JSON
- The source must accept C(--list) and C(--host <hostname>) as arguments.
C(--host) will only be used if no C(_meta) key is present.
This is a performance optimization as the script would be called per host otherwise.
notes:
- Whitelisted in configuration by default.
- The plugin does not cache results because external inventory scripts are responsible for their own caching.
'''
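# For illustration, a minimal (hypothetical) inventory script satisfying the
# contract described in DOCUMENTATION above -- file name and data are made up:
#
#   #!/usr/bin/env python
#   import json
#   import sys
#
#   INVENTORY = {
#       'web': {'hosts': ['host1'], 'vars': {'http_port': 80}},
#       # '_meta.hostvars' is present, so the plugin never needs to invoke
#       # the script again with '--host <hostname>'.
#       '_meta': {'hostvars': {'host1': {'ansible_host': '10.0.0.1'}}},
#   }
#
#   if '--list' in sys.argv:
#       print(json.dumps(INVENTORY))
#   elif '--host' in sys.argv:
#       print(json.dumps({}))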
import os
import subprocess
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.module_utils.basic import json_dict_bytes_to_unicode
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import Mapping
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable
from ansible.utils.display import Display
display = Display()
class InventoryModule(BaseInventoryPlugin, Cacheable):
''' Host inventory parser for ansible using external inventory scripts. '''
NAME = 'script'
def __init__(self):
super(InventoryModule, self).__init__()
self._hosts = set()
def verify_file(self, path):
''' Verify if file is usable by this plugin, base does minimal accessibility check '''
valid = super(InventoryModule, self).verify_file(path)
if valid:
# not only accessible, file must be executable and/or have shebang
shebang_present = False
try:
with open(path, 'rb') as inv_file:
initial_chars = inv_file.read(2)
if initial_chars.startswith(b'#!'):
shebang_present = True
except Exception:
pass
if not os.access(path, os.X_OK) and not shebang_present:
valid = False
return valid
def parse(self, inventory, loader, path, cache=None):
super(InventoryModule, self).parse(inventory, loader, path)
self.set_options()
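        # The deprecated() call below is the one the ansible-deprecated-version
        # sanity check flags (script.py:97 in the issue above); the linked PR
        # removes the unused 'cache' option together with this warning.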
if self.get_option('cache') is not None:
display.deprecated(
msg="The 'cache' option is deprecated for the script inventory plugin. "
"External scripts implement their own caching and this option has never been used",
version="2.12", collection_name='ansible.builtin'
)
# Support inventory scripts that are not prefixed with some
# path information but happen to be in the current working
# directory when '.' is not in PATH.
cmd = [path, "--list"]
try:
try:
sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleParserError("problem running %s (%s)" % (' '.join(cmd), to_native(e)))
(stdout, stderr) = sp.communicate()
path = to_native(path)
err = to_native(stderr or "")
if err and not err.endswith('\n'):
err += '\n'
if sp.returncode != 0:
raise AnsibleError("Inventory script (%s) had an execution error: %s " % (path, err))
# make sure script output is unicode so that json loader will output unicode strings itself
try:
data = to_text(stdout, errors="strict")
except Exception as e:
raise AnsibleError("Inventory {0} contained characters that cannot be interpreted as UTF-8: {1}".format(path, to_native(e)))
try:
processed = self.loader.load(data, json_only=True)
except Exception as e:
raise AnsibleError("failed to parse executable inventory script results from {0}: {1}\n{2}".format(path, to_native(e), err))
# if no other errors happened and you want to force displaying stderr, do so now
if stderr and self.get_option('always_show_stderr'):
self.display.error(msg=to_text(err))
if not isinstance(processed, Mapping):
raise AnsibleError("failed to parse executable inventory script results from {0}: needs to be a json dict\n{1}".format(path, err))
group = None
data_from_meta = None
# A "_meta" subelement may contain a variable "hostvars" which contains a hash for each host
# if this "hostvars" exists at all then do not call --host for each # host.
# This is for efficiency and scripts should still return data
# if called with --host for backwards compat with 1.2 and earlier.
for (group, gdata) in processed.items():
if group == '_meta':
if 'hostvars' in gdata:
data_from_meta = gdata['hostvars']
else:
self._parse_group(group, gdata)
for host in self._hosts:
got = {}
if data_from_meta is None:
got = self.get_host_variables(path, host)
else:
try:
got = data_from_meta.get(host, {})
except AttributeError as e:
raise AnsibleError("Improperly formatted host information for %s: %s" % (host, to_native(e)), orig_exc=e)
self._populate_host_vars([host], got)
except Exception as e:
raise AnsibleParserError(to_native(e))
def _parse_group(self, group, data):
group = self.inventory.add_group(group)
if not isinstance(data, dict):
data = {'hosts': data}
        # if data has none of those subkeys it is the simplified syntax: the group
        # name doubles as a host and the mapping holds its vars, e.g.
        # {'web': {'http_port': 80}} -> hosts=['web'], vars={'http_port': 80}
elif not any(k in data for k in ('hosts', 'vars', 'children')):
data = {'hosts': [group], 'vars': data}
if 'hosts' in data:
if not isinstance(data['hosts'], list):
raise AnsibleError("You defined a group '%s' with bad data for the host list:\n %s" % (group, data))
for hostname in data['hosts']:
self._hosts.add(hostname)
self.inventory.add_host(hostname, group)
if 'vars' in data:
if not isinstance(data['vars'], dict):
raise AnsibleError("You defined a group '%s' with bad data for variables:\n %s" % (group, data))
for k, v in iteritems(data['vars']):
self.inventory.set_variable(group, k, v)
if group != '_meta' and isinstance(data, dict) and 'children' in data:
for child_name in data['children']:
child_name = self.inventory.add_group(child_name)
self.inventory.add_child(group, child_name)
def get_host_variables(self, path, host):
""" Runs <script> --host <hostname>, to determine additional host variables """
cmd = [path, "--host", host]
try:
sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("problem running %s (%s)" % (' '.join(cmd), e))
(out, err) = sp.communicate()
if out.strip() == '':
return {}
try:
return json_dict_bytes_to_unicode(self.loader.load(out, file_name=path))
except ValueError:
raise AnsibleError("could not parse post variable response: %s, %s" % (cmd, out))
```

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 74143
title: script contains deprecated call to be removed in 2.12
body:
##### SUMMARY
script contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/plugins/inventory/script.py:97:12: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/plugins/inventory/script.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
issue_url: https://github.com/ansible/ansible/issues/74143
pull_url: https://github.com/ansible/ansible/pull/74813
before_fix_sha: d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
after_fix_sha: df5ce3e6720445bc567fd9bc41bf9f5114288369
report_datetime: 2021-04-05T20:34:06Z
language: python
commit_datetime: 2021-05-25T15:35:46Z
updated_file: test/sanity/ignore.txt
file_content:

```
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/play.yml shebang
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/my_test.py shebang # example module but not in a normal module location
examples/scripts/my_test_facts.py shebang # example module but not in a normal module location
examples/scripts/my_test_info.py shebang # example module but not in a normal module location
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py pylint:ansible-deprecated-version
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name
lib/ansible/module_utils/compat/selinux.py import-2.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.5!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:blacklisted-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:blacklisted-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:blacklisted-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:blacklisted-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/uri.py pylint:blacklisted-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:blacklisted-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/plugins/action/__init__.py pylint:ansible-deprecated-version
lib/ansible/plugins/action/async_status.py pylint:ansible-deprecated-version
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/inventory/script.py pylint:ansible-deprecated-version
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/json_cleanup/library/bad_json shebang
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/aws/core.py pylint:property-with-parameters
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/cloud.py pylint:isinstance-second-argument-not-valid-type
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/database.py future-import-boilerplate
test/support/integration/plugins/module_utils/database.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate
test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name
test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/modules/test_apt.py pylint:blacklisted-name
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,783 |
AnsibleModule.run_command intermittently raises KeyError due to thread-unsafe os.environ mutation on Python 2 targets
|
### Summary
Within `AnsibleModule.run_command()`, prior to calling `subprocess.Popen`, it uses `os.environ.get()` to read some environment variables:
https://github.com/ansible/ansible/blob/v2.11.0/lib/ansible/module_utils/basic.py#L1947
After the `Popen` call, it deletes the environment variables that were added, using `del os.environ[...]`:
https://github.com/ansible/ansible/blob/v2.11.0/lib/ansible/module_utils/basic.py#L2091
Unfortunately, this is not thread-safe when Ansible runs against Python 2 targets (e.g. CentOS 7). The Python 2 standard library implements `.get()` non-atomically:
```python
def get(self, key, failobj=None):
if key not in self:
return failobj
return self[key]
```
This surfaces in Ansible primarily during fact collection in `module_utils/facts/hardware/linux.py`, where several threads in a `multiprocessing.pool.ThreadPool` call `run_command()` to collect mount and block device information. When the race is hit, a `KeyError` is raised, breaking mount point fact gathering and leaving those facts unavailable.
In our environment, when running against 1000+ CentOS 7 hosts, this race condition will be intermittently hit on only 1-2 hosts, making this difficult to troubleshoot. Because it's very hard to reproduce with fewer hosts, I created a script to demonstrate how the `os.environ` mutation is problematic: https://gist.github.com/agunnerson-ibm/b7a956d05095c5fad885295e2e31c9a4
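For anyone who wants to see the failure mode without a large fleet, here is a minimal sketch (a simplification for illustration, not the gist itself) that spells out Python 2's check-then-read `get()` explicitly, so the same window is visible on any interpreter:
```python
import os
import threading

stop = threading.Event()
caught = []

def mutator():
    # Mimics run_command(): temporarily export a variable, then remove it.
    while not stop.is_set():
        os.environ['LC_ALL'] = 'C'
        os.environ.pop('LC_ALL', None)

def reader():
    # Mimics Python 2's UserDict.get(): a membership test followed by a
    # separate item access -- the window in which the race occurs.
    try:
        while not stop.is_set():
            if 'LC_ALL' in os.environ:
                os.environ['LC_ALL']
    except KeyError:
        caught.append('LC_ALL')
    finally:
        stop.set()

threads = [threading.Thread(target=mutator), threading.Thread(target=reader)]
for t in threads:
    t.start()
stop.wait(5)  # give the race a few seconds to surface, then shut down
stop.set()
for t in threads:
    t.join()
print('KeyError observed:', bool(caught))
```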
Is it possible to refactor `run_command()` so that it makes a copy of `os.environ`, mutates the copy, and then passes it to `subprocess.Popen` via the `env=` kwarg? I believe that would fix this race condition and would remove the current implementation's need to back up and restore environment variables.
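To make the suggestion concrete, a rough sketch of that refactor follows (the function and parameter names below are illustrative only, not the real `AnsibleModule.run_command()` signature):
```python
import os
import subprocess

def run_command_sketch(args, environ_update=None):
    # Snapshot the parent environment instead of mutating it in place.
    env = os.environ.copy()
    if environ_update:
        # e.g. {'LANG': 'C', 'LC_ALL': 'C', 'LC_MESSAGES': 'C'}
        env.update(environ_update)
    # Only the throwaway copy is handed to the child; os.environ is never
    # touched, so nothing needs to be backed up or restored afterwards.
    proc = subprocess.Popen(args, env=env,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return proc.returncode, out, err
```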
### Issue Type
Bug Report
### Component Name
module_utils/basic
### Ansible Version
```console
ansible [core 2.11.0]
config file = None
configured module search path = ['/home/jenkins/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/jenkins/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Feb 29 2020, 17:03:31) [GCC 9.2.0]
jinja version = 2.11.1
libyaml = False
```
### Configuration
```console
(none)
```
### OS / Environment
Controller:
* Alpine 3.11.2
* Python 3.8
Target hosts:
* CentOS 7.9
* Python 2.7
### Steps to Reproduce
* Run an empty playbook (in debug mode) that only gathers facts against a very large number of hosts.
* Intermittently, on a small number of hosts, mount point fact gathering will fail due to `KeyError: 'LC_ALL'`
### Expected Results
Fact gathering should not fail due to multi-threaded `os.environ` mutation
### Actual Results
Mount point fact gathering fails and mount point facts are not available
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74783
|
https://github.com/ansible/ansible/pull/74791
|
df5ce3e6720445bc567fd9bc41bf9f5114288369
|
98138584b7d0a3edd14a80650e2d169396fa51cc
| 2021-05-20T17:42:04Z |
python
| 2021-05-25T16:00:47Z |
changelogs/fragments/74783-run-command-thread-safety.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,783 |
AnsibleModule.run_command intermittently raises KeyError due to thread-unsafe os.environ mutation on Python 2 targets
|
### Summary
Within `AnsibleModule.run_command()`, prior to calling `subprocess.Popen`, it uses `os.environ.get()` to read some environment variables:
https://github.com/ansible/ansible/blob/v2.11.0/lib/ansible/module_utils/basic.py#L1947
After the `Popen` call, it deletes the environment variables that were added, using `del os.environ[...]`:
https://github.com/ansible/ansible/blob/v2.11.0/lib/ansible/module_utils/basic.py#L2091
Unfortunately, this is not thread-safe when Ansible runs against Python 2 targets (e.g. CentOS 7). The Python 2 standard library implements `.get()` non-atomically:
```python
def get(self, key, failobj=None):
if key not in self:
return failobj
return self[key]
```
This surfaces in Ansible primarily during fact collection in `module_utils/facts/hardware/linux.py`, where several threads in a `multiprocessing.pool.ThreadPool` call `run_command()` to collect mount and block device information. When the race is hit, a `KeyError` is raised, breaking mount point fact gathering and leaving those facts unavailable.
In our environment, when running against 1000+ CentOS 7 hosts, this race condition will be intermittently hit on only 1-2 hosts, making this difficult to troubleshoot. Because it's very hard to reproduce with fewer hosts, I created a script to demonstrate how the `os.environ` mutation is problematic: https://gist.github.com/agunnerson-ibm/b7a956d05095c5fad885295e2e31c9a4
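For anyone who wants to see the failure mode without a large fleet, here is a minimal sketch (a simplification for illustration, not the gist itself) that spells out Python 2's check-then-read `get()` explicitly, so the same window is visible on any interpreter:
```python
import os
import threading

stop = threading.Event()
caught = []

def mutator():
    # Mimics run_command(): temporarily export a variable, then remove it.
    while not stop.is_set():
        os.environ['LC_ALL'] = 'C'
        os.environ.pop('LC_ALL', None)

def reader():
    # Mimics Python 2's UserDict.get(): a membership test followed by a
    # separate item access -- the window in which the race occurs.
    try:
        while not stop.is_set():
            if 'LC_ALL' in os.environ:
                os.environ['LC_ALL']
    except KeyError:
        caught.append('LC_ALL')
    finally:
        stop.set()

threads = [threading.Thread(target=mutator), threading.Thread(target=reader)]
for t in threads:
    t.start()
stop.wait(5)  # give the race a few seconds to surface, then shut down
stop.set()
for t in threads:
    t.join()
print('KeyError observed:', bool(caught))
```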
Is it possible to refactor `run_command()` so that it makes a copy of `os.environ`, mutates the copy, and then passes it to `subprocess.Popen` via the `env=` kwarg? I believe that would fix this race condition and would remove the current implementation's need to back up and restore environment variables.
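To make the suggestion concrete, a rough sketch of that refactor follows (the function and parameter names below are illustrative only, not the real `AnsibleModule.run_command()` signature):
```python
import os
import subprocess

def run_command_sketch(args, environ_update=None):
    # Snapshot the parent environment instead of mutating it in place.
    env = os.environ.copy()
    if environ_update:
        # e.g. {'LANG': 'C', 'LC_ALL': 'C', 'LC_MESSAGES': 'C'}
        env.update(environ_update)
    # Only the throwaway copy is handed to the child; os.environ is never
    # touched, so nothing needs to be backed up or restored afterwards.
    proc = subprocess.Popen(args, env=env,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return proc.returncode, out, err
```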
### Issue Type
Bug Report
### Component Name
module_utils/basic
### Ansible Version
```console
ansible [core 2.11.0]
config file = None
configured module search path = ['/home/jenkins/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/jenkins/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Feb 29 2020, 17:03:31) [GCC 9.2.0]
jinja version = 2.11.1
libyaml = False
```
### Configuration
```console
(none)
```
### OS / Environment
Controller:
* Alpine 3.11.2
* Python 3.8
Target hosts:
* CentOS 7.9
* Python 2.7
### Steps to Reproduce
* Run an empty playbook (in debug mode) that only gathers facts against a very large number of hosts.
* Intermittently, on a small number of hosts, mount point fact gathering will fail due to `KeyError: 'LC_ALL'`
### Expected Results
Fact gathering should not fail due to multi-threaded `os.environ` mutation
### Actual Results
Mount point fact gathering fails and mount point facts are not available
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74783
|
https://github.com/ansible/ansible/pull/74791
|
df5ce3e6720445bc567fd9bc41bf9f5114288369
|
98138584b7d0a3edd14a80650e2d169396fa51cc
| 2021-05-20T17:42:04Z |
python
| 2021-05-25T16:00:47Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
# Makes sure that systemd.journal has method sendv()
# Double check that journal has method sendv (some packages don't)
has_journal = hasattr(journal, 'sendv')
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import ansible.module_utils.compat.selinux as selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.arg_spec import ModuleArgumentSpecValidator
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
env_fallback,
remove_values,
sanitize_keys,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.errors import AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, UnsupportedError
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False, fallback=(env_fallback, ['ANSIBLE_UNSAFE_WRITES'])), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY26 = (2, 6) == sys.version_info[:2]
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "ansible-core requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
if _PY26:
deprecate(
'ansible-core 2.13 will require Python 2.7 or newer on the target. '
'Current version: %s' % ''.join(sys.version.splitlines()),
version='2.13',
)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
# Save parameter values that should never be logged
self.no_log_values = set()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._load_params()
self._set_internal_properties()
self.validator = ModuleArgumentSpecValidator(self.argument_spec,
self.mutually_exclusive,
self.required_together,
self.required_one_of,
self.required_if,
self.required_by,
)
self.validation_result = self.validator.validate(self.params)
self.params.update(self.validation_result.validated_parameters)
self.no_log_values.update(self.validation_result._no_log_values)
try:
error = self.validation_result.errors[0]
except IndexError:
error = None
# Fail for validation errors, even in check mode
if error:
msg = self.validation_result.errors.msg
if isinstance(error, UnsupportedError):
msg = "Unsupported parameters for ({name}) {kind}: {msg}".format(name=self._name, kind='module', msg=msg)
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
# This is for backwards compatibility only.
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not self.no_log:
self._log_invocation()
# selinux state caching
self._selinux_enabled = None
self._selinux_mls_enabled = None
self._selinux_initial_context = None
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
# and treat that the same as if version had been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
Allows to overwrite the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if self._selinux_mls_enabled is None:
self._selinux_mls_enabled = HAVE_SELINUX and selinux.is_selinux_mls_enabled() == 1
return self._selinux_mls_enabled
def selinux_enabled(self):
if self._selinux_enabled is None:
self._selinux_enabled = HAVE_SELINUX and selinux.is_selinux_enabled() == 1
return self._selinux_enabled
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
if self._selinux_initial_context is None:
self._selinux_initial_context = [None, None, None]
if self.selinux_mls_enabled():
self._selinux_initial_context.append(None)
return self._selinux_initial_context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
'''
Takes a path and returns its mount point
:param path: a string type with a filesystem path
:returns: the path to the mount point as a text type
'''
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
path_stat = os.lstat(b_path)
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (
errno.EACCES, # can't access symlink in sticky directory (stat)
errno.EPERM, # can't set mode on symbolic links (chmod)
errno.EROFS, # can't set mode on read-only filesystem
):
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) where it's all about is the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two list of equal length, one contains the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# https://docs.python.org/3/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _set_internal_properties(self, argument_spec=None, module_parameters=None):
if argument_spec is None:
argument_spec = self.argument_spec
if module_parameters is None:
module_parameters = self.params
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in module_parameters:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
else:
setattr(self, PASS_VARS[k][0], module_parameters[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
name, value = (arg.upper(), str(log_args[arg]))
if name in (
'PRIORITY', 'MESSAGE', 'MESSAGE_ID',
'CODE_FILE', 'CODE_LINE', 'CODE_FUNC',
'SYSLOG_FACILITY', 'SYSLOG_IDENTIFIER',
'SYSLOG_PID',
):
name = "_%s" % name
journal_args.append((name, value))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
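
        Example (an illustrative sketch only; assumes ``module`` is this
        AnsibleModule instance and ``git`` is installed on the target)::

            git_path = module.get_bin_path('git', required=True)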
'''
        bin_path = None
        try:
            bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
        except ValueError as e:
            if required:
                self.fail_json(msg=to_text(e))

        return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file, return True or False on success or failure'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
        '''atomically move src to dest, copying attributes from dest; returns true on success.
        It uses os.rename to ensure this, as it is an atomic operation; the rest of the function
        works around limitations and corner cases and ensures the selinux context is saved if possible.'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
                # only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
                # 26 (text file busy) and 16 (device busy), which happen on vagrant synced folders and other 'exotic' non-posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp', dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp().
# Traceback would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
                error_msg = ('Failed creating tmp file for atomic move. This usually happens when using a Python3 version older than Python3.5. '
                             'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
                            self.fail_json(msg='Unable to move %s to %s, failed final rename from %s: %s' %
                                           (src, dest, b_tmp_dest_name, to_native(e)), exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True):
'''
Execute a command, returns rc, stdout, and stderr.
        :arg args: is the command to run
            * If args is a list, the command will be run with shell=False.
            * If args is a string and use_unsafe_shell=False, it will be split into a list and run with shell=False.
            * If args is a string and use_unsafe_shell=True, it runs with shell=True.
        :kw check_rc: Whether to call fail_json in case of a non-zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
        :kw before_communicate_callback: This function will be called
            after the ``Popen`` object is created
            but before communicating with the process.
            (The ``Popen`` object will be passed to the callback as its first argument.)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
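
        Example (an illustrative sketch only; assumes ``module`` is this
        AnsibleModule instance and ``/bin/ls`` exists on the target)::

            rc, out, err = module.run_command(['/bin/ls', '-l', '/tmp'], check_rc=True)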
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
path = os.environ.get('PATH', '')
old_env_vals['PATH'] = path
if path:
os.environ['PATH'] = "%s:%s" % (path_prefix, path)
else:
os.environ['PATH'] = path_prefix
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd:
if os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not chdir to %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
try:
selector = selectors.DefaultSelector()
except (IOError, OSError):
# Failed to detect default selector for the given platform
# Select PollSelector which is supported by major platforms
selector = selectors.PollSelector()
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, stdout=b'', stderr=b'', msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, stdout=b'', stderr=b'', msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
            # 1032 == F_GETPIPE_SZ
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,783 |
AnsibleModule.run_command intermittently raises KeyError due to thread-unsafe os.environ mutation on Python 2 targets
|
### Summary
Within `AnsibleModule.run_command()`, prior to calling `subprocess.Popen`, it uses `os.environ.get()` to read some environment variables:
https://github.com/ansible/ansible/blob/v2.11.0/lib/ansible/module_utils/basic.py#L1947
After the `Popen` call, it deletes some environment variables that were added using `del os.environ[]`:
https://github.com/ansible/ansible/blob/v2.11.0/lib/ansible/module_utils/basic.py#L2091
Unfortunately, this is not thread-safe when Ansible hits Python 2 targets (e.g. CentOS 7). The Python 2 standard library implements `.get()` non-atomically:
```python
def get(self, key, failobj=None):
if key not in self:
return failobj
return self[key]
```
This surfaces in Ansible primarily during the fact collection process in `module_utils/facts/hardware/linux.py`, where several threads in a `multiprocessing.pool.ThreadPool` call `run_process()` to collect block device information. When the race condition is hit, a `KeyError` is raised, breaking the mount point fact gathering and leaving the facts unavailable.
In our environment, when running against 1000+ CentOS 7 hosts, this race condition will be intermittently hit on only 1-2 hosts, making this difficult to troubleshoot. Because it's very hard to reproduce with fewer hosts, I created a script to demonstrate how the `os.environ` mutation is problematic: https://gist.github.com/agunnerson-ibm/b7a956d05095c5fad885295e2e31c9a4
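
A condensed version of the idea behind that script (names are illustrative, not taken from the gist):

```python
import os
from multiprocessing.pool import ThreadPool


def racer(_):
    # Emulates run_command()'s pattern: set a variable, read it, then delete it.
    os.environ['LC_ALL'] = 'C'
    os.environ.get('LC_ALL')  # non-atomic on Python 2: membership check, then lookup
    try:
        del os.environ['LC_ALL']  # loses the race if another thread deleted first
    except KeyError:
        print('race hit: LC_ALL already deleted by another thread')


ThreadPool(8).map(racer, range(100000))
```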
Is it possible to refactor `run_command()` so that it makes a copy of `os.environ`, mutates the copy, and then passes it to `subprocess.Popen` via the `env=` kwarg? I believe that would fix this race condition and would remove the current implementation's need to back up and restore environment variables.
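For illustration, a minimal sketch of that approach (a hypothetical helper with simplified argument handling, not the actual patch):

```python
import os
import subprocess


def run_with_private_env(args, environ_update=None, path_prefix=None):
    # Copy os.environ instead of mutating it, so concurrent threads never
    # observe (or delete from) a partially-updated shared environment.
    env = os.environ.copy()
    if environ_update:
        env.update(environ_update)
    if path_prefix:
        env['PATH'] = '%s:%s' % (path_prefix, env.get('PATH', ''))
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, env=env)
    stdout, stderr = proc.communicate()
    return proc.returncode, stdout, stderr
```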
### Issue Type
Bug Report
### Component Name
module_utils/basic
### Ansible Version
```console
ansible [core 2.11.0]
config file = None
configured module search path = ['/home/jenkins/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/jenkins/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Feb 29 2020, 17:03:31) [GCC 9.2.0]
jinja version = 2.11.1
libyaml = False
```
### Configuration
```console
(none)
```
### OS / Environment
Controller:
* Alpine 3.11.2
* Python 3.8
Target hosts:
* CentOS 7.9
* Python 2.7
### Steps to Reproduce
* Run an empty playbook (in debug mode) that only gathers facts against a very large number of hosts.
* Intermittently, on a small number of hosts, mount point fact gathering will fail due to `KeyError: 'LC_ALL'`
### Expected Results
Fact gathering should not fail due to multi-threaded `os.environ` mutation
### Actual Results
Mount point fact gathering fails and mount point facts are not available
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74783
|
https://github.com/ansible/ansible/pull/74791
|
df5ce3e6720445bc567fd9bc41bf9f5114288369
|
98138584b7d0a3edd14a80650e2d169396fa51cc
| 2021-05-20T17:42:04Z |
python
| 2021-05-25T16:00:47Z |
test/units/module_utils/basic/test_run_command.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
from itertools import product
from io import BytesIO
import pytest
from ansible.module_utils._text import to_native
from ansible.module_utils.six import PY2
from ansible.module_utils.compat import selectors
class OpenBytesIO(BytesIO):
"""BytesIO with dummy close() method
So that you can inspect the content after close() was called.
"""
def close(self):
pass
@pytest.fixture
def mock_os(mocker):
def mock_os_chdir(path):
if path == '/inaccessible':
raise OSError(errno.EPERM, "Permission denied: '/inaccessible'")
def mock_os_abspath(path):
if path.startswith('/'):
return path
else:
return os.getcwd.return_value + '/' + path
os = mocker.patch('ansible.module_utils.basic.os')
os.path.expandvars.side_effect = lambda x: x
os.path.expanduser.side_effect = lambda x: x
os.environ = {'PATH': '/bin'}
os.getcwd.return_value = '/home/foo'
os.path.isdir.return_value = True
os.chdir.side_effect = mock_os_chdir
os.path.abspath.side_effect = mock_os_abspath
yield os
class DummyFileObj():
def __init__(self, fileobj):
self.fileobj = fileobj
class SpecialBytesIO(BytesIO):
def __init__(self, *args, **kwargs):
fh = kwargs.pop('fh', None)
super(SpecialBytesIO, self).__init__(*args, **kwargs)
self.fh = fh
def fileno(self):
return self.fh
# We need to do this because some of our tests create a new value for stdout and stderr
# The new value is able to affect the string that is returned by the subprocess stdout and
# stderr but by the time the test gets it, it is too late to change the SpecialBytesIO that
# subprocess.Popen returns for stdout and stderr. If we could figure out how to change those as
# well, then we wouldn't need this.
def __eq__(self, other):
if id(self) == id(other) or self.fh == other.fileno():
return True
return False
class DummyKey:
def __init__(self, fileobj):
self.fileobj = fileobj
@pytest.fixture
def mock_subprocess(mocker):
class MockSelector(selectors.BaseSelector):
def __init__(self):
super(MockSelector, self).__init__()
self._file_objs = []
def register(self, fileobj, events, data=None):
self._file_objs.append(fileobj)
def unregister(self, fileobj):
self._file_objs.remove(fileobj)
def select(self, timeout=None):
ready = []
for file_obj in self._file_objs:
ready.append((DummyKey(subprocess._output[file_obj.fileno()]), selectors.EVENT_READ))
return ready
def get_map(self):
return self._file_objs
def close(self):
super(MockSelector, self).close()
self._file_objs = []
selectors.DefaultSelector = MockSelector
subprocess = mocker.patch('ansible.module_utils.basic.subprocess')
subprocess._output = {mocker.sentinel.stdout: SpecialBytesIO(b'', fh=mocker.sentinel.stdout),
mocker.sentinel.stderr: SpecialBytesIO(b'', fh=mocker.sentinel.stderr)}
cmd = mocker.MagicMock()
cmd.returncode = 0
cmd.stdin = OpenBytesIO()
cmd.stdout = subprocess._output[mocker.sentinel.stdout]
cmd.stderr = subprocess._output[mocker.sentinel.stderr]
subprocess.Popen.return_value = cmd
yield subprocess
@pytest.fixture()
def rc_am(mocker, am, mock_os, mock_subprocess):
am.fail_json = mocker.MagicMock(side_effect=SystemExit)
am._os = mock_os
am._subprocess = mock_subprocess
yield am
class TestRunCommandArgs:
# Format is command as passed to run_command, command to Popen as list, command to Popen as string
ARGS_DATA = (
(['/bin/ls', 'a', 'b', 'c'], [b'/bin/ls', b'a', b'b', b'c'], b'/bin/ls a b c'),
('/bin/ls a " b" "c "', [b'/bin/ls', b'a', b' b', b'c '], b'/bin/ls a " b" "c "'),
)
# pylint bug: https://github.com/PyCQA/pylint/issues/511
# pylint: disable=undefined-variable
@pytest.mark.parametrize('cmd, expected, shell, stdin',
((arg, cmd_str if sh else cmd_lst, sh, {})
for (arg, cmd_lst, cmd_str), sh in product(ARGS_DATA, (True, False))),
indirect=['stdin'])
def test_args(self, cmd, expected, shell, rc_am):
rc_am.run_command(cmd, use_unsafe_shell=shell)
assert rc_am._subprocess.Popen.called
args, kwargs = rc_am._subprocess.Popen.call_args
assert args == (expected, )
assert kwargs['shell'] == shell
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_tuple_as_args(self, rc_am):
with pytest.raises(SystemExit):
rc_am.run_command(('ls', '/'))
assert rc_am.fail_json.called
class TestRunCommandCwd:
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_cwd(self, mocker, rc_am):
rc_am._os.getcwd.return_value = '/old'
rc_am.run_command('/bin/ls', cwd='/new')
assert rc_am._os.chdir.mock_calls == [mocker.call(b'/new'), mocker.call('/old'), ]
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_cwd_relative_path(self, mocker, rc_am):
rc_am._os.getcwd.return_value = '/old'
rc_am.run_command('/bin/ls', cwd='sub-dir')
assert rc_am._os.chdir.mock_calls == [mocker.call(b'/old/sub-dir'), mocker.call('/old'), ]
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_cwd_not_a_dir(self, mocker, rc_am):
rc_am._os.getcwd.return_value = '/old'
rc_am._os.path.isdir.side_effect = lambda d: d != '/not-a-dir'
rc_am.run_command('/bin/ls', cwd='/not-a-dir')
assert rc_am._os.chdir.mock_calls == [mocker.call('/old'), ]
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_cwd_not_a_dir_noignore(self, rc_am):
rc_am._os.getcwd.return_value = '/old'
rc_am._os.path.isdir.side_effect = lambda d: d != '/not-a-dir'
with pytest.raises(SystemExit):
rc_am.run_command('/bin/ls', cwd='/not-a-dir', ignore_invalid_cwd=False)
assert rc_am.fail_json.called
class TestRunCommandPrompt:
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_prompt_bad_regex(self, rc_am):
with pytest.raises(SystemExit):
rc_am.run_command('foo', prompt_regex='[pP)assword:')
assert rc_am.fail_json.called
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_prompt_no_match(self, mocker, rc_am):
rc_am._os._cmd_out[mocker.sentinel.stdout] = BytesIO(b'hello')
(rc, _, _) = rc_am.run_command('foo', prompt_regex='[pP]assword:')
assert rc == 0
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_prompt_match_wo_data(self, mocker, rc_am):
rc_am._subprocess._output = {mocker.sentinel.stdout:
SpecialBytesIO(b'Authentication required!\nEnter password: ',
fh=mocker.sentinel.stdout),
mocker.sentinel.stderr:
SpecialBytesIO(b'', fh=mocker.sentinel.stderr)}
(rc, _, _) = rc_am.run_command('foo', prompt_regex=r'[pP]assword:', data=None)
assert rc == 257
class TestRunCommandRc:
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_check_rc_false(self, rc_am):
rc_am._subprocess.Popen.return_value.returncode = 1
(rc, _, _) = rc_am.run_command('/bin/false', check_rc=False)
assert rc == 1
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_check_rc_true(self, rc_am):
rc_am._subprocess.Popen.return_value.returncode = 1
with pytest.raises(SystemExit):
rc_am.run_command('/bin/false', check_rc=True)
assert rc_am.fail_json.called
args, kwargs = rc_am.fail_json.call_args
assert kwargs['rc'] == 1
class TestRunCommandOutput:
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_text_stdin(self, rc_am):
(rc, stdout, stderr) = rc_am.run_command('/bin/foo', data='hello world')
assert rc_am._subprocess.Popen.return_value.stdin.getvalue() == b'hello world\n'
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_ascii_stdout(self, mocker, rc_am):
rc_am._subprocess._output = {mocker.sentinel.stdout:
SpecialBytesIO(b'hello', fh=mocker.sentinel.stdout),
mocker.sentinel.stderr:
SpecialBytesIO(b'', fh=mocker.sentinel.stderr)}
(rc, stdout, stderr) = rc_am.run_command('/bin/cat hello.txt')
assert rc == 0
        # run_command returns native strings, so this is text on py3
        # and bytes on py2
assert stdout == 'hello'
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_utf8_output(self, mocker, rc_am):
rc_am._subprocess._output = {mocker.sentinel.stdout:
SpecialBytesIO(u'Žarn§'.encode('utf-8'),
fh=mocker.sentinel.stdout),
mocker.sentinel.stderr:
SpecialBytesIO(u'لرئيسية'.encode('utf-8'),
fh=mocker.sentinel.stderr)}
(rc, stdout, stderr) = rc_am.run_command('/bin/something_ugly')
assert rc == 0
        # run_command returns native strings, so this is text on py3
        # and bytes on py2
assert stdout == to_native(u'Žarn§')
assert stderr == to_native(u'لرئيسية')
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_run_command_fds(mocker, rc_am):
subprocess_mock = mocker.patch('ansible.module_utils.basic.subprocess')
subprocess_mock.Popen.side_effect = AssertionError
try:
rc_am.run_command('synchronize', pass_fds=(101, 42))
except SystemExit:
pass
if PY2:
assert subprocess_mock.Popen.call_args[1]['close_fds'] is False
assert 'pass_fds' not in subprocess_mock.Popen.call_args[1]
else:
assert subprocess_mock.Popen.call_args[1]['pass_fds'] == (101, 42)
assert subprocess_mock.Popen.call_args[1]['close_fds'] is True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,136 |
play_context contains deprecated call to be removed in 2.12
|
##### SUMMARY
play_context contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/playbook/play_context.py:325:8: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/play_context.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/74136
|
https://github.com/ansible/ansible/pull/74790
|
98138584b7d0a3edd14a80650e2d169396fa51cc
|
ce556da7a0e00f9c57150f18f1797aa7b23da68d
| 2021-04-05T20:33:58Z |
python
| 2021-05-25T16:01:22Z |
changelogs/fragments/74136-remove-playcontext-make-become-cmd.yml
|
bugfixes:
- PlayContext - Remove deprecated ``make_become_cmd`` (https://github.com/ansible/ansible/issues/74136)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,136 |
play_context contains deprecated call to be removed in 2.12
|
##### SUMMARY
play_context contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/playbook/play_context.py:325:8: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/play_context.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/74136
|
https://github.com/ansible/ansible/pull/74790
|
98138584b7d0a3edd14a80650e2d169396fa51cc
|
ce556da7a0e00f9c57150f18f1797aa7b23da68d
| 2021-04-05T20:33:58Z |
python
| 2021-05-25T16:01:22Z |
lib/ansible/playbook/play_context.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.module_utils.compat.paramiko import paramiko
from ansible.module_utils.six import iteritems
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.plugins import get_plugin_class
from ansible.utils.display import Display
from ansible.plugins.loader import get_shell_plugin
from ansible.utils.ssh_functions import check_for_controlpersist
display = Display()
__all__ = ['PlayContext']
TASK_ATTRIBUTE_OVERRIDES = (
'become',
'become_user',
'become_pass',
'become_method',
'become_flags',
'connection',
'docker_extra_args', # TODO: remove
'delegate_to',
'no_log',
'remote_user',
)
RESET_VARS = (
'ansible_connection',
'ansible_user',
'ansible_host',
'ansible_port',
# TODO: ???
'ansible_docker_extra_args',
'ansible_ssh_host',
'ansible_ssh_pass',
'ansible_ssh_port',
'ansible_ssh_user',
'ansible_ssh_private_key_file',
'ansible_ssh_pipelining',
'ansible_ssh_executable',
)
class PlayContext(Base):
'''
This class is used to consolidate the connection information for
hosts in a play and child tasks, where the task may override some
connection/authentication information.
'''
# base
_module_compression = FieldAttribute(isa='string', default=C.DEFAULT_MODULE_COMPRESSION)
_shell = FieldAttribute(isa='string')
_executable = FieldAttribute(isa='string', default=C.DEFAULT_EXECUTABLE)
# connection fields, some are inherited from Base:
# (connection, port, remote_user, environment, no_log)
_remote_addr = FieldAttribute(isa='string')
_password = FieldAttribute(isa='string')
_timeout = FieldAttribute(isa='int', default=C.DEFAULT_TIMEOUT)
_connection_user = FieldAttribute(isa='string')
_private_key_file = FieldAttribute(isa='string', default=C.DEFAULT_PRIVATE_KEY_FILE)
_pipelining = FieldAttribute(isa='bool', default=C.ANSIBLE_PIPELINING)
# networking modules
_network_os = FieldAttribute(isa='string')
# docker FIXME: remove these
_docker_extra_args = FieldAttribute(isa='string')
# ???
_connection_lockfd = FieldAttribute(isa='int')
# privilege escalation fields
_become = FieldAttribute(isa='bool')
_become_method = FieldAttribute(isa='string')
_become_user = FieldAttribute(isa='string')
_become_pass = FieldAttribute(isa='string')
_become_exe = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_EXE)
_become_flags = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_FLAGS)
_prompt = FieldAttribute(isa='string')
# general flags
_verbosity = FieldAttribute(isa='int', default=0)
_only_tags = FieldAttribute(isa='set', default=set)
_skip_tags = FieldAttribute(isa='set', default=set)
_start_at_task = FieldAttribute(isa='string')
_step = FieldAttribute(isa='bool', default=False)
# "PlayContext.force_handlers should not be used, the calling code should be using play itself instead"
_force_handlers = FieldAttribute(isa='bool', default=False)
def __init__(self, play=None, passwords=None, connection_lockfd=None):
# Note: play is really not optional. The only time it could be omitted is when we create
# a PlayContext just so we can invoke its deserialize method to load it from a serialized
# data source.
super(PlayContext, self).__init__()
if passwords is None:
passwords = {}
self.password = passwords.get('conn_pass', '')
self.become_pass = passwords.get('become_pass', '')
self._become_plugin = None
self.prompt = ''
self.success_key = ''
# a file descriptor to be used during locking operations
self.connection_lockfd = connection_lockfd
# set options before play to allow play to override them
if context.CLIARGS:
self.set_attributes_from_cli()
if play:
self.set_attributes_from_play(play)
def set_attributes_from_plugin(self, plugin):
# generic derived from connection plugin, temporary for backwards compat, in the end we should not set play_context properties
# get options for plugins
options = C.config.get_configuration_definitions(get_plugin_class(plugin), plugin._load_name)
for option in options:
if option:
flag = options[option].get('name')
if flag:
setattr(self, flag, plugin.get_option(flag))
def set_attributes_from_play(self, play):
self.force_handlers = play.force_handlers
def set_attributes_from_cli(self):
'''
Configures this connection information instance with data from
options specified by the user on the command line. These have a
lower precedence than those set on the play or host.
'''
if context.CLIARGS.get('timeout', False):
self.timeout = int(context.CLIARGS['timeout'])
# From the command line. These should probably be used directly by plugins instead
# For now, they are likely to be moved to FieldAttribute defaults
self.private_key_file = context.CLIARGS.get('private_key_file') # Else default
self.verbosity = context.CLIARGS.get('verbosity') # Else default
# Not every cli that uses PlayContext has these command line args so have a default
self.start_at_task = context.CLIARGS.get('start_at_task', None) # Else default
def set_task_and_variable_override(self, task, variables, templar):
'''
Sets attributes from the task if they are set, which will override
those from the play.
:arg task: the task object with the parameters that were set on it
:arg variables: variables from inventory
:arg templar: templar instance if templating variables is needed
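
        Example (an illustrative sketch; assumes ``task``, ``all_vars`` and
        ``templar`` already exist)::

            new_play_context = play_context.set_task_and_variable_override(task, all_vars, templar)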
'''
new_info = self.copy()
# loop through a subset of attributes on the task object and set
# connection fields based on their values
for attr in TASK_ATTRIBUTE_OVERRIDES:
if hasattr(task, attr):
attr_val = getattr(task, attr)
if attr_val is not None:
setattr(new_info, attr, attr_val)
# next, use the MAGIC_VARIABLE_MAPPING dictionary to update this
# connection info object with 'magic' variables from the variable list.
# If the value 'ansible_delegated_vars' is in the variables, it means
# we have a delegated-to host, so we check there first before looking
# at the variables in general
if task.delegate_to is not None:
# In the case of a loop, the delegated_to host may have been
# templated based on the loop variable, so we try and locate
# the host name in the delegated variable dictionary here
delegated_host_name = templar.template(task.delegate_to)
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(delegated_host_name, dict())
delegated_transport = C.DEFAULT_TRANSPORT
for transport_var in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if transport_var in delegated_vars:
delegated_transport = delegated_vars[transport_var]
break
# make sure this delegated_to host has something set for its remote
# address, otherwise we default to connecting to it by name. This
# may happen when users put an IP entry into their inventory, or if
# they rely on DNS for a non-inventory hostname
for address_var in ('ansible_%s_host' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_addr'):
if address_var in delegated_vars:
break
else:
display.debug("no remote address found for delegated host %s\nusing its name, so success depends on DNS resolution" % delegated_host_name)
delegated_vars['ansible_host'] = delegated_host_name
# reset the port back to the default if none was specified, to prevent
# the delegated host from inheriting the original host's setting
for port_var in ('ansible_%s_port' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('port'):
if port_var in delegated_vars:
break
else:
if delegated_transport == 'winrm':
delegated_vars['ansible_port'] = 5986
else:
delegated_vars['ansible_port'] = C.DEFAULT_REMOTE_PORT
# and likewise for the remote user
for user_var in ('ansible_%s_user' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_user'):
if user_var in delegated_vars and delegated_vars[user_var]:
break
else:
delegated_vars['ansible_user'] = task.remote_user or self.remote_user
else:
delegated_vars = dict()
# setup shell
for exe_var in C.MAGIC_VARIABLE_MAPPING.get('executable'):
if exe_var in variables:
setattr(new_info, 'executable', variables.get(exe_var))
attrs_considered = []
for (attr, variable_names) in iteritems(C.MAGIC_VARIABLE_MAPPING):
for variable_name in variable_names:
if attr in attrs_considered:
continue
# if delegation task ONLY use delegated host vars, avoid delegated FOR host vars
if task.delegate_to is not None:
if isinstance(delegated_vars, dict) and variable_name in delegated_vars:
setattr(new_info, attr, delegated_vars[variable_name])
attrs_considered.append(attr)
elif variable_name in variables:
setattr(new_info, attr, variables[variable_name])
attrs_considered.append(attr)
# no else, as no other vars should be considered
# become legacy updates -- from inventory file (inventory overrides
# commandline)
for become_pass_name in C.MAGIC_VARIABLE_MAPPING.get('become_pass'):
if become_pass_name in variables:
break
# make sure we get port defaults if needed
if new_info.port is None and C.DEFAULT_REMOTE_PORT is not None:
new_info.port = int(C.DEFAULT_REMOTE_PORT)
# special overrides for the connection setting
if len(delegated_vars) > 0:
# in the event that we were using local before make sure to reset the
# connection type to the default transport for the delegated-to host,
# if not otherwise specified
for connection_type in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if connection_type in delegated_vars:
break
else:
remote_addr_local = new_info.remote_addr in C.LOCALHOST
inv_hostname_local = delegated_vars.get('inventory_hostname') in C.LOCALHOST
if remote_addr_local and inv_hostname_local:
setattr(new_info, 'connection', 'local')
elif getattr(new_info, 'connection', None) == 'local' and (not remote_addr_local or not inv_hostname_local):
setattr(new_info, 'connection', C.DEFAULT_TRANSPORT)
# we store original in 'connection_user' for use of network/other modules that fallback to it as login user
        # connection_user is to be deprecated once connection=local is removed, as local resets remote_user
if new_info.connection == 'local':
if not new_info.connection_user:
new_info.connection_user = new_info.remote_user
# set no_log to default if it was not previously set
if new_info.no_log is None:
new_info.no_log = C.DEFAULT_NO_LOG
if task.check_mode is not None:
new_info.check_mode = task.check_mode
if task.diff is not None:
new_info.diff = task.diff
return new_info
def set_become_plugin(self, plugin):
self._become_plugin = plugin
def update_vars(self, variables):
'''
Adds 'magic' variables relating to connections to the variable dictionary provided.
        This is a legacy from runner, kept in case users need to access these from the play.
'''
for prop, var_list in C.MAGIC_VARIABLE_MAPPING.items():
try:
if 'become' in prop:
continue
var_val = getattr(self, prop)
for var_opt in var_list:
if var_opt not in variables and var_val is not None:
variables[var_opt] = var_val
except AttributeError:
continue
def _get_attr_connection(self):
''' connections are special, this takes care of responding correctly '''
conn_type = None
if self._attributes['connection'] == 'smart':
conn_type = 'ssh'
# see if SSH can support ControlPersist if not use paramiko
if not check_for_controlpersist('ssh') and paramiko is not None:
conn_type = "paramiko"
# if someone did `connection: persistent`, default it to using a persistent paramiko connection to avoid problems
elif self._attributes['connection'] == 'persistent' and paramiko is not None:
conn_type = 'paramiko'
if conn_type:
self.connection = conn_type
return self._attributes['connection']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,136 |
play_context contains deprecated call to be removed in 2.12
|
##### SUMMARY
play_context contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/playbook/play_context.py:325:8: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/play_context.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/74136
|
https://github.com/ansible/ansible/pull/74790
|
98138584b7d0a3edd14a80650e2d169396fa51cc
|
ce556da7a0e00f9c57150f18f1797aa7b23da68d
| 2021-04-05T20:33:58Z |
python
| 2021-05-25T16:01:22Z |
test/sanity/ignore.txt
|
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/play.yml shebang
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/my_test.py shebang # example module but not in a normal module location
examples/scripts/my_test_facts.py shebang # example module but not in a normal module location
examples/scripts/my_test_info.py shebang # example module but not in a normal module location
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py pylint:ansible-deprecated-version
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name
lib/ansible/module_utils/compat/selinux.py import-2.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.5!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:blacklisted-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:blacklisted-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:blacklisted-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:blacklisted-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/uri.py pylint:blacklisted-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:blacklisted-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/plugins/action/__init__.py pylint:ansible-deprecated-version
lib/ansible/plugins/action/async_status.py pylint:ansible-deprecated-version
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/json_cleanup/library/bad_json shebang
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/aws/core.py pylint:property-with-parameters
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/cloud.py pylint:isinstance-second-argument-not-valid-type
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/database.py future-import-boilerplate
test/support/integration/plugins/module_utils/database.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate
test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name
test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/modules/test_apt.py pylint:blacklisted-name
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,136 |
play_context contains deprecated call to be removed in 2.12
|
##### SUMMARY
play_context contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/playbook/play_context.py:325:8: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/play_context.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/74136
|
https://github.com/ansible/ansible/pull/74790
|
98138584b7d0a3edd14a80650e2d169396fa51cc
|
ce556da7a0e00f9c57150f18f1797aa7b23da68d
| 2021-04-05T20:33:58Z |
python
| 2021-05-25T16:01:22Z |
test/units/playbook/test_play_context.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError
from ansible.playbook.play_context import PlayContext
from ansible.playbook.play import Play
from ansible.plugins.loader import become_loader
from ansible.utils import context_objects as co
@pytest.fixture
def parser():
parser = opt_help.create_base_parser('testparser')
opt_help.add_runas_options(parser)
opt_help.add_meta_options(parser)
opt_help.add_runtask_options(parser)
opt_help.add_vault_options(parser)
opt_help.add_async_options(parser)
opt_help.add_connect_options(parser)
opt_help.add_subset_options(parser)
opt_help.add_check_options(parser)
opt_help.add_inventory_options(parser)
return parser
@pytest.fixture
def reset_cli_args():
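# GlobalCLIArgs is a process-wide singleton; reset it around each test so
# options parsed in one test cannot leak into another.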
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
def test_play_context(mocker, parser, reset_cli_args):
options = parser.parse_args(['-vv', '--check'])
context._init_global_context(options)
play = Play.load({})
play_context = PlayContext(play=play)
assert play_context.remote_addr is None
assert play_context.remote_user is None
assert play_context.password == ''
assert play_context.private_key_file == C.DEFAULT_PRIVATE_KEY_FILE
assert play_context.timeout == C.DEFAULT_TIMEOUT
assert play_context.verbosity == 2
assert play_context.check_mode is True
mock_play = mocker.MagicMock()
mock_play.force_handlers = True
play_context = PlayContext(play=mock_play)
assert play_context.force_handlers is True
mock_task = mocker.MagicMock()
mock_task.connection = 'mocktask'
mock_task.remote_user = 'mocktask'
mock_task.port = 1234
mock_task.no_log = True
mock_task.become = True
mock_task.become_method = 'mocktask'
mock_task.become_user = 'mocktaskroot'
mock_task.become_pass = 'mocktaskpass'
mock_task._local_action = False
mock_task.delegate_to = None
all_vars = dict(
ansible_connection='mock_inventory',
ansible_ssh_port=4321,
)
mock_templar = mocker.MagicMock()
play_context = PlayContext()
play_context = play_context.set_task_and_variable_override(task=mock_task, variables=all_vars, templar=mock_templar)
assert play_context.connection == 'mock_inventory'
assert play_context.remote_user == 'mocktask'
assert play_context.no_log is True
mock_task.no_log = False
play_context = play_context.set_task_and_variable_override(task=mock_task, variables=all_vars, templar=mock_templar)
assert play_context.no_log is False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,136 |
play_context contains deprecated call to be removed in 2.12
|
##### SUMMARY
play_context contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/playbook/play_context.py:325:8: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/play_context.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/74136
|
https://github.com/ansible/ansible/pull/74790
|
98138584b7d0a3edd14a80650e2d169396fa51cc
|
ce556da7a0e00f9c57150f18f1797aa7b23da68d
| 2021-04-05T20:33:58Z |
python
| 2021-05-25T16:01:22Z |
test/units/plugins/become/test_su.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2020 Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
from ansible import context
from ansible.plugins.loader import become_loader, shell_loader
def test_su(mocker, parser, reset_cli_args):
options = parser.parse_args([])
context._init_global_context(options)
su = become_loader.get('su')
sh = shell_loader.get('sh')
sh.executable = "/bin/bash"
su.set_options(direct={
'become_user': 'foo',
'become_flags': '',
})
cmd = su.build_become_command('/bin/foo', sh)
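# The expected command wraps /bin/foo in the configured shell plus a
# BECOME-SUCCESS marker; the '"'"' runs are how a single quote is embedded
# inside a single-quoted sh string.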
assert re.match(r"""su\s+foo -c '/bin/bash -c '"'"'echo BECOME-SUCCESS-.+?; /bin/foo'"'"''""", cmd)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,136 |
play_context contains deprecated call to be removed in 2.12
|
##### SUMMARY
play_context contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/playbook/play_context.py:325:8: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/play_context.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/74136
|
https://github.com/ansible/ansible/pull/74790
|
98138584b7d0a3edd14a80650e2d169396fa51cc
|
ce556da7a0e00f9c57150f18f1797aa7b23da68d
| 2021-04-05T20:33:58Z |
python
| 2021-05-25T16:01:22Z |
test/units/plugins/become/test_sudo.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2020 Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
from ansible import context
from ansible.plugins.loader import become_loader, shell_loader
def test_sudo(mocker, parser, reset_cli_args):
options = parser.parse_args([])
context._init_global_context(options)
sudo = become_loader.get('sudo')
sh = shell_loader.get('sh')
sh.executable = "/bin/bash"
sudo.set_options(direct={
'become_user': 'foo',
'become_flags': '-n -s -H',
})
cmd = sudo.build_become_command('/bin/foo', sh)
assert re.match(r"""sudo\s+-n -s -H\s+-u foo /bin/bash -c 'echo BECOME-SUCCESS-.+? ; /bin/foo'""", cmd), cmd
sudo.set_options(direct={
'become_user': 'foo',
'become_flags': '-n -s -H',
'become_pass': 'testpass',
})
cmd = sudo.build_become_command('/bin/foo', sh)
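# With a become password set, sudo should drop -n (non-interactive) and
# add a -p prompt so Ansible can feed the password in.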
assert re.match(r"""sudo\s+-s\s-H\s+-p "\[sudo via ansible, key=.+?\] password:" -u foo /bin/bash -c 'echo BECOME-SUCCESS-.+? ; /bin/foo'""", cmd), cmd
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,779 |
Support RedOs for hostname plugin
|
### Summary
Add RedOS support to the hostname module
### Issue Type
Feature Idea
### Component Name
hostname
### Additional Information
Just add a class for RedOS to the hostname module:
```
class RedosHostname(Hostname):
platform = 'Linux'
distribution = 'RED'
strategy_class = SystemdStrategy
```
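For context, registering the class is all that is needed: Hostname.__new__ delegates to get_platform_subclass(), which matches subclasses on their platform and distribution attributes against what is detected on the target. A minimal sketch, assuming the detected RedOS distribution string matches the class attribute:
```python
from ansible.module_utils.common.sys_info import get_platform_subclass

# Hypothetical resolution on a RedOS host: platform.system() == 'Linux'
# and the detected distribution matches RedosHostname.distribution, so
# the new subclass wins over the generic Hostname class.
cls = get_platform_subclass(Hostname)  # -> RedosHostname on RedOS
```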
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74779
|
https://github.com/ansible/ansible/pull/74844
|
138b3b6851f679b6b867ec3bc6835f4c65ae63ea
|
148d4f624864bb8c6dc5100cffdfd4e21f762885
| 2021-05-20T14:58:22Z |
python
| 2021-05-27T15:48:48Z |
changelogs/fragments/redos_hostname.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,779 |
Support RedOs for hostname plugin
|
### Summary
Add RedOS support to the hostname module
### Issue Type
Feature Idea
### Component Name
hostname
### Additional Information
Just add a class for RedOS to the hostname module:
```
class RedosHostname(Hostname):
platform = 'Linux'
distribution = 'RED'
strategy_class = SystemdStrategy
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74779
|
https://github.com/ansible/ansible/pull/74844
|
138b3b6851f679b6b867ec3bc6835f4c65ae63ea
|
148d4f624864bb8c6dc5100cffdfd4e21f762885
| 2021-05-20T14:58:22Z |
python
| 2021-05-27T15:48:48Z |
lib/ansible/modules/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname. Supports most OSs/Distributions including those using C(systemd).
- Windows, HP-UX, and AIX are not currently supported.
notes:
- This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules such as M(ansible.builtin.template)
or M(ansible.builtin.replace).
- On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName). Since C(LocalHostName)
cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName).
- Supports C(check_mode).
options:
name:
description:
- Name of the host.
- If the value is a fully qualified domain name that does not resolve from the given host,
this will cause the module to hang for a few seconds while waiting for the name resolution attempt to time out.
type: str
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set, we try to autodetect, but this can be problematic, particularly with containers, as they can present misleading information.
- Note that 'systemd' should be specified for RHEL/EL/CentOS 7+. Older distributions should use 'redhat'.
choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd']
type: str
version_added: '2.9'
'''
EXAMPLES = '''
- name: Set a hostname
ansible.builtin.hostname:
name: web01
- name: Set a hostname specifying strategy
ansible.builtin.hostname:
name: web01
use: systemd
'''
import os
import platform
import socket
import traceback
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.utils import get_file_lines
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, text_type
STRATS = {
'alpine': 'Alpine',
'debian': 'Debian',
'freebsd': 'FreeBSD',
'generic': 'Generic',
'macos': 'Darwin',
'macosx': 'Darwin',
'darwin': 'Darwin',
'openbsd': 'OpenBSD',
'openrc': 'OpenRC',
'redhat': 'RedHat',
'sles': 'SLES',
'solaris': 'Solaris',
'systemd': 'Systemd',
}
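# Keys are the user-facing 'use' choices; values are the prefixes of the
# *Strategy classes that Hostname.__init__ looks up in globals().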
class UnimplementedStrategy(object):
def __init__(self, module):
self.module = module
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
A subclass may wish to set different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
strategy_class = UnimplementedStrategy
def __new__(cls, *args, **kwargs):
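# get_platform_subclass() returns the most specific Hostname subclass
# registered for the detected platform/distribution, or Hostname itself
# when nothing more specific matches.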
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif self.platform == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class BaseStrategy(object):
def __init__(self, module):
self.module = module
self.changed = False
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
return self.get_permanent_hostname()
def set_current_hostname(self, name):
pass
def get_permanent_hostname(self):
raise NotImplementedError
def set_permanent_hostname(self, name):
raise NotImplementedError
class CommandStrategy(BaseStrategy):
COMMAND = 'hostname'
def __init__(self, module):
super(CommandStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class FileStrategy(BaseStrategy):
FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
return get_file_lines(self.FILE)
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
with open(self.FILE, 'w+') as f:
f.write("%s\n" % name)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class SLESStrategy(FileStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
FILE = '/etc/HOSTNAME'
class RedHatStrategy(BaseStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
def get_permanent_hostname(self):
try:
for line in get_file_lines(self.NETWORK_FILE):
if line.startswith('HOSTNAME'):
k, v = line.split('=', 1)
return v.strip()
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
for line in get_file_lines(self.NETWORK_FILE):
if line.startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
if not found:
lines.append("HOSTNAME=%s\n" % name)
with open(self.NETWORK_FILE, 'w+') as f:
f.writelines(lines)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class AlpineStrategy(FileStrategy):
"""
This is an Alpine Linux Hostname manipulation strategy class - it edits
the /etc/hostname file, then runs hostname -F /etc/hostname.
"""
FILE = '/etc/hostname'
COMMAND = 'hostname'
def set_current_hostname(self, name):
super(AlpineStrategy, self).set_current_hostname(name)
hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
cmd = [hostname_cmd, '-F', self.FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(BaseStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
COMMAND = "hostnamectl"
def __init__(self, module):
super(SystemdStrategy, self).__init__(module)
self.hostnamectl_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostnamectl_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostnamectl_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = [self.hostnamectl_cmd, '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class OpenRCStrategy(BaseStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class OpenBSDStrategy(FileStrategy):
"""
This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
FILE = '/etc/myname'
class SolarisStrategy(BaseStrategy):
"""
This is a Solaris 11 or later Hostname manipulation strategy class - it
executes the hostname command.
"""
COMMAND = "hostname"
def __init__(self, module):
super(SolarisStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(BaseStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
FILE = '/etc/rc.conf.d/hostname'
COMMAND = "hostname"
def __init__(self, module):
super(FreeBSDStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
if os.path.isfile(self.FILE):
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
else:
lines = ['hostname="%s"' % name]
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class DarwinStrategy(BaseStrategy):
"""
This is a macOS hostname manipulation strategy class. It uses
/usr/sbin/scutil to set ComputerName, HostName, and LocalHostName.
HostName corresponds to what most platforms consider to be hostname.
It controls the name used on the command line and SSH.
However, macOS also has LocalHostName and ComputerName settings.
LocalHostName controls the Bonjour/ZeroConf name, used by services
like AirDrop. This class implements a method, _scrub_hostname(), that mimics
the transformations macOS makes on hostnames when entered in the Sharing
preference pane. It replaces spaces and most special characters with dashes
and deletes periods and apostrophes.
ComputerName is the name used for user-facing GUI services, like the
System Preferences/Sharing pane and when users connect to the Mac over the network.
"""
def __init__(self, module):
super(DarwinStrategy, self).__init__(module)
self.scutil = self.module.get_bin_path('scutil', True)
self.name_types = ('HostName', 'ComputerName', 'LocalHostName')
self.scrubbed_name = self._scrub_hostname(self.module.params['name'])
def _make_translation(self, replace_chars, replacement_chars, delete_chars):
if PY3:
return str.maketrans(replace_chars, replacement_chars, delete_chars)
if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type):
raise ValueError('replace_chars and replacement_chars must both be strings')
if len(replace_chars) != len(replacement_chars):
raise ValueError('replacement_chars must be the same length as replace_chars')
table = dict(zip((ord(c) for c in replace_chars), replacement_chars))
for char in delete_chars:
table[ord(char)] = None
return table
def _scrub_hostname(self, name):
"""
LocalHostName only accepts valid DNS characters while HostName and ComputerName
accept a much wider range of characters. This function aims to mimic how macOS
translates a friendly name to the LocalHostName.
"""
# Replace all these characters with a single dash
name = to_text(name)
replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ '
delete_chars = u".'"
table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars)
name = name.translate(table)
# Replace multiple dashes with a single dash
while '-' * 2 in name:
name = name.replace('-' * 2, '-')
name = name.rstrip('-')
return name
def get_current_hostname(self):
cmd = [self.scutil, '--get', 'HostName']
rc, out, err = self.module.run_command(cmd)
if rc != 0 and 'HostName: not set' not in err:
self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def get_permanent_hostname(self):
cmd = [self.scutil, '--get', 'ComputerName']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
for hostname_type in self.name_types:
cmd = [self.scutil, '--set', hostname_type]
if hostname_type == 'LocalHostName':
cmd.append(to_native(self.scrubbed_name))
else:
cmd.append(to_native(name))
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type))
def set_current_hostname(self, name):
pass
def update_current_hostname(self):
pass
def update_permanent_hostname(self):
name = self.module.params['name']
# Get all the current host name values in the order of self.name_types
all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types)
# Get the expected host name values based on the order in self.name_types
expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types)
# Ensure all three names are updated
if all_names != expected_names:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
class FedoraHostname(Hostname):
platform = 'Linux'
distribution = 'Fedora'
strategy_class = SystemdStrategy
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# cast to float may raise ValueError on non SLES, we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class OpenSUSEHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse'
strategy_class = SystemdStrategy
class OpenSUSELeapHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-leap'
strategy_class = SystemdStrategy
class OpenSUSETumbleweedHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-tumbleweed'
strategy_class = SystemdStrategy
class AsteraHostname(Hostname):
platform = 'Linux'
distribution = '"astralinuxce"'
strategy_class = SystemdStrategy
class ArchHostname(Hostname):
platform = 'Linux'
distribution = 'Arch'
strategy_class = SystemdStrategy
class ArchARMHostname(Hostname):
platform = 'Linux'
distribution = 'Archarm'
strategy_class = SystemdStrategy
class AlmaLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Almalinux'
strategy_class = SystemdStrategy
class ManjaroHostname(Hostname):
platform = 'Linux'
distribution = 'Manjaro'
strategy_class = SystemdStrategy
class ManjaroARMHostname(Hostname):
platform = 'Linux'
distribution = 'Manjaro-arm'
strategy_class = SystemdStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class AnolisOSHostname(Hostname):
platform = 'Linux'
distribution = 'Anolis'
strategy_class = RedHatStrategy
class ClearLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Clear-linux-os'
strategy_class = SystemdStrategy
class CloudlinuxserverHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinuxserver'
strategy_class = RedHatStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class AlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alinux'
strategy_class = RedHatStrategy
class CoreosHostname(Hostname):
platform = 'Linux'
distribution = 'Coreos'
strategy_class = SystemdStrategy
class FlatcarHostname(Hostname):
platform = 'Linux'
distribution = 'Flatcar'
strategy_class = SystemdStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = FileStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = FileStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = FileStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = FileStrategy
class ParrotHostname(Hostname):
platform = 'Linux'
distribution = 'Parrot'
strategy_class = FileStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = FileStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = FileStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = FileStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = FileStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = FileStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = FileStrategy
class DarwinHostname(Hostname):
platform = 'Darwin'
distribution = None
strategy_class = DarwinStrategy
class OsmcHostname(Hostname):
platform = 'Linux'
distribution = 'Osmc'
strategy_class = SystemdStrategy
class PardusHostname(Hostname):
platform = 'Linux'
distribution = 'Pardus'
strategy_class = SystemdStrategy
class VoidLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Void'
strategy_class = FileStrategy
class PopHostname(Hostname):
platform = 'Linux'
distribution = 'Pop'
strategy_class = FileStrategy
class RockyHostname(Hostname):
platform = 'Linux'
distribution = 'Rocky'
strategy_class = SystemdStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
else:
name_before = permanent_hostname
# NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
# slow to return if the name does not resolve correctly.
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,774 |
error while installing packages in ec2 RHEL instance using yum module
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I am trying to install a few packages on a freshly installed EC2 instance (Red Hat image) using both the yum and package modules, but I am unable to install them because I encounter the error below every time.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
module used : yum and package both tried
Task: Install packages
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /home/chuck/Desktop/Projects/Ansible-Kube/ansible.cfg
configured module search path = ['/home/chuck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
This is how I am connecting to the instance
hosts:
vars:
ansible_python_interpreter: /usr/bin/python2
remote_user: ec2-user
connection: ssh
become: yes
gather_facts: no
(python2 installed)
and tried installing packages using both the package and yum modules (the yum module was also used in place of the package module)
name: install packages needed
package:
name: "{{ item }}"
state: present
use_backend: yum
with_items:
- lvm2
- device-mapper
- curl
- device-mapper-persistent-data
- device-mapper-event
- device-mapper-libs
- device-mapper-event-libs
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should install all the necessary packages
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Unsupported parameters for (dnf) module: use_backend Supported parameters include: allow_downgrade, autoremove, bugfix, conf_file, disable_excludes, disable_gpg_check, disable_plugin, disablerepo, download_dir, download_only, enable_plugin, enablerepo, exclude, install_repoquery, install_weak_deps, installroot, list, lock_timeout, name, releasever, security, skip_broken, state, update_cache, update_only, validate_certs"
<!--- Paste verbatim command output between quotes -->
```paste below
```
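The eventual fix (PR 70792; see the changelog fragment below) made the yum action plugin accept `use` as an alias of `use_backend` when choosing the backend. A rough sketch of the idea, with a helper name assumed for illustration rather than taken verbatim from the patch:
```python
def pick_backend(task_args):
    # 'use' is the generic spelling the package module exposes;
    # 'use_backend' is the historical yum-specific spelling.
    # 'auto' means detect yum vs. dnf on the target.
    return task_args.get('use', task_args.get('use_backend', 'auto'))

# e.g. pick_backend({'name': 'lvm2', 'use_backend': 'yum'}) -> 'yum'
```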
|
https://github.com/ansible/ansible/issues/70774
|
https://github.com/ansible/ansible/pull/70792
|
5640093f1ca63fd6af231cc8a7fb7d40e1907b8c
|
e0558ac1938e7df5e3070d01567ef80bc4046b1e
| 2020-07-21T07:10:20Z |
python
| 2021-05-31T04:18:15Z |
changelogs/fragments/70792-yum-action-plugin-use-as-alias-of-use-backend.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,774 |
error while installing packages in ec2 RHEL instance using yum module
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I am trying to install a few packages on a freshly installed EC2 instance (Red Hat image) using both the yum and package modules, but I am unable to install them because I hit the same error every time
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
modules used: both yum and package were tried
Task: Install packages
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /home/chuck/Desktop/Projects/Ansible-Kube/ansible.cfg
configured module search path = ['/home/chuck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
This is how I am connecting to the instance
hosts:
vars:
ansible_python_interpreter: /usr/bin/python2
remote_user: ec2-user
connection: ssh
become: yes
gather_facts: no
(python2 installed)
and tried installing packages using both the package and yum modules (yum was also used in place of the package module)
name: install packages needed
package:
name: "{{ item }}"
state: present
use_backend: yum
with_items:
- lvm2
- device-mapper
- curl
- device-mapper-persistent-data
- device-mapper-event
- device-mapper-libs
- device-mapper-event-libs
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should install all the necessary packages
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Unsupported parameters for (dnf) module: use_backend Supported parameters include: allow_downgrade, autoremove, bugfix, conf_file, disable_excludes, disable_gpg_check, disable_plugin, disablerepo, download_dir, download_only, enable_plugin, enablerepo, exclude, install_repoquery, install_weak_deps, installroot, list, lock_timeout, name, releasever, security, skip_broken, state, update_cache, update_only, validate_certs
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/70774
|
https://github.com/ansible/ansible/pull/70792
|
5640093f1ca63fd6af231cc8a7fb7d40e1907b8c
|
e0558ac1938e7df5e3070d01567ef80bc4046b1e
| 2020-07-21T07:10:20Z |
python
| 2021-05-31T04:18:15Z |
lib/ansible/plugins/action/yum.py
|
# (c) 2018, Ansible Project
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
display = Display()
VALID_BACKENDS = frozenset(('yum', 'yum4', 'dnf'))
class ActionModule(ActionBase):
TRANSFERS_FILES = False
def run(self, tmp=None, task_vars=None):
'''
Action plugin handler for yum3 vs yum4(dnf) operations.
Enables the yum module to use yum3 and/or yum4. Yum4 is a yum
command-line compatibility layer on top of dnf. Since the Ansible
modules for yum (aka yum3) and dnf (aka yum4) call each of yum3 and yum4's
python APIs natively on the backend, we need to handle this here and
pass off to the correct Ansible module to execute on the remote system.
'''
self._supports_check_mode = True
self._supports_async = True
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
# Carry-over concept from the package action plugin
module = self._task.args.get('use_backend', "auto")
if module == 'auto':
try:
if self._task.delegate_to: # if we delegate, we should use delegated host's facts
module = self._templar.template("{{hostvars['%s']['ansible_facts']['pkg_mgr']}}" % self._task.delegate_to)
else:
module = self._templar.template("{{ansible_facts.pkg_mgr}}")
except Exception:
pass # could not get it from template!
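# If the templated fact lookup above fails or yields a value outside
# VALID_BACKENDS, fall through to the explicit setup-module probe below.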
if module not in VALID_BACKENDS:
facts = self._execute_module(
module_name="ansible.legacy.setup", module_args=dict(filter="ansible_pkg_mgr", gather_subset="!all"),
task_vars=task_vars)
display.debug("Facts %s" % facts)
module = facts.get("ansible_facts", {}).get("ansible_pkg_mgr", "auto")
if (not self._task.delegate_to or self._task.delegate_facts) and module != 'auto':
result['ansible_facts'] = {'pkg_mgr': module}
if module not in VALID_BACKENDS:
result.update(
{
'failed': True,
'msg': ("Could not detect which major revision of yum is in use, which is required to determine module backend.",
"You should manually specify use_backend to tell the module whether to use the yum (yum3) or dnf (yum4) backend."),
}
)
else:
if module == "yum4":
module = "dnf"
# eliminate collisions with collections search while still allowing local override
module = 'ansible.legacy.' + module
if not self._shared_loader_obj.module_loader.has_plugin(module):
result.update({'failed': True, 'msg': "Could not find a yum module backend for %s." % module})
else:
# run either the yum (yum3) or dnf (yum4) backend module
new_module_args = self._task.args.copy()
if 'use_backend' in new_module_args:
del new_module_args['use_backend']
display.vvvv("Running %s as the backend for the yum action plugin" % module)
result.update(self._execute_module(
module_name=module, module_args=new_module_args, task_vars=task_vars, wrap_async=self._task.async_val))
# Cleanup
if not self._task.async_val:
# remove a temporary path we created
self._remove_tmp_path(self._connection._shell.tmpdir)
return result
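As a usage sketch (package name illustrative), a task driving this plugin could pin the backend explicitly; `use_backend` accepts `yum`, `yum4`, or `dnf`, matching `VALID_BACKENDS` above:

```yaml
- name: install at via the yum3 backend (illustrative)
  yum:
    name: at
    state: present
    use_backend: yum
```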
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,774 |
error while installing packages in ec2 RHEL instance using yum module
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I am trying to install a few packages on a freshly installed EC2 instance (Red Hat image) using both the yum and package modules, but I am unable to install them because I hit the same error every time
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
modules used: both yum and package were tried
Task: Install packages
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.7
config file = /home/chuck/Desktop/Projects/Ansible-Kube/ansible.cfg
configured module search path = ['/home/chuck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
This is how I am connecting to the instance
hosts:
vars:
ansible_python_interpreter: /usr/bin/python2
remote_user: ec2-user
connection: ssh
become: yes
gather_facts: no
(python2 installed)
and tried installing packages using both the package and yum modules (yum was also used in place of the package module)
name: install packages needed
package:
name: "{{ item }}"
state: present
use_backend: yum
with_items:
- lvm2
- device-mapper
- curl
- device-mapper-persistent-data
- device-mapper-event
- device-mapper-libs
- device-mapper-event-libs
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should install all the necessary packages
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Unsupported parameters for (dnf) module: use_backend Supported parameters include: allow_downgrade, autoremove, bugfix, conf_file, disable_excludes, disable_gpg_check, disable_plugin, disablerepo, download_dir, download_only, enable_plugin, enablerepo, exclude, install_repoquery, install_weak_deps, installroot, list, lock_timeout, name, releasever, security, skip_broken, state, update_cache, update_only, validate_certs
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/70774
|
https://github.com/ansible/ansible/pull/70792
|
5640093f1ca63fd6af231cc8a7fb7d40e1907b8c
|
e0558ac1938e7df5e3070d01567ef80bc4046b1e
| 2020-07-21T07:10:20Z |
python
| 2021-05-31T04:18:15Z |
test/integration/targets/package/tasks/main.yml
|
# Test code for the package module.
# (c) 2017, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- set_fact: output_dir_test={{output_dir}}/at
- name: make sure our testing sub-directory does not exist
file: path="{{ output_dir_test }}" state=absent
- name: create our testing sub-directory
file: path="{{ output_dir_test }}" state=directory
# Verify correct default package manager for Fedora
# Validates: https://github.com/ansible/ansible/issues/34014
- block:
- name: install apt
dnf:
name: apt
state: present
- name: gather facts again
setup:
- name: validate output
assert:
that:
- 'ansible_pkg_mgr == "dnf"'
always:
- name: remove apt
dnf:
name: apt
state: absent
- name: gather facts again
setup:
when: ansible_distribution == "Fedora"
# Verify correct default package manager for Debian/Ubuntu when Zypper installed
- block:
# Just make an executable file called "zypper" - installing zypper itself
# consistently is hard - and we're not going to use it
- name: install fake zypper
file:
state: touch
mode: 0755
path: /usr/bin/zypper
- name: gather facts again
setup:
- name: validate output
assert:
that:
- 'ansible_pkg_mgr == "apt"'
always:
- name: remove fake zypper
file:
path: /usr/bin/zypper
state: absent
- name: gather facts again
setup:
when: ansible_os_family == "Debian"
##
## package
##
# Verify module_defaults for package and the underlying module are utilized
# Validates: https://github.com/ansible/ansible/issues/72918
- block:
# 'name' is required
- name: install apt with package defaults
package:
module_defaults:
package:
name: apt
state: present
- name: install apt with dnf defaults (auto)
package:
module_defaults:
dnf:
name: apt
state: present
- name: install apt with dnf defaults (use dnf)
package:
use: dnf
module_defaults:
dnf:
name: apt
state: present
always:
- name: remove apt
dnf:
name: apt
state: absent
when: ansible_distribution == "Fedora"
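# Note: 'package' resolves to the detected backend (dnf on Fedora), so
# module_defaults declared under either 'package' or the backend module are
# merged into the final arguments -- which is exactly what the block above
# validates (see ansible/ansible#72918).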
- name: define distros to attempt installing at on
set_fact:
package_distros:
- RedHat
- CentOS
- ScientificLinux
- Fedora
- Ubuntu
- Debian
- block:
- name: remove at package
package:
name: at
state: absent
register: at_check0
- name: verify at command is missing
shell: which at
register: at_check1
failed_when: at_check1.rc == 0
- name: reinstall at package
package:
name: at
state: present
register: at_install0
- debug: var=at_install0
- name: validate results
assert:
that:
- 'at_install0.changed is defined'
- 'at_install0.changed'
- name: verify at command is installed
shell: which at
when: ansible_distribution in package_distros
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,864 |
display_failed_stderr not working on tasks with loops
|
### Summary
Hi
I have noticed that the **display_failed_stderr** setting only sends task errors to stderr when the failing task does not have a loop. If the failing task has a loop, the output is never sent to stderr, even if every item in the loop fails.
This prevents effective filtering of errors by grepping stderr, given that not all failed tasks are sent to it.
Thanks!
### Issue Type
Bug Report
### Component Name
community.general collection
### Ansible Version
```console
ansible [core 2.11.0]
config file = /Users/miqueladrover/.ansible.cfg
configured module search path = ['/Users/miqueladrover/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/miqueladrover/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:36:27) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
ANSIBLE_PIPELINING(/Users/miqueladrover/.ansible.cfg) = True
DEFAULT_FORKS(/Users/miqueladrover/.ansible.cfg) = 50
DEFAULT_STRATEGY_PLUGIN_PATH(/Users/miqueladrover/.ansible.cfg) = ['/usr/local/lib/python3.7/site-packages/ansible_mitogen/plugins/strategy']
HOST_KEY_CHECKING(/Users/miqueladrover/.ansible.cfg) = False
```
### OS / Environment
OSX, Centos 7
### Steps to Reproduce
In the following example playbook you would expect four failures to be sent to stderr, but only the first is actually sent there.
```
---
- hosts: localhost
any_errors_fatal: true
gather_facts: false
tasks:
- command: 'false'
ignore_errors: true
- command: "{{ item }}"
loop:
- 'true'
- 'false'
ignore_errors: true
- command: "{{ item }}"
loop:
- 'false'
- 'false'
```
### Expected Results
```
$ ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml > /dev/null
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006033", "end": "2021-05-31 11:14:56.067248", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:56.061215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006255", "end": "2021-05-31 11:14:49.104375", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.098120", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006004", "end": "2021-05-31 11:14:49.354347", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.348343", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.005797", "end": "2021-05-31 11:14:49.571522", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.565725", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
```
### Actual Results
The non-filtered output is:
```console
$ ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************************************************************************************************************************************************************************
TASK [command] *****************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006232", "end": "2021-05-31 11:14:48.623654", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:48.617422", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [command] *****************************************************************************************************************************************************************************************************************************
changed: [localhost] => (item=true)
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006255", "end": "2021-05-31 11:14:49.104375", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.098120", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [command] *****************************************************************************************************************************************************************************************************************************
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006004", "end": "2021-05-31 11:14:49.354347", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.348343", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.005797", "end": "2021-05-31 11:14:49.571522", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.565725", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
NO MORE HOSTS LEFT *************************************************************************************************************************************************************************************************************************
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=2
```
and redirecting stdout to /dev/null:
```
ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml > /dev/null
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006033", "end": "2021-05-31 11:14:56.067248", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:56.061215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74864
|
https://github.com/ansible/ansible/pull/74865
|
70f22c7f32d984dbd4eb2e62100353af022f6f4f
|
01ab6c6ec73915412f1a1d00f7c247147ccfd606
| 2021-05-31T09:53:05Z |
python
| 2021-06-01T15:31:46Z |
changelogs/fragments/74864-display_failed_stderr-per-item.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,864 |
display_failed_stderr not working on tasks with loops
|
### Summary
Hi
I have noticed that the **display_failed_stderr** setting only sends task errors to stderr when the failing task does not have a loop. If the failing task has a loop, the output is never sent to stderr, even if every item in the loop fails.
This prevents effective filtering of errors by grepping stderr, given that not all failed tasks are sent to it.
Thanks!
### Issue Type
Bug Report
### Component Name
community.general collection
### Ansible Version
```console
ansible [core 2.11.0]
config file = /Users/miqueladrover/.ansible.cfg
configured module search path = ['/Users/miqueladrover/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/miqueladrover/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:36:27) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
ANSIBLE_PIPELINING(/Users/miqueladrover/.ansible.cfg) = True
DEFAULT_FORKS(/Users/miqueladrover/.ansible.cfg) = 50
DEFAULT_STRATEGY_PLUGIN_PATH(/Users/miqueladrover/.ansible.cfg) = ['/usr/local/lib/python3.7/site-packages/ansible_mitogen/plugins/strategy']
HOST_KEY_CHECKING(/Users/miqueladrover/.ansible.cfg) = False
```
### OS / Environment
OSX, Centos 7
### Steps to Reproduce
In the following example playbook you would expect four failures to be sent to stderr, but only the first is actually sent there.
```
---
- hosts: localhost
any_errors_fatal: true
gather_facts: false
tasks:
- command: 'false'
ignore_errors: true
- command: "{{ item }}"
loop:
- 'true'
- 'false'
ignore_errors: true
- command: "{{ item }}"
loop:
- 'false'
- 'false'
```
### Expected Results
```
$ ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml > /dev/null
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006033", "end": "2021-05-31 11:14:56.067248", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:56.061215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006255", "end": "2021-05-31 11:14:49.104375", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.098120", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006004", "end": "2021-05-31 11:14:49.354347", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.348343", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.005797", "end": "2021-05-31 11:14:49.571522", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.565725", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
```
### Actual Results
The non-filtered output is:
```console
$ ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************************************************************************************************************************************************************************
TASK [command] *****************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006232", "end": "2021-05-31 11:14:48.623654", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:48.617422", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [command] *****************************************************************************************************************************************************************************************************************************
changed: [localhost] => (item=true)
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006255", "end": "2021-05-31 11:14:49.104375", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.098120", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [command] *****************************************************************************************************************************************************************************************************************************
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006004", "end": "2021-05-31 11:14:49.354347", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.348343", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.005797", "end": "2021-05-31 11:14:49.571522", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.565725", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
NO MORE HOSTS LEFT *************************************************************************************************************************************************************************************************************************
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=2
```
and redirecting stdout to /dev/null:
```
ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml > /dev/null
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006033", "end": "2021-05-31 11:14:56.067248", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:56.061215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74864
|
https://github.com/ansible/ansible/pull/74865
|
70f22c7f32d984dbd4eb2e62100353af022f6f4f
|
01ab6c6ec73915412f1a1d00f7c247147ccfd606
| 2021-05-31T09:53:05Z |
python
| 2021-06-01T15:31:46Z |
lib/ansible/plugins/callback/default.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: default
type: stdout
short_description: default Ansible screen output
version_added: historical
description:
- This is the default output callback for ansible-playbook.
extends_documentation_fragment:
- default_callback
requirements:
- set as stdout in configuration
'''
from ansible import constants as C
from ansible import context
from ansible.playbook.task_include import TaskInclude
from ansible.plugins.callback import CallbackBase
from ansible.utils.color import colorize, hostcolor
# These values use ansible.constants for historical reasons, mostly to allow
# unmodified derivative plugins to work. However, newer options added to the
# plugin are not also added to ansible.constants, so authors of derivative
# callback plugins will eventually need to add a reference to the common docs
# fragment for the 'default' callback plugin
# these are used to provide backwards compat with old plugins that subclass from default
# but still don't use the new config system and/or fail to document the options
# TODO: Change the default of check_mode_markers to True in a future release (2.13)
COMPAT_OPTIONS = (('display_skipped_hosts', C.DISPLAY_SKIPPED_HOSTS),
('display_ok_hosts', True),
('show_custom_stats', C.SHOW_CUSTOM_STATS),
('display_failed_stderr', False),
('check_mode_markers', False),
('show_per_host_start', False))
class CallbackModule(CallbackBase):
'''
This is the default callback interface, which simply prints messages
to stdout when new callback events are received.
'''
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'default'
def __init__(self):
self._play = None
self._last_task_banner = None
self._last_task_name = None
self._task_type_cache = {}
super(CallbackModule, self).__init__()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# for backwards compat with plugins subclassing default, fallback to constants
for option, constant in COMPAT_OPTIONS:
try:
value = self.get_option(option)
except (AttributeError, KeyError):
self._display.deprecated("'%s' is subclassing DefaultCallback without the corresponding doc_fragment." % self._load_name,
version='2.14', collection_name='ansible.builtin')
value = constant
setattr(self, option, value)
def v2_runner_on_failed(self, result, ignore_errors=False):
host_label = self.host_label(result)
self._clean_results(result._result, result._task.action)
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._handle_exception(result._result, use_stderr=self.display_failed_stderr)
self._handle_warnings(result._result)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
if self._display.verbosity < 2 and self.get_option('show_task_path_on_failure'):
self._print_task_path(result._task)
msg = "fatal: [%s]: FAILED! => %s" % (host_label, self._dump_results(result._result))
self._display.display(msg, color=C.COLOR_ERROR, stderr=self.display_failed_stderr)
if ignore_errors:
self._display.display("...ignoring", color=C.COLOR_SKIP)
def v2_runner_on_ok(self, result):
host_label = self.host_label(result)
if isinstance(result._task, TaskInclude):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
return
elif result._result.get('changed', False):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = "changed: [%s]" % (host_label,)
color = C.COLOR_CHANGED
else:
if not self.display_ok_hosts:
return
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = "ok: [%s]" % (host_label,)
color = C.COLOR_OK
self._handle_warnings(result._result)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
self._clean_results(result._result, result._task.action)
if self._run_is_verbose(result):
msg += " => %s" % (self._dump_results(result._result),)
self._display.display(msg, color=color)
def v2_runner_on_skipped(self, result):
if self.display_skipped_hosts:
self._clean_results(result._result, result._task.action)
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
msg = "skipping: [%s]" % result._host.get_name()
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_SKIP)
def v2_runner_on_unreachable(self, result):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
host_label = self.host_label(result)
msg = "fatal: [%s]: UNREACHABLE! => %s" % (host_label, self._dump_results(result._result))
self._display.display(msg, color=C.COLOR_UNREACHABLE, stderr=self.display_failed_stderr)
def v2_playbook_on_no_hosts_matched(self):
self._display.display("skipping: no hosts matched", color=C.COLOR_SKIP)
def v2_playbook_on_no_hosts_remaining(self):
self._display.banner("NO MORE HOSTS LEFT")
def v2_playbook_on_task_start(self, task, is_conditional):
self._task_start(task, prefix='TASK')
def _task_start(self, task, prefix=None):
# Cache output prefix for task if provided
# This is needed to properly display 'RUNNING HANDLER' and similar
# when hiding skipped/ok task results
if prefix is not None:
self._task_type_cache[task._uuid] = prefix
# Preserve task name, as all vars may not be available for templating
# when we need it later
if self._play.strategy in ('free', 'host_pinned'):
# Explicitly set to None for strategy free/host_pinned to account for any cached
# task title from a previous non-free play
self._last_task_name = None
else:
self._last_task_name = task.get_name().strip()
# Display the task banner immediately if we're not doing any filtering based on task result
if self.display_skipped_hosts and self.display_ok_hosts:
self._print_task_banner(task)
def _print_task_banner(self, task):
# args can be specified as no_log in several places: in the task or in
# the argument spec. We can check whether the task is no_log but the
# argument spec can't be because that is only run on the target
# machine, and we haven't run it there yet at this time.
#
# So we give people a config option to affect display of the args so
# that they can secure this if they feel that their stdout is insecure
# (shoulder surfing, logging stdout straight to a file, etc).
args = ''
if not task.no_log and C.DISPLAY_ARGS_TO_STDOUT:
args = u', '.join(u'%s=%s' % a for a in task.args.items())
args = u' %s' % args
prefix = self._task_type_cache.get(task._uuid, 'TASK')
# Use cached task name
task_name = self._last_task_name
if task_name is None:
task_name = task.get_name().strip()
if task.check_mode and self.check_mode_markers:
checkmsg = " [CHECK MODE]"
else:
checkmsg = ""
self._display.banner(u"%s [%s%s]%s" % (prefix, task_name, args, checkmsg))
if self._display.verbosity >= 2:
self._print_task_path(task)
self._last_task_banner = task._uuid
def v2_playbook_on_cleanup_task_start(self, task):
self._task_start(task, prefix='CLEANUP TASK')
def v2_playbook_on_handler_task_start(self, task):
self._task_start(task, prefix='RUNNING HANDLER')
def v2_runner_on_start(self, host, task):
if self.get_option('show_per_host_start'):
self._display.display(" [started %s on %s]" % (task, host), color=C.COLOR_OK)
def v2_playbook_on_play_start(self, play):
name = play.get_name().strip()
if play.check_mode and self.check_mode_markers:
checkmsg = " [CHECK MODE]"
else:
checkmsg = ""
if not name:
msg = u"PLAY%s" % checkmsg
else:
msg = u"PLAY [%s]%s" % (name, checkmsg)
self._play = play
self._display.banner(msg)
def v2_on_file_diff(self, result):
if result._task.loop and 'results' in result._result:
for res in result._result['results']:
if 'diff' in res and res['diff'] and res.get('changed', False):
diff = self._get_diff(res['diff'])
if diff:
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._display.display(diff)
elif 'diff' in result._result and result._result['diff'] and result._result.get('changed', False):
diff = self._get_diff(result._result['diff'])
if diff:
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._display.display(diff)
def v2_runner_item_on_ok(self, result):
host_label = self.host_label(result)
if isinstance(result._task, TaskInclude):
return
elif result._result.get('changed', False):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = 'changed'
color = C.COLOR_CHANGED
else:
if not self.display_ok_hosts:
return
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = 'ok'
color = C.COLOR_OK
msg = "%s: [%s] => (item=%s)" % (msg, host_label, self._get_item_label(result._result))
self._clean_results(result._result, result._task.action)
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=color)
def v2_runner_item_on_failed(self, result):
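# NOTE: unlike v2_runner_on_failed above, this per-item handler does not pass
# use_stderr/stderr=self.display_failed_stderr to _handle_exception and
# _display.display, so loop-item failures always land on stdout -- the
# behaviour reported in this issue (addressed by ansible/ansible#74865).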
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
host_label = self.host_label(result)
self._clean_results(result._result, result._task.action)
self._handle_exception(result._result)
msg = "failed: [%s]" % (host_label,)
self._handle_warnings(result._result)
self._display.display(msg + " (item=%s) => %s" % (self._get_item_label(result._result), self._dump_results(result._result)), color=C.COLOR_ERROR)
def v2_runner_item_on_skipped(self, result):
if self.display_skipped_hosts:
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._clean_results(result._result, result._task.action)
msg = "skipping: [%s] => (item=%s) " % (result._host.get_name(), self._get_item_label(result._result))
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_SKIP)
def v2_playbook_on_include(self, included_file):
msg = 'included: %s for %s' % (included_file._filename, ", ".join([h.name for h in included_file._hosts]))
label = self._get_item_label(included_file._vars)
if label:
msg += " => (item=%s)" % label
self._display.display(msg, color=C.COLOR_SKIP)
def v2_playbook_on_stats(self, stats):
self._display.banner("PLAY RECAP")
hosts = sorted(stats.processed.keys())
for h in hosts:
t = stats.summarize(h)
self._display.display(
u"%s : %s %s %s %s %s %s %s" % (
hostcolor(h, t),
colorize(u'ok', t['ok'], C.COLOR_OK),
colorize(u'changed', t['changed'], C.COLOR_CHANGED),
colorize(u'unreachable', t['unreachable'], C.COLOR_UNREACHABLE),
colorize(u'failed', t['failures'], C.COLOR_ERROR),
colorize(u'skipped', t['skipped'], C.COLOR_SKIP),
colorize(u'rescued', t['rescued'], C.COLOR_OK),
colorize(u'ignored', t['ignored'], C.COLOR_WARN),
),
screen_only=True
)
self._display.display(
u"%s : %s %s %s %s %s %s %s" % (
hostcolor(h, t, False),
colorize(u'ok', t['ok'], None),
colorize(u'changed', t['changed'], None),
colorize(u'unreachable', t['unreachable'], None),
colorize(u'failed', t['failures'], None),
colorize(u'skipped', t['skipped'], None),
colorize(u'rescued', t['rescued'], None),
colorize(u'ignored', t['ignored'], None),
),
log_only=True
)
self._display.display("", screen_only=True)
# print custom stats if required
if stats.custom and self.show_custom_stats:
self._display.banner("CUSTOM STATS: ")
# per host
# TODO: come up with 'pretty format'
for k in sorted(stats.custom.keys()):
if k == '_run':
continue
self._display.display('\t%s: %s' % (k, self._dump_results(stats.custom[k], indent=1).replace('\n', '')))
# print per run custom stats
if '_run' in stats.custom:
self._display.display("", screen_only=True)
self._display.display('\tRUN: %s' % self._dump_results(stats.custom['_run'], indent=1).replace('\n', ''))
self._display.display("", screen_only=True)
if context.CLIARGS['check'] and self.check_mode_markers:
self._display.banner("DRY RUN")
def v2_playbook_on_start(self, playbook):
if self._display.verbosity > 1:
from os.path import basename
self._display.banner("PLAYBOOK: %s" % basename(playbook._file_name))
# show CLI arguments
if self._display.verbosity > 3:
if context.CLIARGS.get('args'):
self._display.display('Positional arguments: %s' % ' '.join(context.CLIARGS['args']),
color=C.COLOR_VERBOSE, screen_only=True)
for argument in (a for a in context.CLIARGS if a != 'args'):
val = context.CLIARGS[argument]
if val:
self._display.display('%s: %s' % (argument, val), color=C.COLOR_VERBOSE, screen_only=True)
if context.CLIARGS['check'] and self.check_mode_markers:
self._display.banner("DRY RUN")
def v2_runner_retry(self, result):
task_name = result.task_name or result._task
msg = "FAILED - RETRYING: %s (%d retries left)." % (task_name, result._result['retries'] - result._result['attempts'])
if self._run_is_verbose(result, verbosity=2):
msg += "Result was: %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_DEBUG)
def v2_runner_on_async_poll(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
started = result._result.get('started')
finished = result._result.get('finished')
self._display.display(
'ASYNC POLL on %s: jid=%s started=%s finished=%s' % (host, jid, started, finished),
color=C.COLOR_DEBUG
)
def v2_playbook_on_notify(self, handler, host):
if self._display.verbosity > 1:
self._display.display("NOTIFIED HANDLER %s for %s" % (handler.get_name(), host), color=C.COLOR_VERBOSE, screen_only=True)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,864 |
display_failed_stderr not working on tasks with loops
|
### Summary
Hi
I have noticed that the **display_failed_stderr** setting only sends task errors to stderr when the failing task does not have a loop. If the failing task has a loop, the output is never sent to stderr, even if every item in the loop fails.
This prevents effective filtering of errors by grepping stderr, given that not all failed tasks are sent to it.
Thanks!
### Issue Type
Bug Report
### Component Name
community.general collection
### Ansible Version
```console
ansible [core 2.11.0]
config file = /Users/miqueladrover/.ansible.cfg
configured module search path = ['/Users/miqueladrover/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/miqueladrover/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:36:27) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
ANSIBLE_PIPELINING(/Users/miqueladrover/.ansible.cfg) = True
DEFAULT_FORKS(/Users/miqueladrover/.ansible.cfg) = 50
DEFAULT_STRATEGY_PLUGIN_PATH(/Users/miqueladrover/.ansible.cfg) = ['/usr/local/lib/python3.7/site-packages/ansible_mitogen/plugins/strategy']
HOST_KEY_CHECKING(/Users/miqueladrover/.ansible.cfg) = False
```
### OS / Environment
OSX, Centos 7
### Steps to Reproduce
In the following example playbook you would expect four failures to be sent to stderr, but only the first is actually sent there.
```
---
- hosts: localhost
any_errors_fatal: true
gather_facts: false
tasks:
- command: 'false'
ignore_errors: true
- command: "{{ item }}"
loop:
- 'true'
- 'false'
ignore_errors: true
- command: "{{ item }}"
loop:
- 'false'
- 'false'
```
### Expected Results
```
$ ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml > /dev/null
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006033", "end": "2021-05-31 11:14:56.067248", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:56.061215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006255", "end": "2021-05-31 11:14:49.104375", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.098120", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006004", "end": "2021-05-31 11:14:49.354347", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.348343", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.005797", "end": "2021-05-31 11:14:49.571522", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.565725", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
```
### Actual Results
The non-filtered output is:
```console
$ ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************************************************************************************************************************************************************************
TASK [command] *****************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006232", "end": "2021-05-31 11:14:48.623654", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:48.617422", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [command] *****************************************************************************************************************************************************************************************************************************
changed: [localhost] => (item=true)
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006255", "end": "2021-05-31 11:14:49.104375", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.098120", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [command] *****************************************************************************************************************************************************************************************************************************
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006004", "end": "2021-05-31 11:14:49.354347", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.348343", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.005797", "end": "2021-05-31 11:14:49.571522", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.565725", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
NO MORE HOSTS LEFT *************************************************************************************************************************************************************************************************************************
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=2
```
and redirecting stdout to /dev/null:
```
ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml > /dev/null
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006033", "end": "2021-05-31 11:14:56.067248", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:56.061215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74864
|
https://github.com/ansible/ansible/pull/74865
|
70f22c7f32d984dbd4eb2e62100353af022f6f4f
|
01ab6c6ec73915412f1a1d00f7c247147ccfd606
| 2021-05-31T09:53:05Z |
python
| 2021-06-01T15:31:46Z |
test/integration/targets/callback_default/callback_default.out.failed_to_stderr.stderr
|
+ ansible-playbook -i inventory test.yml
++ set +x
fatal: [testhost]: FAILED! => {"changed": false, "msg": "no reason"}
fatal: [testhost]: FAILED! => {"msg": "One or more items failed"}
fatal: [testhost]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
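For orientation, the three failure lines above appear to correspond, in order, to the non-loop "Failed task", the aggregate result of the failing loop ("One or more items failed"), and the rescued "EXPECTED FAILURE" task in the test play; with the fix applied, all three are routed to stderr.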
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,864 |
display_failed_stderr not working on tasks with loops
|
### Summary
Hi
I have noticed that the **display_failed_stderr** setting only sends task errors to stderr when the failing task does not have a loop. If the failing task has a loop, the output is never sent to stderr, even if every item in the loop fails.
This prevents effective filtering of errors by grepping stderr, given that not all failed tasks are sent to it.
Thanks!
### Issue Type
Bug Report
### Component Name
community.general collection
### Ansible Version
```console
ansible [core 2.11.0]
config file = /Users/miqueladrover/.ansible.cfg
configured module search path = ['/Users/miqueladrover/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/miqueladrover/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:36:27) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
ANSIBLE_PIPELINING(/Users/miqueladrover/.ansible.cfg) = True
DEFAULT_FORKS(/Users/miqueladrover/.ansible.cfg) = 50
DEFAULT_STRATEGY_PLUGIN_PATH(/Users/miqueladrover/.ansible.cfg) = ['/usr/local/lib/python3.7/site-packages/ansible_mitogen/plugins/strategy']
HOST_KEY_CHECKING(/Users/miqueladrover/.ansible.cfg) = False
```
### OS / Environment
OSX, Centos 7
### Steps to Reproduce
In the following example playbook you would expect four failures to be sent to stderr, but only the first is actually sent there.
```
---
- hosts: localhost
any_errors_fatal: true
gather_facts: false
tasks:
- command: 'false'
ignore_errors: true
- command: "{{ item }}"
loop:
- 'true'
- 'false'
ignore_errors: true
- command: "{{ item }}"
loop:
- 'false'
- 'false'
```
### Expected Results
```
$ ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml > /dev/null
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006033", "end": "2021-05-31 11:14:56.067248", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:56.061215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006255", "end": "2021-05-31 11:14:49.104375", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.098120", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006004", "end": "2021-05-31 11:14:49.354347", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.348343", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.005797", "end": "2021-05-31 11:14:49.571522", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.565725", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
```
### Actual Results
The non-filtered output is:
```console
$ ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************************************************************************************************************************************************************************
TASK [command] *****************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006232", "end": "2021-05-31 11:14:48.623654", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:48.617422", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [command] *****************************************************************************************************************************************************************************************************************************
changed: [localhost] => (item=true)
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006255", "end": "2021-05-31 11:14:49.104375", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.098120", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [command] *****************************************************************************************************************************************************************************************************************************
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.006004", "end": "2021-05-31 11:14:49.354347", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.348343", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=false) => {"ansible_loop_var": "item", "changed": true, "cmd": ["false"], "delta": "0:00:00.005797", "end": "2021-05-31 11:14:49.571522", "item": "false", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:49.565725", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
NO MORE HOSTS LEFT *************************************************************************************************************************************************************************************************************************
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=2
```
and redirecting stdout to /dev/null:
```console
ANSIBLE_DISPLAY_FAILED_STDERR=true ansible-playbook display_failed_stderr.yml > /dev/null
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.006033", "end": "2021-05-31 11:14:56.067248", "msg": "non-zero return code", "rc": 1, "start": "2021-05-31 11:14:56.061215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74864
|
https://github.com/ansible/ansible/pull/74865
|
70f22c7f32d984dbd4eb2e62100353af022f6f4f
|
01ab6c6ec73915412f1a1d00f7c247147ccfd606
| 2021-05-31T09:53:05Z |
python
| 2021-06-01T15:31:46Z |
test/integration/targets/callback_default/callback_default.out.failed_to_stderr.stdout
|
PLAY [testhost] ****************************************************************
TASK [Changed task] ************************************************************
changed: [testhost]
TASK [Ok task] *****************************************************************
ok: [testhost]
TASK [Failed task] *************************************************************
...ignoring
TASK [Skipped task] ************************************************************
skipping: [testhost]
TASK [Task with var in name (foo bar)] *****************************************
changed: [testhost]
TASK [Loop task] ***************************************************************
changed: [testhost] => (item=foo-1)
changed: [testhost] => (item=foo-2)
changed: [testhost] => (item=foo-3)
TASK [debug loop] **************************************************************
changed: [testhost] => (item=debug-1) => {
"msg": "debug-1"
}
failed: [testhost] (item=debug-2) => {
"msg": "debug-2"
}
ok: [testhost] => (item=debug-3) => {
"msg": "debug-3"
}
skipping: [testhost] => (item=debug-4)
...ignoring
TASK [EXPECTED FAILURE Failed task to be rescued] ******************************
TASK [Rescue task] *************************************************************
changed: [testhost]
TASK [include_tasks] ***********************************************************
included: .../test/integration/targets/callback_default/include_me.yml for testhost => (item=1)
TASK [debug] *******************************************************************
ok: [testhost] => {
"item": 1
}
TASK [copy] ********************************************************************
changed: [testhost]
TASK [replace] *****************************************************************
--- before: .../test_diff.txt
+++ after: .../test_diff.txt
@@ -1 +1 @@
-foo
\ No newline at end of file
+bar
\ No newline at end of file
changed: [testhost]
TASK [replace] *****************************************************************
ok: [testhost]
RUNNING HANDLER [Test handler 1] ***********************************************
changed: [testhost]
RUNNING HANDLER [Test handler 2] ***********************************************
ok: [testhost]
RUNNING HANDLER [Test handler 3] ***********************************************
changed: [testhost]
PLAY [testhost] ****************************************************************
TASK [First free task] *********************************************************
changed: [testhost]
TASK [Second free task] ********************************************************
changed: [testhost]
TASK [Include some tasks] ******************************************************
included: .../test/integration/targets/callback_default/include_me.yml for testhost => (item=1)
TASK [debug] *******************************************************************
ok: [testhost] => {
"item": 1
}
PLAY RECAP *********************************************************************
testhost : ok=19 changed=11 unreachable=0 failed=0 skipped=1 rescued=1 ignored=2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,238 |
get_option method for Action Plugin
|
##### SUMMARY
The documentation (see https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#plugin-configuration-documentation-standards) says: "To access the configuration settings in your plugin, use self.get_option(<option_name>). "
But there is no "get_option" method in "ActionBase"
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
Action plugins
##### ANSIBLE VERSION
```
ansible 2.10.2
  config file = /home/adm-dfournout/.ansible.cfg
  configured module search path = ['/home/adm-dfournout/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/adm-dfournout/.local/lib/python3.7/site-packages/ansible
  executable location = /home/adm-dfournout/.local/bin/ansible
  python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
```
|
https://github.com/ansible/ansible/issues/72238
|
https://github.com/ansible/ansible/pull/74799
|
adc9e4a0239b60eaf8feaed4ff288b086acd5f74
|
8d39332c3dbe21363a7f6779584495265c585d72
| 2020-10-16T15:57:50Z |
python
| 2021-06-01T17:46:04Z |
docs/docsite/rst/dev_guide/developing_plugins.rst
|
.. _developing_plugins:
.. _plugin_guidelines:
******************
Developing plugins
******************
.. contents::
:local:
Plugins augment Ansible's core functionality with logic and features that are accessible to all modules. Ansible collections include a number of handy plugins, and you can easily write your own. All plugins must:
* be written in Python
* raise errors
* return strings in unicode
* conform to Ansible's configuration and documentation standards
Once you've reviewed these general guidelines, you can skip to the particular type of plugin you want to develop.
Writing plugins in Python
=========================
You must write your plugin in Python so it can be loaded by the ``PluginLoader`` and returned as a Python object that any module can use. Since your plugin will execute on the controller, you must write it in a :ref:`compatible version of Python <control_node_requirements>`.
Raising errors
==============
You should return errors encountered during plugin execution by raising ``AnsibleError()`` or a similar class with a message describing the error. When wrapping other exceptions into error messages, you should always use the ``to_native`` Ansible function to ensure proper string compatibility across Python versions:
.. code-block:: python
from ansible.module_utils.common.text.converters import to_native
try:
cause_an_exception()
except Exception as e:
raise AnsibleError('Something happened, this was original exception: %s' % to_native(e))
Check the different `AnsibleError objects <https://github.com/ansible/ansible/blob/devel/lib/ansible/errors/__init__.py>`_ and see which one applies best to your situation.
String encoding
===============
You must convert any strings returned by your plugin into Python's unicode type. Converting to unicode ensures that these strings can run through Jinja2. To convert strings:
.. code-block:: python
from ansible.module_utils.common.text.converters import to_text
result_string = to_text(result_string)
Plugin configuration & documentation standards
==============================================
To define configurable options for your plugin, describe them in the ``DOCUMENTATION`` section of the python file. Callback and connection plugins have declared configuration requirements this way since Ansible version 2.4; most plugin types now do the same. This approach ensures that the documentation of your plugin's options will always be correct and up-to-date. To add a configurable option to your plugin, define it in this format:
.. code-block:: yaml
options:
option_name:
description: describe this config option
default: default value for this config option
env:
- name: NAME_OF_ENV_VAR
ini:
- section: section_of_ansible.cfg_where_this_config_option_is_defined
key: key_used_in_ansible.cfg
required: True/False
type: boolean/float/integer/list/none/path/pathlist/pathspec/string/tmppath
version_added: X.x
To access the configuration settings in your plugin, use ``self.get_option(<option_name>)``. For most plugin types, the controller pre-populates the settings. If you need to populate settings explicitly, use a ``self.set_options()`` call.
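For example, a minimal sketch of reading a declared option from inside a callback plugin (``format_string`` is a hypothetical option name that would have to be declared in the plugin's ``DOCUMENTATION`` block):
.. code-block:: python
    from ansible.plugins.callback import CallbackBase
    class CallbackModule(CallbackBase):
        def v2_playbook_on_start(self, playbook):
            # the controller has already populated this plugin's options by the
            # time a callback event fires, so get_option() can be used directly
            self._display.display(self.get_option('format_string'))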
Plugins that support embedded documentation (see :ref:`ansible-doc` for the list) should include well-formed doc strings. If you inherit from a plugin, you must document the options it takes, either via a documentation fragment or as a copy. See :ref:`module_documenting` for more information on correct documentation. Thorough documentation is a good idea even if you're developing a plugin for local use.
Developing particular plugin types
==================================
.. _developing_actions:
Action plugins
--------------
Action plugins let you integrate local processing and local data with module functionality.
To create an action plugin, create a new class with ``ActionBase`` as the parent class:
.. code-block:: python
from ansible.plugins.action import ActionBase
class ActionModule(ActionBase):
pass
From there, execute the module using the ``_execute_module`` method to call the original module.
After successful execution of the module, you can modify the module return data.
.. code-block:: python
module_return = self._execute_module(module_name='<NAME_OF_MODULE>',
module_args=module_args,
task_vars=task_vars, tmp=tmp)
For example, if you wanted to check the time difference between your Ansible controller and your target machine(s), you could write an action plugin to check the local time and compare it to the return data from Ansible's ``setup`` module:
.. code-block:: python
#!/usr/bin/python
# Make coding more python3-ish, this is required for contributions to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.action import ActionBase
from datetime import datetime
class ActionModule(ActionBase):
def run(self, tmp=None, task_vars=None):
super(ActionModule, self).run(tmp, task_vars)
module_args = self._task.args.copy()
module_return = self._execute_module(module_name='setup',
module_args=module_args,
task_vars=task_vars, tmp=tmp)
ret = dict()
remote_date = None
if not module_return.get('failed'):
for key, value in module_return['ansible_facts'].items():
if key == 'ansible_date_time':
remote_date = value['iso8601']
if remote_date:
remote_date_obj = datetime.strptime(remote_date, '%Y-%m-%dT%H:%M:%SZ')
time_delta = datetime.utcnow() - remote_date_obj
ret['delta_seconds'] = time_delta.seconds
ret['delta_days'] = time_delta.days
ret['delta_microseconds'] = time_delta.microseconds
return dict(ansible_facts=dict(ret))
This code checks the time on the controller, captures the date and time for the remote machine using the ``setup`` module, and calculates the difference between the captured time and
the local time, returning the time delta in days, seconds and microseconds.
For practical examples of action plugins,
see the source code for the `action plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/action>`_
.. _developing_cache_plugins:
Cache plugins
-------------
Cache plugins store gathered facts and data retrieved by inventory plugins.
Import cache plugins using the cache_loader so you can use ``self.set_options()`` and ``self.get_option(<option_name>)``. If you import a cache plugin directly in the code base, you can only access options via ``ansible.constants``, and you break the cache plugin's ability to be used by an inventory plugin.
.. code-block:: python
from ansible.plugins.loader import cache_loader
[...]
plugin = cache_loader.get('custom_cache', **cache_kwargs)
There are two base classes for cache plugins, ``BaseCacheModule`` for database-backed caches, and ``BaseFileCacheModule`` for file-backed caches.
To create a cache plugin, start by creating a new ``CacheModule`` class with the appropriate base class. If you're creating a plugin using an ``__init__`` method you should initialize the base class with any provided args and kwargs to be compatible with inventory plugin cache options. The base class calls ``self.set_options(direct=kwargs)``. After the base class ``__init__`` method is called ``self.get_option(<option_name>)`` should be used to access cache options.
New cache plugins should take the options ``_uri``, ``_prefix``, and ``_timeout`` to be consistent with existing cache plugins.
.. code-block:: python
from ansible.plugins.cache import BaseCacheModule
class CacheModule(BaseCacheModule):
def __init__(self, *args, **kwargs):
super(CacheModule, self).__init__(*args, **kwargs)
self._connection = self.get_option('_uri')
self._prefix = self.get_option('_prefix')
self._timeout = self.get_option('_timeout')
If you use the ``BaseCacheModule``, you must implement the methods ``get``, ``contains``, ``keys``, ``set``, ``delete``, ``flush``, and ``copy``. The ``contains`` method should return a boolean that indicates if the key exists and has not expired. Unlike file-based caches, the ``get`` method does not raise a KeyError if the cache has expired.
If you use the ``BaseFileCacheModule``, you must implement ``_load`` and ``_dump`` methods that will be called from the base class methods ``get`` and ``set``.
If your cache plugin stores JSON, use ``AnsibleJSONEncoder`` in the ``_dump`` or ``set`` method and ``AnsibleJSONDecoder`` in the ``_load`` or ``get`` method.
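A minimal sketch of what that might look like for a file-backed cache (error handling omitted; a real plugin would also declare its options in ``DOCUMENTATION``):
.. code-block:: python
    import json
    from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder
    from ansible.plugins.cache import BaseFileCacheModule
    class CacheModule(BaseFileCacheModule):
        def _load(self, filepath):
            # called by the base class get(); decode Ansible-specific types (vault, unsafe, ...)
            with open(filepath, 'r') as f:
                return json.load(f, cls=AnsibleJSONDecoder)
        def _dump(self, value, filepath):
            # called by the base class set(); encode Ansible-specific types safely
            with open(filepath, 'w') as f:
                f.write(json.dumps(value, cls=AnsibleJSONEncoder, sort_keys=True, indent=4))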
For example cache plugins, see the source code for the `cache plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/cache>`_.
.. _developing_callbacks:
Callback plugins
----------------
Callback plugins add new behaviors to Ansible when responding to events. By default, callback plugins control most of the output you see when running the command line programs.
To create a callback plugin, create a new class with ``CallbackBase`` as the parent class:
.. code-block:: python
from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
pass
From there, override the specific methods from the CallbackBase that you want to provide a callback for.
For plugins intended for use with Ansible version 2.0 and later, you should only override methods that start with ``v2``.
For a complete list of methods that you can override, please see ``__init__.py`` in the
`lib/ansible/plugins/callback <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback>`_ directory.
The following is a modified example of how Ansible's timer plugin is implemented,
but with an extra option so you can see how configuration works in Ansible version 2.4 and later:
.. code-block:: python
# Make coding more python3-ish, this is required for contributions to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# not only visible to ansible-doc, it also 'declares' the options the plugin requires and how to configure them.
DOCUMENTATION = '''
callback: timer
callback_type: aggregate
requirements:
- enable in configuration
short_description: Adds time to play stats
version_added: "2.0" # for collections, use the collection version, not the Ansible version
description:
- This callback just adds total play duration to the play stats.
options:
format_string:
description: format of the string shown to user at play end
ini:
- section: callback_timer
key: format_string
env:
- name: ANSIBLE_CALLBACK_TIMER_FORMAT
default: "Playbook run took %s days, %s hours, %s minutes, %s seconds"
'''
from datetime import datetime
from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""
This callback module tells you how long your plays ran for.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_NAME = 'namespace.collection_name.timer'
# only needed if you ship it and don't want to enable by default
CALLBACK_NEEDS_ENABLED = True
def __init__(self):
# make sure the expected objects are present, calling the base's __init__
super(CallbackModule, self).__init__()
# start the timer when the plugin is loaded, the first play should start a few milliseconds after.
self.start_time = datetime.now()
def _days_hours_minutes_seconds(self, runtime):
''' internal helper method for this callback '''
minutes = (runtime.seconds // 60) % 60
r_seconds = runtime.seconds - (minutes * 60)
return runtime.days, runtime.seconds // 3600, minutes, r_seconds
# this is the only event we care about for display, when the play shows its summary stats; the rest are ignored by the base class
def v2_playbook_on_stats(self, stats):
end_time = datetime.now()
runtime = end_time - self.start_time
# Shows the usage of a config option declared in the DOCUMENTATION variable. Ansible will have set it when it loads the plugin.
# Also note the use of the display object to print to screen. This is available to all callbacks, and you should use this over printing yourself
self._display.display(self._plugin_options['format_string'] % (self._days_hours_minutes_seconds(runtime)))
Note that the ``CALLBACK_VERSION`` and ``CALLBACK_NAME`` definitions are required for properly functioning plugins for Ansible version 2.0 and later. ``CALLBACK_TYPE`` is mostly needed to distinguish 'stdout' plugins from the rest, since you can only load one plugin that writes to stdout.
For example callback plugins, see the source code for the `callback plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback>`_
New in ansible-core 2.11, callback plugins are notified (via ``v2_playbook_on_task_start``) of :ref:`meta<meta_module>` tasks. By default, only explicit ``meta`` tasks that users list in their plays are sent to callbacks.
There are also some tasks which are generated internally and implicitly at various points in execution. Callback plugins can opt-in to receiving these implicit tasks as well, by setting ``self.wants_implicit_tasks = True``. Any ``Task`` object received by a callback hook will have an ``.implicit`` attribute, which can be consulted to determine whether the ``Task`` originated from within Ansible, or explicitly by the user.
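A short sketch of a callback opting in to implicit tasks (for illustration only):
.. code-block:: python
    from ansible.plugins.callback import CallbackBase
    class CallbackModule(CallbackBase):
        CALLBACK_VERSION = 2.0
        CALLBACK_TYPE = 'aggregate'
        CALLBACK_NAME = 'namespace.collection_name.implicit_logger'
        def __init__(self):
            super(CallbackModule, self).__init__()
            # opt in to being notified of internally generated tasks as well
            self.wants_implicit_tasks = True
        def v2_playbook_on_task_start(self, task, is_conditional):
            if task.implicit:
                self._display.display(u"implicit task: %s" % task.get_name())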
.. _developing_connection_plugins:
Connection plugins
------------------
Connection plugins allow Ansible to connect to the target hosts so it can execute tasks on them. Ansible ships with many connection plugins, but only one can be used per host at a time. The most commonly used connection plugins are the ``paramiko`` SSH, native ssh (just called ``ssh``), and ``local`` connection types. All of these can be used in playbooks and with ``/usr/bin/ansible`` to connect to remote machines.
Ansible version 2.1 introduced the ``smart`` connection plugin. The ``smart`` connection type allows Ansible to automatically select either the ``paramiko`` or ``openssh`` connection plugin based on system capabilities, or the ``ssh`` connection plugin if OpenSSH supports ControlPersist.
To create a new connection plugin (for example, to support SNMP, Message bus, or other transports), copy the format of one of the existing connection plugins and drop it into ``connection`` directory on your :ref:`local plugin path <local_plugins>`.
Connection plugins can support common options (such as the ``--timeout`` flag) by defining an entry in the documentation for the attribute name (in this case ``timeout``). If the common option has a non-null default, the plugin should define the same default since a different default would be ignored.
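A sketch of how such an entry might look in a connection plugin's ``DOCUMENTATION`` (the exact ``env``/``ini`` keys are illustrative --- check the existing connection plugins for the entries actually used):
.. code-block:: python
    DOCUMENTATION = '''
        options:
          timeout:
            description: Connection timeout in seconds.
            default: 10   # should match the global default of the common option
            type: integer
            env:
              - name: ANSIBLE_TIMEOUT
            ini:
              - section: defaults
                key: timeout
    '''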
For example connection plugins, see the source code for the `connection plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/connection>`_.
.. _developing_filter_plugins:
Filter plugins
--------------
Filter plugins manipulate data. They are a feature of Jinja2 and are also available in Jinja2 templates used by the ``template`` module. As with all plugins, they can be easily extended, but instead of having a file for each one you can have several per file. Most of the filter plugins shipped with Ansible reside in a ``core.py``.
Filter plugins do not use the standard configuration and documentation system described above.
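Instead, a filter plugin file defines a ``FilterModule`` class whose ``filters`` method returns a mapping of filter names to callables. A minimal sketch (``shout`` is a made-up example filter):
.. code-block:: python
    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type
    def shout(value):
        # upper-case the given value, for example "{{ 'hi' | shout }}" -> "HI"
        return str(value).upper()
    class FilterModule(object):
        def filters(self):
            return {'shout': shout}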
For example filter plugins, see the source code for the `filter plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/filter>`_.
.. _developing_inventory_plugins:
Inventory plugins
-----------------
Inventory plugins parse inventory sources and form an in-memory representation of the inventory. Inventory plugins were added in Ansible version 2.4.
You can see the details for inventory plugins in the :ref:`developing_inventory` page.
.. _developing_lookup_plugins:
Lookup plugins
--------------
Lookup plugins pull in data from external data stores. Lookup plugins can be used within playbooks both for looping --- playbook language constructs like ``with_fileglob`` and ``with_items`` are implemented via lookup plugins --- and to return values into a variable or parameter.
Lookup plugins are very flexible, allowing you to retrieve and return any type of data. When writing lookup plugins, always return data of a consistent type that can be easily consumed in a playbook. Avoid parameters that change the returned data type. If there is a need to return a single value sometimes and a complex dictionary other times, write two different lookup plugins.
Ansible includes many :ref:`filters <playbooks_filters>` which can be used to manipulate the data returned by a lookup plugin. Sometimes it makes sense to do the filtering inside the lookup plugin, other times it is better to return results that can be filtered in the playbook. Keep in mind how the data will be referenced when determining the appropriate level of filtering to be done inside the lookup plugin.
Here's a simple lookup plugin implementation --- this lookup returns the contents of a text file as a variable:
.. code-block:: python
# python 3 headers, required if submitting to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: file
author: Daniel Hokka Zakrisson <[email protected]>
version_added: "0.9" # for collections, use the collection version, not the Ansible version
short_description: read file contents
description:
- This lookup returns the contents from a file on the Ansible controller's file system.
options:
_terms:
description: path(s) of files to read
required: True
notes:
- if read in variable context, the file can be interpreted as YAML if the content is valid to the parser.
- this lookup does not understand globbing --- use the fileglob lookup instead.
"""
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.plugins.lookup import LookupBase
from ansible.utils.display import Display
display = Display()
class LookupModule(LookupBase):
def run(self, terms, variables=None, **kwargs):
# lookups in general are expected to both take a list as input and output a list
# this is done so they work with the looping construct 'with_'.
ret = []
for term in terms:
display.debug("File lookup term: %s" % term)
# Find the file in the expected search path, using a class method
# that implements the 'expected' search path for Ansible plugins.
lookupfile = self.find_file_in_search_path(variables, 'files', term)
# Don't use print or your own logging, the display class
# takes care of it in a unified way.
display.vvvv(u"File lookup using %s as file" % lookupfile)
try:
if lookupfile:
contents, show_data = self._loader._get_file_contents(lookupfile)
ret.append(contents.rstrip())
else:
# Always use ansible error classes to throw 'final' exceptions,
# so the Ansible engine will know how to deal with them.
# The Parser error indicates invalid options passed
raise AnsibleParserError()
except AnsibleParserError:
raise AnsibleError("could not locate file in lookup: %s" % term)
return ret
The following is an example of how this lookup is called:
.. code-block:: YAML
---
- hosts: all
vars:
contents: "{{ lookup('namespace.collection_name.file', '/etc/foo.txt') }}"
tasks:
- debug:
msg: the value of foo.txt is {{ contents }} as seen today {{ lookup('pipe', 'date +"%Y-%m-%d"') }}
For example lookup plugins, see the source code for the `lookup plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/lookup>`_.
For more usage examples of lookup plugins, see :ref:`Using Lookups<playbooks_lookups>`.
.. _developing_test_plugins:
Test plugins
------------
Test plugins verify data. They are a feature of Jinja2 and are also available in Jinja2 templates used by the ``template`` module. As with all plugins, they can be easily extended, but instead of having a file for each one you can have several per file. Most of the test plugins shipped with Ansible reside in a ``core.py``. These are especially useful in conjunction with some filter plugins like ``map`` and ``select``; they are also available for conditional directives like ``when:``.
Test plugins do not use the standard configuration and documentation system described above.
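Instead, a test plugin file defines a ``TestModule`` class whose ``tests`` method returns a mapping of test names to callables that return a boolean. A minimal sketch (``shouting`` is a made-up example test):
.. code-block:: python
    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type
    def is_shouting(value):
        # usable as "{{ 'HI' is shouting }}" or in "when:" conditionals
        return str(value).isupper()
    class TestModule(object):
        def tests(self):
            return {'shouting': is_shouting}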
For example test plugins, see the source code for the `test plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/test>`_.
.. _developing_vars_plugins:
Vars plugins
------------
Vars plugins inject additional variable data into Ansible runs that did not come from an inventory source, playbook, or command line. Playbook constructs like 'host_vars' and 'group_vars' work using vars plugins.
Vars plugins were partially implemented in Ansible 2.0 and rewritten to be fully implemented starting with Ansible 2.4. Vars plugins are supported by collections starting with Ansible 2.10.
Older plugins used a ``run`` method as their main body/work:
.. code-block:: python
def run(self, name, vault_password=None):
pass # your code goes here
Ansible 2.0 did not pass passwords to older plugins, so vaults were unavailable.
Most of the work now happens in the ``get_vars`` method which is called from the VariableManager when needed.
.. code-block:: python
def get_vars(self, loader, path, entities):
pass # your code goes here
The parameters are:
* loader: Ansible's DataLoader. The DataLoader can read files, auto-load JSON/YAML and decrypt vaulted data, and cache read files.
* path: this is 'directory data' for every inventory source and the current play's playbook directory, so they can search for data in reference to them. ``get_vars`` will be called at least once per available path.
* entities: these are host or group names that are pertinent to the variables needed. The plugin will get called once for hosts and again for groups.
This ``get_vars`` method just needs to return a dictionary structure with the variables.
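A minimal sketch of a complete vars plugin (the returned variable is hypothetical, and a real plugin would usually derive its data from ``path`` and ``entities``):
.. code-block:: python
    from ansible.plugins.vars import BaseVarsPlugin
    class VarsModule(BaseVarsPlugin):
        def get_vars(self, loader, path, entities):
            # the base class records the base directory for later lookups
            super(VarsModule, self).get_vars(loader, path, entities)
            # return the same made-up variable for every host and group entity
            return {'my_custom_var': 'set by my vars plugin'}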
Since Ansible version 2.4, vars plugins only execute as needed when preparing to execute a task. This avoids the costly 'always execute' behavior that occurred during inventory construction in older versions of Ansible. Since Ansible version 2.10, vars plugin execution can be toggled by the user to run when preparing to execute a task or after importing an inventory source.
You can create vars plugins that are not enabled by default using the class variable ``REQUIRES_ENABLED``. If your vars plugin resides in a collection, it cannot be enabled by default. You must use ``REQUIRES_ENABLED`` in all collections-based vars plugins. To require users to enable your plugin, set the class variable ``REQUIRES_ENABLED``:
.. code-block:: python
class VarsModule(BaseVarsPlugin):
REQUIRES_ENABLED = True
Include the ``vars_plugin_staging`` documentation fragment to allow users to determine when vars plugins run.
.. code-block:: python
DOCUMENTATION = '''
vars: custom_hostvars
version_added: "2.10" # for collections, use the collection version, not the Ansible version
short_description: Load custom host vars
description: Load custom host vars
options:
stage:
ini:
- key: stage
section: vars_custom_hostvars
env:
- name: ANSIBLE_VARS_PLUGIN_STAGE
extends_documentation_fragment:
- vars_plugin_staging
'''
For example vars plugins, see the source code for the `vars plugins included with Ansible Core
<https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/vars>`_.
.. seealso::
:ref:`list_of_collections`
Browse existing collections, modules, and plugins
:ref:`developing_api`
Learn about the Python API for task execution
:ref:`developing_inventory`
Learn about how to develop dynamic inventory sources
:ref:`developing_modules_general`
Learn about how to write Ansible modules
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.libera.chat <https://libera.chat/>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,657 |
All 'hardware/*.py' fact gathering silently fails when get_mount_info() throws an exception
|
### Summary
When `hardware/linux.py` is gathering facts, if `get_mount_info()` (inside `get_mount_facts()`) fails and `res.successful()` returns false, then it tries to call `res.get()` on the result:
https://github.com/ansible/ansible/blob/d1c49f2e1c991b56060267b38751dad4ba703bdb/lib/ansible/module_utils/facts/hardware/linux.py#L576-L577
However, `multiprocessing.pool.AsyncResult.get()` is defined to reraise the exception, not return it: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.AsyncResult.get
This causes the entire `hardware` fact gathering to fail due to the reraised exception. Additionally, this exception gets eaten by the following handler. The `sys.stderr.write` doesn't seem to print to the controller's terminal. I had to remove it in order to see the stack trace.
https://github.com/ansible/ansible/blob/d1c49f2e1c991b56060267b38751dad4ba703bdb/lib/ansible/module_utils/facts/ansible_collector.py#L75-L77
The end result is that whenever `get_mount_info()` on the async pool fails, the exception is reraised outside of the `multiprocessing` context and _all_ of the `hardware/*.py` facts get silently dropped.
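For reference, the reraise behavior is easy to demonstrate outside of Ansible with a minimal standalone script:
```python
from multiprocessing.pool import ThreadPool
def boom():
    raise RuntimeError('worker failed')
pool = ThreadPool(processes=1)
res = pool.apply_async(boom)
pool.close()
pool.join()
print(res.successful())  # False
try:
    res.get()  # does not return the exception object...
except RuntimeError as e:
    print('get() reraised: %s' % e)  # ...it reraises it
```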
### Issue Type
Bug Report
### Component Name
facts
### Ansible Version
```console
ansible 2.10.9
config file = None
configured module search path = ['/home/jenkins/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.10 (default, May 6 2021, 00:05:59) [GCC 10.2.1 20201203]
```
### Configuration
```console
(no output)
```
### OS / Environment
Controller:
* Alpine Linux 3.13.5
* Python 3.8
All target hosts:
* CentOS 7
* Python 2.7
### Steps to Reproduce
```yaml
- name: Gather hardware facts
setup:
gather_subset: '!all,!min,hardware'
- name: DEBUG - Dump host vars state if fact gathering does not include mount information
debug:
var: ansible_facts
when: ansible_mounts is not defined
```
With this playbook, it may take many reruns to reproduce the issue (need `get_mount_info()` to fail). It may be easier to simply edit `hardware/linux.py` to make `get_mount_info()` always throw an exception since the issue is in the error handling code.
### Expected Results
When `get_mount_info()` fails, the exception information should show up in the `['info']['note']` field of the mountpoint fact. (Additionally, I think that any hard failures during fact gathering should have some error printed to the screen instead of being silently ignored.)
### Actual Results
```console
When `get_mount_info()` fails, all `hardware` facts are silently ignored/dropped.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74657
|
https://github.com/ansible/ansible/pull/74714
|
8d39332c3dbe21363a7f6779584495265c585d72
|
ba2b1a6bf6f4a1d5ef975789caa380e72b0a4e77
| 2021-05-11T15:11:25Z |
python
| 2021-06-01T18:52:22Z |
changelogs/fragments/linux_hw_facts_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,657 |
All 'hardware/*.py' fact gathering silently fails when get_mount_info() throws an exception
|
### Summary
When `hardware/linux.py` is gathering facts, if `get_mount_info()` (inside `get_mount_facts()`) fails and `res.successful()` returns false, then it tries to call `res.get()` on the result:
https://github.com/ansible/ansible/blob/d1c49f2e1c991b56060267b38751dad4ba703bdb/lib/ansible/module_utils/facts/hardware/linux.py#L576-L577
However, `multiprocessing.pool.AsyncResult.get()` is defined to reraise the exception, not return it: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.AsyncResult.get
This causes the entire `hardware` fact gathering to fail due to the reraised exception. Additionally, this exception gets eaten by the following handler. The `sys.stderr.write` doesn't seem to print to the controller's terminal. I had to remove it in order to see the stack trace.
https://github.com/ansible/ansible/blob/d1c49f2e1c991b56060267b38751dad4ba703bdb/lib/ansible/module_utils/facts/ansible_collector.py#L75-L77
The end result is that whenever `get_mount_info()` on the async pool fails, the exception is reraised outside of the `multiprocessing` context and _all_ of the `hardware/*.py` facts get silently dropped.
### Issue Type
Bug Report
### Component Name
facts
### Ansible Version
```console
ansible 2.10.9
config file = None
configured module search path = ['/home/jenkins/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.10 (default, May 6 2021, 00:05:59) [GCC 10.2.1 20201203]
```
### Configuration
```console
(no output)
```
### OS / Environment
Controller:
* Alpine Linux 3.13.5
* Python 3.8
All target hosts:
* CentOS 7
* Python 2.7
### Steps to Reproduce
```yaml
- name: Gather hardware facts
setup:
gather_subset: '!all,!min,hardware'
- name: DEBUG - Dump host vars state if fact gathering does not include mount information
debug:
var: ansible_facts
when: ansible_mounts is not defined
```
With this playbook, it may take many reruns to reproduce the issue (need `get_mount_info()` to fail). It may be easier to simply edit `hardware/linux.py` to make `get_mount_info()` always throw an exception since the issue is in the error handling code.
### Expected Results
When `get_mount_info()` fails, the exception information should show up in the `['info']['note']` field of the mountpoint fact. (Additionally, I think that any hard failures during fact gathering should have some error printed to the screen instead of being silently ignored.)
### Actual Results
```console
When `get_mount_info()` fails, all `hardware` facts are silently ignored/dropped.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74657
|
https://github.com/ansible/ansible/pull/74714
|
8d39332c3dbe21363a7f6779584495265c585d72
|
ba2b1a6bf6f4a1d5ef975789caa380e72b0a4e77
| 2021-05-11T15:11:25Z |
python
| 2021-06-01T18:52:22Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import errno
import glob
import json
import os
import re
import sys
import time
from multiprocessing import cpu_count
from multiprocessing.pool import ThreadPool
from ansible.module_utils._text import to_text
from ansible.module_utils.six import iteritems
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.formatters import bytes_to_human
from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
from ansible.module_utils.facts.utils import get_file_content, get_file_lines, get_mount_size
# import this as a module to ensure we get the same module instance
from ansible.module_utils.facts import timeout
def get_partition_uuid(partname):
try:
uuids = os.listdir("/dev/disk/by-uuid")
except OSError:
return
for uuid in uuids:
dev = os.path.realpath("/dev/disk/by-uuid/" + uuid)
if dev == ("/dev/" + partname):
return uuid
return None
class LinuxHardware(Hardware):
"""
Linux-specific subclass of Hardware. Defines memory and CPU facts:
- memfree_mb
- memtotal_mb
- swapfree_mb
- swaptotal_mb
- processor (a list)
- processor_cores
- processor_count
In addition, it also defines number of DMI facts and device facts.
"""
platform = 'Linux'
# Originally only had these four as toplevelfacts
ORIGINAL_MEMORY_FACTS = frozenset(('MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'))
# Now we have all of these in a dict structure
MEMORY_FACTS = ORIGINAL_MEMORY_FACTS.union(('Buffers', 'Cached', 'SwapCached'))
# regex used against findmnt output to detect bind mounts
BIND_MOUNT_RE = re.compile(r'.*\]')
# regex used against mtab content to find entries that are bind mounts
MTAB_BIND_MOUNT_RE = re.compile(r'.*bind.*"')
# regex used for replacing octal escape sequences
OCTAL_ESCAPE_RE = re.compile(r'\\[0-9]{3}')
def populate(self, collected_facts=None):
hardware_facts = {}
self.module.run_command_environ_update = {'LANG': 'C', 'LC_ALL': 'C', 'LC_NUMERIC': 'C'}
cpu_facts = self.get_cpu_facts(collected_facts=collected_facts)
memory_facts = self.get_memory_facts()
dmi_facts = self.get_dmi_facts()
device_facts = self.get_device_facts()
uptime_facts = self.get_uptime_facts()
lvm_facts = self.get_lvm_facts()
mount_facts = {}
try:
mount_facts = self.get_mount_facts()
except timeout.TimeoutError:
pass
hardware_facts.update(cpu_facts)
hardware_facts.update(memory_facts)
hardware_facts.update(dmi_facts)
hardware_facts.update(device_facts)
hardware_facts.update(uptime_facts)
hardware_facts.update(lvm_facts)
hardware_facts.update(mount_facts)
return hardware_facts
def get_memory_facts(self):
memory_facts = {}
if not os.access("/proc/meminfo", os.R_OK):
return memory_facts
memstats = {}
for line in get_file_lines("/proc/meminfo"):
data = line.split(":", 1)
key = data[0]
if key in self.ORIGINAL_MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memory_facts["%s_mb" % key.lower()] = int(val) // 1024
if key in self.MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memstats[key.lower()] = int(val) // 1024
if None not in (memstats.get('memtotal'), memstats.get('memfree')):
memstats['real:used'] = memstats['memtotal'] - memstats['memfree']
if None not in (memstats.get('cached'), memstats.get('memfree'), memstats.get('buffers')):
memstats['nocache:free'] = memstats['cached'] + memstats['memfree'] + memstats['buffers']
if None not in (memstats.get('memtotal'), memstats.get('nocache:free')):
memstats['nocache:used'] = memstats['memtotal'] - memstats['nocache:free']
if None not in (memstats.get('swaptotal'), memstats.get('swapfree')):
memstats['swap:used'] = memstats['swaptotal'] - memstats['swapfree']
memory_facts['memory_mb'] = {
'real': {
'total': memstats.get('memtotal'),
'used': memstats.get('real:used'),
'free': memstats.get('memfree'),
},
'nocache': {
'free': memstats.get('nocache:free'),
'used': memstats.get('nocache:used'),
},
'swap': {
'total': memstats.get('swaptotal'),
'free': memstats.get('swapfree'),
'used': memstats.get('swap:used'),
'cached': memstats.get('swapcached'),
},
}
return memory_facts
def get_cpu_facts(self, collected_facts=None):
cpu_facts = {}
collected_facts = collected_facts or {}
i = 0
vendor_id_occurrence = 0
model_name_occurrence = 0
processor_occurence = 0
physid = 0
coreid = 0
sockets = {}
cores = {}
xen = False
xen_paravirt = False
try:
if os.path.exists('/proc/xen'):
xen = True
else:
for line in get_file_lines('/sys/hypervisor/type'):
if line.strip() == 'xen':
xen = True
# Only interested in the first line
break
except IOError:
pass
if not os.access("/proc/cpuinfo", os.R_OK):
return cpu_facts
cpu_facts['processor'] = []
for line in get_file_lines('/proc/cpuinfo'):
data = line.split(":", 1)
key = data[0].strip()
try:
val = data[1].strip()
except IndexError:
val = ""
if xen:
if key == 'flags':
# Check for vme cpu flag, Xen paravirt does not expose this.
# Need to detect Xen paravirt because it exposes cpuinfo
# differently than Xen HVM or KVM and causes reporting of
# only a single cpu core.
if 'vme' not in val:
xen_paravirt = True
# model name is for Intel arch, Processor (mind the uppercase P)
# works for some ARM devices, like the Sheevaplug.
# 'ncpus active' is SPARC attribute
if key in ['model name', 'Processor', 'vendor_id', 'cpu', 'Vendor', 'processor']:
if 'processor' not in cpu_facts:
cpu_facts['processor'] = []
cpu_facts['processor'].append(val)
if key == 'vendor_id':
vendor_id_occurrence += 1
if key == 'model name':
model_name_occurrence += 1
if key == 'processor':
processor_occurence += 1
i += 1
elif key == 'physical id':
physid = val
if physid not in sockets:
sockets[physid] = 1
elif key == 'core id':
coreid = val
if coreid not in sockets:
cores[coreid] = 1
elif key == 'cpu cores':
sockets[physid] = int(val)
elif key == 'siblings':
cores[coreid] = int(val)
elif key == '# processors':
cpu_facts['processor_cores'] = int(val)
elif key == 'ncpus active':
i = int(val)
# Skip for platforms without vendor_id/model_name in cpuinfo (e.g ppc64le)
if vendor_id_occurrence > 0:
if vendor_id_occurrence == model_name_occurrence:
i = vendor_id_occurrence
# The fields for ARM CPUs do not always include 'vendor_id' or 'model name',
# and sometimes includes both 'processor' and 'Processor'.
# The fields for Power CPUs include 'processor' and 'cpu'.
# Always use 'processor' count for ARM and Power systems
if collected_facts.get('ansible_architecture', '').startswith(('armv', 'aarch', 'ppc')):
i = processor_occurence
# FIXME
if collected_facts.get('ansible_architecture') != 's390x':
if xen_paravirt:
cpu_facts['processor_count'] = i
cpu_facts['processor_cores'] = i
cpu_facts['processor_threads_per_core'] = 1
cpu_facts['processor_vcpus'] = i
else:
if sockets:
cpu_facts['processor_count'] = len(sockets)
else:
cpu_facts['processor_count'] = i
socket_values = list(sockets.values())
if socket_values and socket_values[0]:
cpu_facts['processor_cores'] = socket_values[0]
else:
cpu_facts['processor_cores'] = 1
core_values = list(cores.values())
if core_values:
cpu_facts['processor_threads_per_core'] = core_values[0] // cpu_facts['processor_cores']
else:
cpu_facts['processor_threads_per_core'] = 1 // cpu_facts['processor_cores']
cpu_facts['processor_vcpus'] = (cpu_facts['processor_threads_per_core'] *
cpu_facts['processor_count'] * cpu_facts['processor_cores'])
# if the number of processors available to the module's
# thread cannot be determined, the processor count
# reported by /proc will be the default:
cpu_facts['processor_nproc'] = processor_occurence
try:
cpu_facts['processor_nproc'] = len(
os.sched_getaffinity(0)
)
except AttributeError:
# In Python < 3.3, os.sched_getaffinity() is not available
try:
cmd = get_bin_path('nproc')
except ValueError:
pass
else:
rc, out, _err = self.module.run_command(cmd)
if rc == 0:
cpu_facts['processor_nproc'] = int(out)
return cpu_facts
def get_dmi_facts(self):
''' learn dmi facts from system
Try /sys first for dmi related facts.
If that is not available, fall back to dmidecode executable '''
dmi_facts = {}
if os.path.exists('/sys/devices/virtual/dmi/id/product_name'):
# Use kernel DMI info, if available
# DMI SPEC -- https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.2.0.pdf
FORM_FACTOR = ["Unknown", "Other", "Unknown", "Desktop",
"Low Profile Desktop", "Pizza Box", "Mini Tower", "Tower",
"Portable", "Laptop", "Notebook", "Hand Held", "Docking Station",
"All In One", "Sub Notebook", "Space-saving", "Lunch Box",
"Main Server Chassis", "Expansion Chassis", "Sub Chassis",
"Bus Expansion Chassis", "Peripheral Chassis", "RAID Chassis",
"Rack Mount Chassis", "Sealed-case PC", "Multi-system",
"CompactPCI", "AdvancedTCA", "Blade", "Blade Enclosure",
"Tablet", "Convertible", "Detachable", "IoT Gateway",
"Embedded PC", "Mini PC", "Stick PC"]
DMI_DICT = {
'bios_date': '/sys/devices/virtual/dmi/id/bios_date',
'bios_vendor': '/sys/devices/virtual/dmi/id/bios_vendor',
'bios_version': '/sys/devices/virtual/dmi/id/bios_version',
'board_asset_tag': '/sys/devices/virtual/dmi/id/board_asset_tag',
'board_name': '/sys/devices/virtual/dmi/id/board_name',
'board_serial': '/sys/devices/virtual/dmi/id/board_serial',
'board_vendor': '/sys/devices/virtual/dmi/id/board_vendor',
'board_version': '/sys/devices/virtual/dmi/id/board_version',
'chassis_asset_tag': '/sys/devices/virtual/dmi/id/chassis_asset_tag',
'chassis_serial': '/sys/devices/virtual/dmi/id/chassis_serial',
'chassis_vendor': '/sys/devices/virtual/dmi/id/chassis_vendor',
'chassis_version': '/sys/devices/virtual/dmi/id/chassis_version',
'form_factor': '/sys/devices/virtual/dmi/id/chassis_type',
'product_name': '/sys/devices/virtual/dmi/id/product_name',
'product_serial': '/sys/devices/virtual/dmi/id/product_serial',
'product_uuid': '/sys/devices/virtual/dmi/id/product_uuid',
'product_version': '/sys/devices/virtual/dmi/id/product_version',
'system_vendor': '/sys/devices/virtual/dmi/id/sys_vendor',
}
for (key, path) in DMI_DICT.items():
data = get_file_content(path)
if data is not None:
if key == 'form_factor':
try:
dmi_facts['form_factor'] = FORM_FACTOR[int(data)]
except IndexError:
dmi_facts['form_factor'] = 'unknown (%s)' % data
else:
dmi_facts[key] = data
else:
dmi_facts[key] = 'NA'
else:
# Fall back to using dmidecode, if available
dmi_bin = self.module.get_bin_path('dmidecode')
DMI_DICT = {
'bios_date': 'bios-release-date',
'bios_vendor': 'bios-vendor',
'bios_version': 'bios-version',
'board_asset_tag': 'baseboard-asset-tag',
'board_name': 'baseboard-product-name',
'board_serial': 'baseboard-serial-number',
'board_vendor': 'baseboard-manufacturer',
'board_version': 'baseboard-version',
'chassis_asset_tag': 'chassis-asset-tag',
'chassis_serial': 'chassis-serial-number',
'chassis_vendor': 'chassis-manufacturer',
'chassis_version': 'chassis-version',
'form_factor': 'chassis-type',
'product_name': 'system-product-name',
'product_serial': 'system-serial-number',
'product_uuid': 'system-uuid',
'product_version': 'system-version',
'system_vendor': 'system-manufacturer',
}
for (k, v) in DMI_DICT.items():
if dmi_bin is not None:
(rc, out, err) = self.module.run_command('%s -s %s' % (dmi_bin, v))
if rc == 0:
# Strip out commented lines (specific dmidecode output)
thisvalue = ''.join([line for line in out.splitlines() if not line.startswith('#')])
try:
json.dumps(thisvalue)
except UnicodeDecodeError:
thisvalue = "NA"
dmi_facts[k] = thisvalue
else:
dmi_facts[k] = 'NA'
else:
dmi_facts[k] = 'NA'
return dmi_facts
def _run_lsblk(self, lsblk_path):
# call lsblk and collect all uuids
# --exclude 2 makes lsblk ignore floppy disks, which are slower to answer than typical timeouts
# this uses the linux major device number
# for details see https://www.kernel.org/doc/Documentation/devices.txt
args = ['--list', '--noheadings', '--paths', '--output', 'NAME,UUID', '--exclude', '2']
cmd = [lsblk_path] + args
rc, out, err = self.module.run_command(cmd)
return rc, out, err
def _lsblk_uuid(self):
uuids = {}
lsblk_path = self.module.get_bin_path("lsblk")
if not lsblk_path:
return uuids
rc, out, err = self._run_lsblk(lsblk_path)
if rc != 0:
return uuids
# each line will be in format:
# <devicename><some whitespace><uuid>
# /dev/sda1 32caaec3-ef40-4691-a3b6-438c3f9bc1c0
for lsblk_line in out.splitlines():
if not lsblk_line:
continue
line = lsblk_line.strip()
fields = line.rsplit(None, 1)
if len(fields) < 2:
continue
device_name, uuid = fields[0].strip(), fields[1].strip()
if device_name in uuids:
continue
uuids[device_name] = uuid
return uuids
def _udevadm_uuid(self, device):
# fallback for versions of lsblk <= 2.23 that don't have --paths, see _run_lsblk() above
uuid = 'N/A'
udevadm_path = self.module.get_bin_path('udevadm')
if not udevadm_path:
return uuid
cmd = [udevadm_path, 'info', '--query', 'property', '--name', device]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
return uuid
# a snippet of the output of the udevadm command below will be:
# ...
# ID_FS_TYPE=ext4
# ID_FS_USAGE=filesystem
# ID_FS_UUID=57b1a3e7-9019-4747-9809-7ec52bba9179
# ...
m = re.search('ID_FS_UUID=(.*)\n', out)
if m:
uuid = m.group(1)
return uuid
def _run_findmnt(self, findmnt_path):
args = ['--list', '--noheadings', '--notruncate']
cmd = [findmnt_path] + args
rc, out, err = self.module.run_command(cmd, errors='surrogate_then_replace')
return rc, out, err
def _find_bind_mounts(self):
bind_mounts = set()
findmnt_path = self.module.get_bin_path("findmnt")
if not findmnt_path:
return bind_mounts
rc, out, err = self._run_findmnt(findmnt_path)
if rc != 0:
return bind_mounts
# find bind mounts, in case /etc/mtab is a symlink to /proc/mounts
for line in out.splitlines():
fields = line.split()
# fields[0] is the TARGET, fields[1] is the SOURCE
if len(fields) < 2:
continue
# bind mounts will have a [/directory_name] in the SOURCE column
if self.BIND_MOUNT_RE.match(fields[1]):
bind_mounts.add(fields[0])
return bind_mounts
def _mtab_entries(self):
mtab_file = '/etc/mtab'
if not os.path.exists(mtab_file):
mtab_file = '/proc/mounts'
mtab = get_file_content(mtab_file, '')
mtab_entries = []
for line in mtab.splitlines():
fields = line.split()
if len(fields) < 4:
continue
mtab_entries.append(fields)
return mtab_entries
@staticmethod
def _replace_octal_escapes_helper(match):
# Convert to integer using base8 and then convert to character
return chr(int(match.group()[1:], 8))
def _replace_octal_escapes(self, value):
return self.OCTAL_ESCAPE_RE.sub(self._replace_octal_escapes_helper, value)
def get_mount_info(self, mount, device, uuids):
mount_size = get_mount_size(mount)
# _udevadm_uuid is a fallback for versions of lsblk <= 2.23 that don't have --paths
# see _run_lsblk() above
# https://github.com/ansible/ansible/issues/36077
uuid = uuids.get(device, self._udevadm_uuid(device))
return mount_size, uuid
def get_mount_facts(self):
mounts = []
# gather system lists
bind_mounts = self._find_bind_mounts()
uuids = self._lsblk_uuid()
mtab_entries = self._mtab_entries()
# start threads to query each mount
results = {}
pool = ThreadPool(processes=min(len(mtab_entries), cpu_count()))
maxtime = globals().get('GATHER_TIMEOUT') or timeout.DEFAULT_GATHER_TIMEOUT
for fields in mtab_entries:
# Transform octal escape sequences
fields = [self._replace_octal_escapes(field) for field in fields]
device, mount, fstype, options = fields[0], fields[1], fields[2], fields[3]
if not device.startswith(('/', '\\')) and ':/' not in device or fstype == 'none':
continue
mount_info = {'mount': mount,
'device': device,
'fstype': fstype,
'options': options}
if mount in bind_mounts:
# only add if not already there, we might have a plain /etc/mtab
if not self.MTAB_BIND_MOUNT_RE.match(options):
mount_info['options'] += ",bind"
results[mount] = {'info': mount_info,
'extra': pool.apply_async(self.get_mount_info, (mount, device, uuids)),
'timelimit': time.time() + maxtime}
pool.close() # done with new workers, start gc
# wait for workers and get results
while results:
for mount in results:
res = results[mount]['extra']
if res.ready():
if res.successful():
mount_size, uuid = res.get()
if mount_size:
results[mount]['info'].update(mount_size)
results[mount]['info']['uuid'] = uuid or 'N/A'
else:
# give incomplete data
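                        # NOTE: per the multiprocessing docs, AsyncResult.get() reraises the
                        # worker's exception rather than returning it, so this call can raise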
errmsg = to_text(res.get())
self.module.warn("Error prevented getting extra info for mount %s: %s." % (mount, errmsg))
results[mount]['info']['note'] = 'Could not get extra information: %s.' % (errmsg)
mounts.append(results[mount]['info'])
del results[mount]
break
elif time.time() > results[mount]['timelimit']:
results[mount]['info']['note'] = 'Timed out while attempting to get extra information.'
mounts.append(results[mount]['info'])
del results[mount]
break
else:
# avoid cpu churn
time.sleep(0.1)
return {'mounts': mounts}
def get_device_links(self, link_dir):
if not os.path.exists(link_dir):
return {}
try:
retval = collections.defaultdict(set)
for entry in os.listdir(link_dir):
try:
target = os.path.basename(os.readlink(os.path.join(link_dir, entry)))
retval[target].add(entry)
except OSError:
continue
return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
except OSError:
return {}
def get_all_device_owners(self):
try:
retval = collections.defaultdict(set)
for path in glob.glob('/sys/block/*/slaves/*'):
elements = path.split('/')
device = elements[3]
target = elements[5]
retval[target].add(device)
return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
except OSError:
return {}
def get_all_device_links(self):
return {
'ids': self.get_device_links('/dev/disk/by-id'),
'uuids': self.get_device_links('/dev/disk/by-uuid'),
'labels': self.get_device_links('/dev/disk/by-label'),
'masters': self.get_all_device_owners(),
}
def get_holders(self, block_dev_dict, sysdir):
block_dev_dict['holders'] = []
if os.path.isdir(sysdir + "/holders"):
for folder in os.listdir(sysdir + "/holders"):
if not folder.startswith("dm-"):
continue
name = get_file_content(sysdir + "/holders/" + folder + "/dm/name")
if name:
block_dev_dict['holders'].append(name)
else:
block_dev_dict['holders'].append(folder)
def get_device_facts(self):
device_facts = {}
device_facts['devices'] = {}
lspci = self.module.get_bin_path('lspci')
if lspci:
rc, pcidata, err = self.module.run_command([lspci, '-D'], errors='surrogate_then_replace')
else:
pcidata = None
try:
block_devs = os.listdir("/sys/block")
except OSError:
return device_facts
devs_wwn = {}
try:
devs_by_id = os.listdir("/dev/disk/by-id")
except OSError:
pass
else:
for link_name in devs_by_id:
if link_name.startswith("wwn-"):
try:
wwn_link = os.readlink(os.path.join("/dev/disk/by-id", link_name))
except OSError:
continue
devs_wwn[os.path.basename(wwn_link)] = link_name[4:]
links = self.get_all_device_links()
device_facts['device_links'] = links
for block in block_devs:
virtual = 1
sysfs_no_links = 0
try:
path = os.readlink(os.path.join("/sys/block/", block))
except OSError:
e = sys.exc_info()[1]
if e.errno == errno.EINVAL:
path = block
sysfs_no_links = 1
else:
continue
sysdir = os.path.join("/sys/block", path)
if sysfs_no_links == 1:
for folder in os.listdir(sysdir):
if "device" in folder:
virtual = 0
break
d = {}
d['virtual'] = virtual
d['links'] = {}
for (link_type, link_values) in iteritems(links):
d['links'][link_type] = link_values.get(block, [])
diskname = os.path.basename(sysdir)
for key in ['vendor', 'model', 'sas_address', 'sas_device_handle']:
d[key] = get_file_content(sysdir + "/device/" + key)
sg_inq = self.module.get_bin_path('sg_inq')
# we can get NVMe device's serial number from /sys/block/<name>/device/serial
serial_path = "/sys/block/%s/device/serial" % (block)
if sg_inq:
device = "/dev/%s" % (block)
rc, drivedata, err = self.module.run_command([sg_inq, device])
if rc == 0:
serial = re.search(r"Unit serial number:\s+(\w+)", drivedata)
if serial:
d['serial'] = serial.group(1)
else:
serial = get_file_content(serial_path)
if serial:
d['serial'] = serial
for key, test in [('removable', '/removable'),
('support_discard', '/queue/discard_granularity'),
]:
d[key] = get_file_content(sysdir + test)
if diskname in devs_wwn:
d['wwn'] = devs_wwn[diskname]
d['partitions'] = {}
for folder in os.listdir(sysdir):
m = re.search("(" + diskname + r"[p]?\d+)", folder)
if m:
part = {}
partname = m.group(1)
part_sysdir = sysdir + "/" + partname
part['links'] = {}
for (link_type, link_values) in iteritems(links):
part['links'][link_type] = link_values.get(partname, [])
part['start'] = get_file_content(part_sysdir + "/start", 0)
part['sectors'] = get_file_content(part_sysdir + "/size", 0)
part['sectorsize'] = get_file_content(part_sysdir + "/queue/logical_block_size")
if not part['sectorsize']:
part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size", 512)
part['size'] = bytes_to_human((float(part['sectors']) * 512.0))
part['uuid'] = get_partition_uuid(partname)
self.get_holders(part, part_sysdir)
d['partitions'][partname] = part
d['rotational'] = get_file_content(sysdir + "/queue/rotational")
d['scheduler_mode'] = ""
scheduler = get_file_content(sysdir + "/queue/scheduler")
if scheduler is not None:
m = re.match(r".*?(\[(.*)\])", scheduler)
if m:
d['scheduler_mode'] = m.group(2)
d['sectors'] = get_file_content(sysdir + "/size")
if not d['sectors']:
d['sectors'] = 0
d['sectorsize'] = get_file_content(sysdir + "/queue/logical_block_size")
if not d['sectorsize']:
d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size", 512)
d['size'] = bytes_to_human(float(d['sectors']) * 512.0)
d['host'] = ""
# domains are numbered (0 to ffff), bus (0 to ff), slot (0 to 1f), and function (0 to 7).
m = re.match(r".+/([a-f0-9]{4}:[a-f0-9]{2}:[0|1][a-f0-9]\.[0-7])/", sysdir)
if m and pcidata:
pciid = m.group(1)
did = re.escape(pciid)
m = re.search("^" + did + r"\s(.*)$", pcidata, re.MULTILINE)
if m:
d['host'] = m.group(1)
self.get_holders(d, sysdir)
device_facts['devices'][diskname] = d
return device_facts
def get_uptime_facts(self):
uptime_facts = {}
uptime_file_content = get_file_content('/proc/uptime')
if uptime_file_content:
uptime_seconds_string = uptime_file_content.split(' ')[0]
uptime_facts['uptime_seconds'] = int(float(uptime_seconds_string))
return uptime_facts
def _find_mapper_device_name(self, dm_device):
dm_prefix = '/dev/dm-'
mapper_device = dm_device
if dm_device.startswith(dm_prefix):
dmsetup_cmd = self.module.get_bin_path('dmsetup', True)
mapper_prefix = '/dev/mapper/'
rc, dm_name, err = self.module.run_command("%s info -C --noheadings -o name %s" % (dmsetup_cmd, dm_device))
if rc == 0:
mapper_device = mapper_prefix + dm_name.rstrip()
return mapper_device
def get_lvm_facts(self):
""" Get LVM Facts if running as root and lvm utils are available """
lvm_facts = {}
if os.getuid() == 0 and self.module.get_bin_path('vgs'):
lvm_util_options = '--noheadings --nosuffix --units g --separator ,'
vgs_path = self.module.get_bin_path('vgs')
# vgs fields: VG #PV #LV #SN Attr VSize VFree
vgs = {}
if vgs_path:
rc, vg_lines, err = self.module.run_command('%s %s' % (vgs_path, lvm_util_options))
for vg_line in vg_lines.splitlines():
items = vg_line.strip().split(',')
vgs[items[0]] = {'size_g': items[-2],
'free_g': items[-1],
'num_lvs': items[2],
'num_pvs': items[1]}
lvs_path = self.module.get_bin_path('lvs')
# lvs fields:
# LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lvs = {}
if lvs_path:
rc, lv_lines, err = self.module.run_command('%s %s' % (lvs_path, lvm_util_options))
for lv_line in lv_lines.splitlines():
items = lv_line.strip().split(',')
lvs[items[0]] = {'size_g': items[3], 'vg': items[1]}
pvs_path = self.module.get_bin_path('pvs')
# pvs fields: PV VG #Fmt #Attr PSize PFree
pvs = {}
if pvs_path:
rc, pv_lines, err = self.module.run_command('%s %s' % (pvs_path, lvm_util_options))
for pv_line in pv_lines.splitlines():
items = pv_line.strip().split(',')
pvs[self._find_mapper_device_name(items[0])] = {
'size_g': items[4],
'free_g': items[5],
'vg': items[1]}
lvm_facts['lvm'] = {'lvs': lvs, 'vgs': vgs, 'pvs': pvs}
return lvm_facts
class LinuxHardwareCollector(HardwareCollector):
_platform = 'Linux'
_fact_class = LinuxHardware
required_facts = set(['platform'])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,527 |
Ansible modifies internal cache of DNF - ansible can crash anytime after release of a new version of DNF
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ansible in dnf.py uses a private attribute (in this case an internal cache for security filters - base._update_security_filters). Using private attributes can result in random failures, because they can be removed without announcement since they are not part of the API.
```python
filters = []
if self.bugfix:
    key = {'advisory_type__eq': 'bugfix'}
    filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
    key = {'advisory_type__eq': 'security'}
    filters.append(base.sack.query().upgrades().filter(**key))
if filters:
    base._update_security_filters = filters
```
Also, I am not sure if it works as expected (there is a difference between dnf upgrade and dnf upgrade-minimal) and I don't know what was expected from ansible.
I also propose a new API in dnf that could resolve the issue (https://github.com/rpm-software-management/dnf/pull/1715). Any comment or suggestion is welcome.
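For illustration only, here is a minimal sketch of computing the same advisory-filtered upgrade candidates through public query API calls alone; the standalone `base` object and the final print loop are assumptions for the sketch, not the fix that was merged:
```python
import dnf

# A standalone sketch: build the advisory-filtered upgrade queries with
# public hawkey query calls only; no private attribute such as
# _update_security_filters is touched.
base = dnf.Base()
base.read_all_repos()
base.fill_sack()

upgrades = base.sack.query().upgrades()
security = upgrades.filter(advisory_type__eq='security')
bugfix = upgrades.filter(advisory_type__eq='bugfix')

for pkg in security.union(bugfix):
    print(pkg)  # candidates a security/bugfix-limited upgrade could act on
```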
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
dnf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/73527
|
https://github.com/ansible/ansible/pull/73529
|
ba2b1a6bf6f4a1d5ef975789caa380e72b0a4e77
|
d9183b8df57ceb8aa64df99326c3b5cf8aec295b
| 2021-02-08T10:59:49Z |
python
| 2021-06-01T19:32:38Z |
changelogs/fragments/dnf-security.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,527 |
Ansible modifies internal cache of DNF - ansible can crash anytime after release of a new version of DNF
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ansible in dnf.py uses a private attribute (in this case an internal cache for security filters - base._update_security_filters). Using private attributes can result in random failures, because they can be removed without announcement since they are not part of the API.
```python
filters = []
if self.bugfix:
    key = {'advisory_type__eq': 'bugfix'}
    filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
    key = {'advisory_type__eq': 'security'}
    filters.append(base.sack.query().upgrades().filter(**key))
if filters:
    base._update_security_filters = filters
```
Also, I am not sure if it works as expected (there is a difference between dnf upgrade and dnf upgrade-minimal) and I don't know what was expected from ansible.
I also propose a new API in dnf that could resolve the issue (https://github.com/rpm-software-management/dnf/pull/1715). Any comment or suggestion is welcome.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
dnf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/73527
|
https://github.com/ansible/ansible/pull/73529
|
ba2b1a6bf6f4a1d5ef975789caa380e72b0a4e77
|
d9183b8df57ceb8aa64df99326c3b5cf8aec295b
| 2021-02-08T10:59:49Z |
python
| 2021-06-01T19:32:38Z |
lib/ansible/modules/dnf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
  - Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name>=1.0)
- You can also pass an absolute path for a binary which is provided by the package to install.
See examples for more information.
required: true
aliases:
- pkg
type: list
elements: str
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
type: str
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
- Default is C(None), however in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, then C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent)
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
skip_broken:
description:
      - Skip packages with broken dependencies (depsolve) that are causing problems.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
- Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
      - This only applies if using an https url as the source of the rpm, e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
      - This should only be set to C(no) when used on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
allow_downgrade:
description:
      - Specify whether the named package and version is allowed to downgrade
        a possibly already-installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the I(yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If C(yes) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
cacheonly:
description:
- Tells dnf to run entirely from system cache; does not download or update metadata.
type: bool
default: "no"
version_added: "2.12"
notes:
  - When used with a `loop:` each package will be processed individually; it is much more efficient to pass the list directly to the `name` option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
  - for the autoremove option you need dnf >= 2.0.1
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
dnf:
name: httpd
state: latest
- name: Install Apache >= 2.4
dnf:
name: httpd>=2.4
state: present
- name: Install the latest version of Apache and MariaDB
dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
dnf:
name: "*"
state: latest
- name: Install the nginx rpm from a remote repo
dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install Package based upon the file it provides
dnf:
name: /usr/bin/cowsay
state: present
- name: Install the 'Development tools' package group
dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
dnf:
name: '@postgresql/client'
state: present
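# A sketch of combining state=latest with the security filter documented
# above; the task name is illustrative.
- name: Upgrade all packages, applying security-related updates only
  dnf:
    name: '*'
    state: latest
    security: yes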
'''
import os
import re
import sys
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.six import PY2, text_type
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
result['nevra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(
**result)
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
rpm_arch_re = re.compile(r'(.*)\.(.*)')
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
rpm_arch_match = rpm_arch_re.match(packagename)
if rpm_arch_match:
nevr, arch = rpm_arch_match.groups()
if arch in redhat_rpm_arches:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_re.match(packagename).groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
def _ensure_dnf(self):
if HAS_DNF:
return
system_interpreters = ['/usr/libexec/platform-python',
'/usr/bin/python3',
'/usr/bin/python2',
'/usr/bin/python']
if not has_respawned():
# probe well-known system Python locations for accessible bindings, favoring py3
interpreter = probe_interpreters_for_module(system_interpreters, 'dnf')
if interpreter:
# respawn under the interpreter where the bindings should be found
respawn_module(interpreter)
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the "
"correct ansible_python_interpreter. (attempted {2})"
.format(sys.executable, sys.version.replace('\n', ''), system_interpreters),
results=[]
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/'):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle different DNF versions immutable mutable datatypes and
# dnf v1/v2/v3
#
# In DNF < 3.0 are lists, and modifying them works
# In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work
# In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
if self.cacheonly:
conf.cacheonly = True
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
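            # NOTE: _update_security_filters is a private dnf attribute (an
            # internal cache for security filters); as the issue text above
            # notes, relying on it can break without warning across dnf releases.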
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
if installed.filter(name=pkg):
return True
else:
return False
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 1:
return True
else:
return False
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
else: # Nothing to do, report back
pass
            elif is_installed:  # A potentially older (or same) version is installed
if upgrade:
self.base.upgrade(pkg_spec)
else: # Nothing to do, report back
pass
else: # The package is not installed, simply install it
self.base.install(pkg_spec)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
self.base.read_all_repos()
available = self.base.sack.query().available()
# Search in file
files_filter = available.filter(file=filepath)
# And Search in provides
pkg_spec = files_filter.union(available.filter(provides=filepath)).run()
if pkg_spec:
return pkg_spec[0].name
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith('/'):
# like "dnf install /usr/bin/vi"
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
elif name.startswith("@") or ('/' in name):
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
if isinstance(to_text(pkg), text_type):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg)
else:
self.base.package_install(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
                    # best effort causes the latest package to be installed
                    # even if it was not previously installed
self.base.conf.best = True
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query(
sack=self.base.sack).installed().run()
for pkg in installed_pkg:
self.base.remove(str(pkg))
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}'.format(package)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
self.base.do_transaction()
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.exit_json(**response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
@staticmethod
def has_dnf():
return HAS_DNF
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,971 |
meta: end_play doesn't work with serial: 1
|
### Summary
I want to do an action only once on a cluster; I execute a playbook with three hosts and serial: 1.
The first host executes the action with meta: end_play at the end. The playbook continues for the other hosts, but I see META: ending play with verbose -vv in the runtime output.
```yaml
- name: Backup postgresql data and push to aws with barman-cloud
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
This is the end of the task I want to execute on one host
```yaml
- name: Remove file config file
become_user: root
file:
path: "{{ item }}"
state: absent
with_items:
- /tmp/config
- /tmp/credentials
- /etc/barman.conf
- /etc/barman.d/
- name: Backup succeeded - End play
meta: end_play
```
```
TASK [Remove file config file] ***************************************************************
changed: [int_postgres1] => (item=/tmp/config) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/config", "path": "/tmp/config", "state": "absent"}
changed: [int_postgres1] => (item=/tmp/credentials) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/credentials", "path": "/tmp/credentials", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.conf) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.conf", "path": "/etc/barman.conf", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.d/) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.d/", "path": "/etc/barman.d/", "state": "absent"}
META: ending play
PLAY [Backup postgresql data and push to aws with barman-cloud] ******************************
Friday 19 March 2021 20:20:03 +0100 (0:00:01.171) 0:00:18.459 **********
TASK [Gathering Facts] ***********************************************************************
ok: [int_postgres2]
META: ran handlers
Friday 19 March 2021 20:20:04 +0100 (0:00:01.490) 0:00:19.950 **********
Friday 19 March 2021 20:20:05 +0100 (0:00:00.059) 0:00:20.010 **********
TASK [Check if postgres server is replica] ***************************************************
changed: [int_postgres2] => {"changed": true, "cmd": "curl -s http://***:8008/patroni | jq -r .role", "delta": "0:00:00.017332", "end": "2021-03-19 20:20:05.290983", "rc": 0, "start": "2021-03-19 20:20:05.273651", "stderr": "", "stderr_lines": [], "stdout": "replica", "stdout_lines": ["replica"]}
META: end_host conditional evaluated to false, continuing execution for int_postgres2
Friday 19 March 2021 20:20:05 +0100 (0:00:00.344) 0:00:20.355 **********
```
EDIT: Also, run_once doesn't work with serial: 1; all hosts will execute the command in the same playbook.
I removed serial: 1 and added run_once to all commands I wanted to execute only once, but I keep this issue open because I'm not sure this is the expected behaviour for meta and serial: 1.
Tell me if I am wrong.
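A minimal reproduction sketch (the hosts pattern and debug task are illustrative, not taken from my real playbook):
```yaml
- name: Reproduce end_play with serial
  hosts: all          # e.g. three hosts in the inventory
  gather_facts: false
  serial: 1
  tasks:
    - name: Show which serial batch is running
      debug:
        msg: "running on {{ inventory_hostname }}"

    - name: Should stop the play for every remaining batch
      meta: end_play
```
With serial: 1 the debug task still runs on the second and third hosts after the first batch hits meta: end_play.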
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.10.3
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
### Expected Results
The end_play meta task stops all hosts even if serial: 1 is specified.
### Actual Results
```console (paste below)
The playbook continues and doesn't stop with meta: end_play
```
|
https://github.com/ansible/ansible/issues/73971
|
https://github.com/ansible/ansible/pull/74332
|
fe20546d36d30e50d6a614ed394c861f50190d46
|
e201b542be23bccd2418eab661cdf5454af3bea8
| 2021-03-19T19:37:10Z |
python
| 2021-06-03T07:26:22Z |
changelogs/fragments/73971-non-batch-end_play.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,971 |
meta: end_play doesn't work with serial: 1
|
### Summary
I want to do an action only once on a cluster; I execute a playbook with three hosts and serial: 1.
The first host executes the action with meta: end_play at the end. The playbook continues for the other hosts, but I see META: ending play with verbose -vv in the runtime output.
```yaml
- name: Backup postgresql data and push to aws with barman-cloud
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
This is the end of the task I want to execute on one host
```yaml
- name: Remove file config file
become_user: root
file:
path: "{{ item }}"
state: absent
with_items:
- /tmp/config
- /tmp/credentials
- /etc/barman.conf
- /etc/barman.d/
- name: Backup succeeded - End play
meta: end_play
```
```
TASK [Remove file config file] ***************************************************************
changed: [int_postgres1] => (item=/tmp/config) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/config", "path": "/tmp/config", "state": "absent"}
changed: [int_postgres1] => (item=/tmp/credentials) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/credentials", "path": "/tmp/credentials", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.conf) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.conf", "path": "/etc/barman.conf", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.d/) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.d/", "path": "/etc/barman.d/", "state": "absent"}
META: ending play
PLAY [Backup postgresql data and push to aws with barman-cloud] ******************************
Friday 19 March 2021 20:20:03 +0100 (0:00:01.171) 0:00:18.459 **********
TASK [Gathering Facts] ***********************************************************************
ok: [int_postgres2]
META: ran handlers
Friday 19 March 2021 20:20:04 +0100 (0:00:01.490) 0:00:19.950 **********
Friday 19 March 2021 20:20:05 +0100 (0:00:00.059) 0:00:20.010 **********
TASK [Check if postgres server is replica] ***************************************************
changed: [int_postgres2] => {"changed": true, "cmd": "curl -s http://***:8008/patroni | jq -r .role", "delta": "0:00:00.017332", "end": "2021-03-19 20:20:05.290983", "rc": 0, "start": "2021-03-19 20:20:05.273651", "stderr": "", "stderr_lines": [], "stdout": "replica", "stdout_lines": ["replica"]}
META: end_host conditional evaluated to false, continuing execution for int_postgres2
Friday 19 March 2021 20:20:05 +0100 (0:00:00.344) 0:00:20.355 **********
```
EDIT: Also, run_once doesn't work with serial: 1; all hosts will execute the command in the same playbook.
I removed serial: 1 and added run_once to all commands I wanted to execute only once, but I keep this issue open because I'm not sure this is the expected behaviour for meta and serial: 1.
Tell me if I am wrong.
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.10.3
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
### Expected Results
The end_play meta task stops all hosts even if serial: 1 is specified.
### Actual Results
```console (paste below)
The playbook continues and doesn't stop with meta: end_play
```
|
https://github.com/ansible/ansible/issues/73971
|
https://github.com/ansible/ansible/pull/74332
|
fe20546d36d30e50d6a614ed394c861f50190d46
|
e201b542be23bccd2418eab661cdf5454af3bea8
| 2021-03-19T19:37:10Z |
python
| 2021-06-03T07:26:22Z |
lib/ansible/executor/play_iterator.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
from ansible import constants as C
from ansible.module_utils.six import iteritems
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.playbook.block import Block
from ansible.playbook.task import Task
from ansible.utils.display import Display
display = Display()
__all__ = ['PlayIterator']
class HostState:
def __init__(self, blocks):
self._blocks = blocks[:]
self.cur_block = 0
self.cur_regular_task = 0
self.cur_rescue_task = 0
self.cur_always_task = 0
self.run_state = PlayIterator.ITERATING_SETUP
self.fail_state = PlayIterator.FAILED_NONE
self.pending_setup = False
self.tasks_child_state = None
self.rescue_child_state = None
self.always_child_state = None
self.did_rescue = False
self.did_start_at_task = False
def __repr__(self):
return "HostState(%r)" % self._blocks
def __str__(self):
def _run_state_to_string(n):
states = ["ITERATING_SETUP", "ITERATING_TASKS", "ITERATING_RESCUE", "ITERATING_ALWAYS", "ITERATING_COMPLETE"]
try:
return states[n]
except IndexError:
return "UNKNOWN STATE"
def _failed_state_to_string(n):
states = {1: "FAILED_SETUP", 2: "FAILED_TASKS", 4: "FAILED_RESCUE", 8: "FAILED_ALWAYS"}
if n == 0:
return "FAILED_NONE"
else:
ret = []
for i in (1, 2, 4, 8):
if n & i:
ret.append(states[i])
return "|".join(ret)
return ("HOST STATE: block=%d, task=%d, rescue=%d, always=%d, run_state=%s, fail_state=%s, pending_setup=%s, tasks child state? (%s), "
"rescue child state? (%s), always child state? (%s), did rescue? %s, did start at task? %s" % (
self.cur_block,
self.cur_regular_task,
self.cur_rescue_task,
self.cur_always_task,
_run_state_to_string(self.run_state),
_failed_state_to_string(self.fail_state),
self.pending_setup,
self.tasks_child_state,
self.rescue_child_state,
self.always_child_state,
self.did_rescue,
self.did_start_at_task,
))
def __eq__(self, other):
if not isinstance(other, HostState):
return False
for attr in ('_blocks', 'cur_block', 'cur_regular_task', 'cur_rescue_task', 'cur_always_task',
'run_state', 'fail_state', 'pending_setup',
'tasks_child_state', 'rescue_child_state', 'always_child_state'):
if getattr(self, attr) != getattr(other, attr):
return False
return True
def get_current_block(self):
return self._blocks[self.cur_block]
def copy(self):
new_state = HostState(self._blocks)
new_state.cur_block = self.cur_block
new_state.cur_regular_task = self.cur_regular_task
new_state.cur_rescue_task = self.cur_rescue_task
new_state.cur_always_task = self.cur_always_task
new_state.run_state = self.run_state
new_state.fail_state = self.fail_state
new_state.pending_setup = self.pending_setup
new_state.did_rescue = self.did_rescue
new_state.did_start_at_task = self.did_start_at_task
if self.tasks_child_state is not None:
new_state.tasks_child_state = self.tasks_child_state.copy()
if self.rescue_child_state is not None:
new_state.rescue_child_state = self.rescue_child_state.copy()
if self.always_child_state is not None:
new_state.always_child_state = self.always_child_state.copy()
return new_state
class PlayIterator:
# the primary running states for the play iteration
ITERATING_SETUP = 0
ITERATING_TASKS = 1
ITERATING_RESCUE = 2
ITERATING_ALWAYS = 3
ITERATING_COMPLETE = 4
# the failure states for the play iteration, which are powers
# of 2 as they may be or'ed together in certain circumstances
FAILED_NONE = 0
FAILED_SETUP = 1
FAILED_TASKS = 2
FAILED_RESCUE = 4
FAILED_ALWAYS = 8
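    # Editor's note (not in the original source): because these failure
    # states are powers of two they compose with bitwise OR, e.g. a host
    # that failed in both the tasks and rescue portions carries
    # fail_state == FAILED_TASKS | FAILED_RESCUE == 6, and membership is
    # checked with a mask such as `state.fail_state & self.FAILED_RESCUE`.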
def __init__(self, inventory, play, play_context, variable_manager, all_vars, start_at_done=False):
self._play = play
self._blocks = []
self._variable_manager = variable_manager
# Default options to gather
gather_subset = self._play.gather_subset
gather_timeout = self._play.gather_timeout
fact_path = self._play.fact_path
setup_block = Block(play=self._play)
# Gathering facts with run_once would copy the facts from one host to
# the others.
setup_block.run_once = False
setup_task = Task(block=setup_block)
setup_task.action = 'gather_facts'
setup_task.name = 'Gathering Facts'
setup_task.args = {
'gather_subset': gather_subset,
}
# Unless play is specifically tagged, gathering should 'always' run
if not self._play.tags:
setup_task.tags = ['always']
if gather_timeout:
setup_task.args['gather_timeout'] = gather_timeout
if fact_path:
setup_task.args['fact_path'] = fact_path
setup_task.set_loader(self._play._loader)
# short circuit fact gathering if the entire playbook is conditional
if self._play._included_conditional is not None:
setup_task.when = self._play._included_conditional[:]
setup_block.block = [setup_task]
setup_block = setup_block.filter_tagged_tasks(all_vars)
self._blocks.append(setup_block)
for block in self._play.compile():
new_block = block.filter_tagged_tasks(all_vars)
if new_block.has_tasks():
self._blocks.append(new_block)
self._host_states = {}
start_at_matched = False
batch = inventory.get_hosts(self._play.hosts, order=self._play.order)
self.batch_size = len(batch)
for host in batch:
self._host_states[host.name] = HostState(blocks=self._blocks)
# if we're looking to start at a specific task, iterate through
# the tasks for this host until we find the specified task
if play_context.start_at_task is not None and not start_at_done:
while True:
(s, task) = self.get_next_task_for_host(host, peek=True)
if s.run_state == self.ITERATING_COMPLETE:
break
if task.name == play_context.start_at_task or (task.name and fnmatch.fnmatch(task.name, play_context.start_at_task)) or \
task.get_name() == play_context.start_at_task or fnmatch.fnmatch(task.get_name(), play_context.start_at_task):
start_at_matched = True
break
else:
self.get_next_task_for_host(host)
# finally, reset the host's state to ITERATING_SETUP
if start_at_matched:
self._host_states[host.name].did_start_at_task = True
self._host_states[host.name].run_state = self.ITERATING_SETUP
if start_at_matched:
# we have our match, so clear the start_at_task field on the
# play context to flag that we've started at a task (and future
# plays won't try to advance)
play_context.start_at_task = None
def get_host_state(self, host):
# Since we're using the PlayIterator to carry forward failed hosts,
# in the event that a previous host was not in the current inventory
# we create a stub state for it now
if host.name not in self._host_states:
self._host_states[host.name] = HostState(blocks=[])
return self._host_states[host.name].copy()
def cache_block_tasks(self, block):
# now a noop, we've changed the way we do caching and finding of
# original task entries, but just in case any 3rd party strategies
# are using this we're leaving it here for now
return
def get_next_task_for_host(self, host, peek=False):
display.debug("getting the next task for host %s" % host.name)
s = self.get_host_state(host)
task = None
if s.run_state == self.ITERATING_COMPLETE:
display.debug("host %s is done iterating, returning" % host.name)
return (s, None)
(s, task) = self._get_next_task_from_state(s, host=host)
if not peek:
self._host_states[host.name] = s
display.debug("done getting next task for host %s" % host.name)
display.debug(" ^ task is: %s" % task)
display.debug(" ^ state is: %s" % s)
return (s, task)
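    # Editor's note: with peek=True the advanced state is computed but never
    # written back to self._host_states, which is how __init__ above scans
    # ahead for --start-at-task without actually consuming tasks for the host.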
def _get_next_task_from_state(self, state, host):
task = None
# try and find the next task, given the current state.
while True:
# try to get the current block from the list of blocks, and
# if we run past the end of the list we know we're done with
# this block
try:
block = state._blocks[state.cur_block]
except IndexError:
state.run_state = self.ITERATING_COMPLETE
return (state, None)
if state.run_state == self.ITERATING_SETUP:
# First, we check to see if we were pending setup. If not, this is
# the first trip through ITERATING_SETUP, so we set the pending_setup
# flag and try to determine if we do in fact want to gather facts for
# the specified host.
if not state.pending_setup:
state.pending_setup = True
# Gather facts if the default is 'smart' and we have not yet
# done it for this host; or if 'explicit' and the play sets
# gather_facts to True; or if 'implicit' and the play does
# NOT explicitly set gather_facts to False.
gathering = C.DEFAULT_GATHERING
implied = self._play.gather_facts is None or boolean(self._play.gather_facts, strict=False)
if (gathering == 'implicit' and implied) or \
(gathering == 'explicit' and boolean(self._play.gather_facts, strict=False)) or \
(gathering == 'smart' and implied and not (self._variable_manager._fact_cache.get(host.name, {}).get('_ansible_facts_gathered', False))):
# The setup block is always self._blocks[0], as we inject it
# during the play compilation in __init__ above.
setup_block = self._blocks[0]
if setup_block.has_tasks() and len(setup_block.block) > 0:
task = setup_block.block[0]
else:
# This is the second trip through ITERATING_SETUP, so we clear
# the flag and move onto the next block in the list while setting
# the run state to ITERATING_TASKS
state.pending_setup = False
state.run_state = self.ITERATING_TASKS
if not state.did_start_at_task:
state.cur_block += 1
state.cur_regular_task = 0
state.cur_rescue_task = 0
state.cur_always_task = 0
state.tasks_child_state = None
state.rescue_child_state = None
state.always_child_state = None
elif state.run_state == self.ITERATING_TASKS:
# clear the pending setup flag, since we're past that and it didn't fail
if state.pending_setup:
state.pending_setup = False
# First, we check for a child task state that is not failed, and if we
# have one recurse into it for the next task. If we're done with the child
# state, we clear it and drop back to getting the next task from the list.
if state.tasks_child_state:
(state.tasks_child_state, task) = self._get_next_task_from_state(state.tasks_child_state, host=host)
if self._check_failed_state(state.tasks_child_state):
# failed child state, so clear it and move into the rescue portion
state.tasks_child_state = None
self._set_failed_state(state)
else:
# get the next task recursively
if task is None or state.tasks_child_state.run_state == self.ITERATING_COMPLETE:
# we're done with the child state, so clear it and continue
# back to the top of the loop to get the next task
state.tasks_child_state = None
continue
else:
# First here, we check to see if we've failed anywhere down the chain
# of states we have, and if so we move onto the rescue portion. Otherwise,
# we check to see if we've moved past the end of the list of tasks. If so,
# we move into the always portion of the block, otherwise we get the next
# task from the list.
if self._check_failed_state(state):
state.run_state = self.ITERATING_RESCUE
elif state.cur_regular_task >= len(block.block):
state.run_state = self.ITERATING_ALWAYS
else:
task = block.block[state.cur_regular_task]
# if the current task is actually a child block, create a child
# state for us to recurse into on the next pass
if isinstance(task, Block):
state.tasks_child_state = HostState(blocks=[task])
state.tasks_child_state.run_state = self.ITERATING_TASKS
# since we've created the child state, clear the task
# so we can pick up the child state on the next pass
task = None
state.cur_regular_task += 1
elif state.run_state == self.ITERATING_RESCUE:
# The process here is identical to ITERATING_TASKS, except instead
# we move into the always portion of the block.
if host.name in self._play._removed_hosts:
self._play._removed_hosts.remove(host.name)
if state.rescue_child_state:
(state.rescue_child_state, task) = self._get_next_task_from_state(state.rescue_child_state, host=host)
if self._check_failed_state(state.rescue_child_state):
state.rescue_child_state = None
self._set_failed_state(state)
else:
if task is None or state.rescue_child_state.run_state == self.ITERATING_COMPLETE:
state.rescue_child_state = None
continue
else:
if state.fail_state & self.FAILED_RESCUE == self.FAILED_RESCUE:
state.run_state = self.ITERATING_ALWAYS
elif state.cur_rescue_task >= len(block.rescue):
if len(block.rescue) > 0:
state.fail_state = self.FAILED_NONE
state.run_state = self.ITERATING_ALWAYS
state.did_rescue = True
else:
task = block.rescue[state.cur_rescue_task]
if isinstance(task, Block):
state.rescue_child_state = HostState(blocks=[task])
state.rescue_child_state.run_state = self.ITERATING_TASKS
task = None
state.cur_rescue_task += 1
elif state.run_state == self.ITERATING_ALWAYS:
# And again, the process here is identical to ITERATING_TASKS, except
# instead we either move onto the next block in the list, or we set the
# run state to ITERATING_COMPLETE in the event of any errors, or when we
# have hit the end of the list of blocks.
if state.always_child_state:
(state.always_child_state, task) = self._get_next_task_from_state(state.always_child_state, host=host)
if self._check_failed_state(state.always_child_state):
state.always_child_state = None
self._set_failed_state(state)
else:
if task is None or state.always_child_state.run_state == self.ITERATING_COMPLETE:
state.always_child_state = None
continue
else:
if state.cur_always_task >= len(block.always):
if state.fail_state != self.FAILED_NONE:
state.run_state = self.ITERATING_COMPLETE
else:
state.cur_block += 1
state.cur_regular_task = 0
state.cur_rescue_task = 0
state.cur_always_task = 0
state.run_state = self.ITERATING_TASKS
state.tasks_child_state = None
state.rescue_child_state = None
state.always_child_state = None
state.did_rescue = False
else:
task = block.always[state.cur_always_task]
if isinstance(task, Block):
state.always_child_state = HostState(blocks=[task])
state.always_child_state.run_state = self.ITERATING_TASKS
task = None
state.cur_always_task += 1
elif state.run_state == self.ITERATING_COMPLETE:
return (state, None)
# if something above set the task, break out of the loop now
if task:
break
return (state, task)
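    # Editor's sketch of the state machine implemented above: each block
    # normally walks ITERATING_SETUP -> ITERATING_TASKS -> ITERATING_ALWAYS,
    # detours through ITERATING_RESCUE when _check_failed_state() trips, and
    # ends in ITERATING_COMPLETE once the block list is exhausted or an
    # unrescued failure survives the always portion.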
def _set_failed_state(self, state):
if state.run_state == self.ITERATING_SETUP:
state.fail_state |= self.FAILED_SETUP
state.run_state = self.ITERATING_COMPLETE
elif state.run_state == self.ITERATING_TASKS:
if state.tasks_child_state is not None:
state.tasks_child_state = self._set_failed_state(state.tasks_child_state)
else:
state.fail_state |= self.FAILED_TASKS
if state._blocks[state.cur_block].rescue:
state.run_state = self.ITERATING_RESCUE
elif state._blocks[state.cur_block].always:
state.run_state = self.ITERATING_ALWAYS
else:
state.run_state = self.ITERATING_COMPLETE
elif state.run_state == self.ITERATING_RESCUE:
if state.rescue_child_state is not None:
state.rescue_child_state = self._set_failed_state(state.rescue_child_state)
else:
state.fail_state |= self.FAILED_RESCUE
if state._blocks[state.cur_block].always:
state.run_state = self.ITERATING_ALWAYS
else:
state.run_state = self.ITERATING_COMPLETE
elif state.run_state == self.ITERATING_ALWAYS:
if state.always_child_state is not None:
state.always_child_state = self._set_failed_state(state.always_child_state)
else:
state.fail_state |= self.FAILED_ALWAYS
state.run_state = self.ITERATING_COMPLETE
return state
def mark_host_failed(self, host):
s = self.get_host_state(host)
display.debug("marking host %s failed, current state: %s" % (host, s))
s = self._set_failed_state(s)
display.debug("^ failed state is now: %s" % s)
self._host_states[host.name] = s
self._play._removed_hosts.append(host.name)
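    # Editor's note: the host is appended to _play._removed_hosts here; if a
    # rescue section later runs for it, _get_next_task_from_state() removes
    # it from that list again so the host rejoins the play.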
def get_failed_hosts(self):
return dict((host, True) for (host, state) in iteritems(self._host_states) if self._check_failed_state(state))
def _check_failed_state(self, state):
if state is None:
return False
elif state.run_state == self.ITERATING_RESCUE and self._check_failed_state(state.rescue_child_state):
return True
elif state.run_state == self.ITERATING_ALWAYS and self._check_failed_state(state.always_child_state):
return True
elif state.fail_state != self.FAILED_NONE:
if state.run_state == self.ITERATING_RESCUE and state.fail_state & self.FAILED_RESCUE == 0:
return False
elif state.run_state == self.ITERATING_ALWAYS and state.fail_state & self.FAILED_ALWAYS == 0:
return False
else:
return not (state.did_rescue and state.fail_state & self.FAILED_ALWAYS == 0)
elif state.run_state == self.ITERATING_TASKS and self._check_failed_state(state.tasks_child_state):
cur_block = state._blocks[state.cur_block]
if len(cur_block.rescue) > 0 and state.fail_state & self.FAILED_RESCUE == 0:
return False
else:
return True
return False
def is_failed(self, host):
s = self.get_host_state(host)
return self._check_failed_state(s)
def get_active_state(self, state):
'''
Finds the active state, recursively if necessary when there are child states.
'''
if state.run_state == self.ITERATING_TASKS and state.tasks_child_state is not None:
return self.get_active_state(state.tasks_child_state)
elif state.run_state == self.ITERATING_RESCUE and state.rescue_child_state is not None:
return self.get_active_state(state.rescue_child_state)
elif state.run_state == self.ITERATING_ALWAYS and state.always_child_state is not None:
return self.get_active_state(state.always_child_state)
return state
def is_any_block_rescuing(self, state):
'''
Given the current HostState state, determines if the current block, or any child blocks,
are in rescue mode.
'''
if state.run_state == self.ITERATING_RESCUE:
return True
if state.tasks_child_state is not None:
return self.is_any_block_rescuing(state.tasks_child_state)
return False
def get_original_task(self, host, task):
# now a noop because we've changed the way we do caching
return (None, None)
def _insert_tasks_into_state(self, state, task_list):
# if we've failed at all, or if the task list is empty, just return the current state
if state.fail_state != self.FAILED_NONE and state.run_state not in (self.ITERATING_RESCUE, self.ITERATING_ALWAYS) or not task_list:
return state
if state.run_state == self.ITERATING_TASKS:
if state.tasks_child_state:
state.tasks_child_state = self._insert_tasks_into_state(state.tasks_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy()
before = target_block.block[:state.cur_regular_task]
after = target_block.block[state.cur_regular_task:]
target_block.block = before + task_list + after
state._blocks[state.cur_block] = target_block
elif state.run_state == self.ITERATING_RESCUE:
if state.rescue_child_state:
state.rescue_child_state = self._insert_tasks_into_state(state.rescue_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy()
before = target_block.rescue[:state.cur_rescue_task]
after = target_block.rescue[state.cur_rescue_task:]
target_block.rescue = before + task_list + after
state._blocks[state.cur_block] = target_block
elif state.run_state == self.ITERATING_ALWAYS:
if state.always_child_state:
state.always_child_state = self._insert_tasks_into_state(state.always_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy()
before = target_block.always[:state.cur_always_task]
after = target_block.always[state.cur_always_task:]
target_block.always = before + task_list + after
state._blocks[state.cur_block] = target_block
return state
def add_tasks(self, host, task_list):
self._host_states[host.name] = self._insert_tasks_into_state(self.get_host_state(host), task_list)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,971 |
meta: end_play doesn't work with serial: 1
|
### Summary
I want to perform an action only once on a cluster, so I execute a playbook against three hosts with serial: 1.
The first host executes the action, ending with meta: end_play. The playbook nevertheless continues for the other hosts, although I can see META: ending play in the -vv verbose output at runtime.
```yaml
- name: Backup postgresql data and push to aws with barman-cloud
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
This is the end of the task list I want to execute on one host:
```yaml
- name: Remove file config file
become_user: root
file:
path: "{{ item }}"
state: absent
with_items:
- /tmp/config
- /tmp/credentials
- /etc/barman.conf
- /etc/barman.d/
- name: Backup succeeded - End play
meta: end_play
```
```
TASK [Remove file config file] ***************************************************************
changed: [int_postgres1] => (item=/tmp/config) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/config", "path": "/tmp/config", "state": "absent"}
changed: [int_postgres1] => (item=/tmp/credentials) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/credentials", "path": "/tmp/credentials", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.conf) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.conf", "path": "/etc/barman.conf", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.d/) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.d/", "path": "/etc/barman.d/", "state": "absent"}
META: ending play
PLAY [Backup postgresql data and push to aws with barman-cloud] ******************************
Friday 19 March 2021 20:20:03 +0100 (0:00:01.171) 0:00:18.459 **********
TASK [Gathering Facts] ***********************************************************************
ok: [int_postgres2]
META: ran handlers
Friday 19 March 2021 20:20:04 +0100 (0:00:01.490) 0:00:19.950 **********
Friday 19 March 2021 20:20:05 +0100 (0:00:00.059) 0:00:20.010 **********
TASK [Check if postgres server is replica] ***************************************************
changed: [int_postgres2] => {"changed": true, "cmd": "curl -s http://***:8008/patroni | jq -r .role", "delta": "0:00:00.017332", "end": "2021-03-19 20:20:05.290983", "rc": 0, "start": "2021-03-19 20:20:05.273651", "stderr": "", "stderr_lines": [], "stdout": "replica", "stdout_lines": ["replica"]}
META: end_host conditional evaluated to false, continuing execution for int_postgres2
Friday 19 March 2021 20:20:05 +0100 (0:00:00.344) 0:00:20.355 **********
```
EDIT: run_once also doesn't work with serial: 1; all hosts will execute the command within the same playbook run.
I removed serial: 1 and added run_once to every command I wanted to execute only once, but I am keeping this issue open because I'm not sure this is the expected behaviour for meta with serial: 1.
Tell me if I am wrong.
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.10.3
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
```yaml (paste below)
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
### Expected Results
The end_play meta task should stop the play for all hosts, even if serial: 1 is specified.
### Actual Results
```console (paste below)
The playbook continues and doesn't stop with meta: end_play.
```
|
https://github.com/ansible/ansible/issues/73971
|
https://github.com/ansible/ansible/pull/74332
|
fe20546d36d30e50d6a614ed394c861f50190d46
|
e201b542be23bccd2418eab661cdf5454af3bea8
| 2021-03-19T19:37:10Z |
python
| 2021-06-03T07:26:22Z |
lib/ansible/executor/playbook_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible import context
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.module_utils._text import to_text
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.plugins.loader import become_loader, connection_loader, shell_loader
from ansible.playbook import Playbook
from ansible.template import Templar
from ansible.utils.helpers import pct_to_int
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path
from ansible.utils.path import makedirs_safe
from ansible.utils.ssh_functions import set_default_transport
from ansible.utils.display import Display
display = Display()
class PlaybookExecutor:
'''
This is the primary class for executing playbooks, and thus the
basis for bin/ansible-playbook operation.
'''
def __init__(self, playbooks, inventory, variable_manager, loader, passwords):
self._playbooks = playbooks
self._inventory = inventory
self._variable_manager = variable_manager
self._loader = loader
self.passwords = passwords
self._unreachable_hosts = dict()
if context.CLIARGS.get('listhosts') or context.CLIARGS.get('listtasks') or \
context.CLIARGS.get('listtags') or context.CLIARGS.get('syntax'):
self._tqm = None
else:
self._tqm = TaskQueueManager(
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
passwords=self.passwords,
forks=context.CLIARGS.get('forks'),
)
# Note: We run this here to cache whether the default ansible ssh
# executable supports control persist. Sometime in the future we may
# need to enhance this to check that ansible_ssh_executable specified
# in inventory is also cached. We can't do this caching at the point
# where it is used (in task_executor) because that is post-fork and
# therefore would be discarded after every task.
set_default_transport()
def run(self):
'''
Run the given playbook, based on the settings in the play which
may limit the runs to serialized groups, etc.
'''
result = 0
entrylist = []
entry = {}
try:
# preload become/connection/shell to set config defs cached
list(connection_loader.all(class_only=True))
list(shell_loader.all(class_only=True))
list(become_loader.all(class_only=True))
for playbook in self._playbooks:
# deal with FQCN
resource = _get_collection_playbook_path(playbook)
if resource is not None:
playbook_path = resource[1]
playbook_collection = resource[2]
else:
playbook_path = playbook
# not an FQCN, but might still be a collection playbook
playbook_collection = _get_collection_name_from_path(playbook)
if playbook_collection:
display.warning("running playbook inside collection {0}".format(playbook_collection))
AnsibleCollectionConfig.default_collection = playbook_collection
else:
AnsibleCollectionConfig.default_collection = None
pb = Playbook.load(playbook_path, variable_manager=self._variable_manager, loader=self._loader)
# FIXME: move out of inventory self._inventory.set_playbook_basedir(os.path.realpath(os.path.dirname(playbook_path)))
if self._tqm is None: # we are doing a listing
entry = {'playbook': playbook_path}
entry['plays'] = []
else:
# make sure the tqm has callbacks loaded
self._tqm.load_callbacks()
self._tqm.send_callback('v2_playbook_on_start', pb)
i = 1
plays = pb.get_plays()
display.vv(u'%d plays in %s' % (len(plays), to_text(playbook_path)))
for play in plays:
if play._included_path is not None:
self._loader.set_basedir(play._included_path)
else:
self._loader.set_basedir(pb._basedir)
# clear any filters which may have been applied to the inventory
self._inventory.remove_restriction()
# Allow variables to be used in vars_prompt fields.
all_vars = self._variable_manager.get_vars(play=play)
templar = Templar(loader=self._loader, variables=all_vars)
setattr(play, 'vars_prompt', templar.template(play.vars_prompt))
# FIXME: this should be a play 'sub object' like loop_control
if play.vars_prompt:
for var in play.vars_prompt:
vname = var['name']
prompt = var.get("prompt", vname)
default = var.get("default", None)
private = boolean(var.get("private", True))
confirm = boolean(var.get("confirm", False))
encrypt = var.get("encrypt", None)
salt_size = var.get("salt_size", None)
salt = var.get("salt", None)
unsafe = var.get("unsafe", None)
if vname not in self._variable_manager.extra_vars:
if self._tqm:
self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt,
default, unsafe)
play.vars[vname] = display.do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe)
else: # we are either in --list-<option> or syntax check
play.vars[vname] = default
# Post validate so any play level variables are templated
all_vars = self._variable_manager.get_vars(play=play)
templar = Templar(loader=self._loader, variables=all_vars)
play.post_validate(templar)
if context.CLIARGS['syntax']:
continue
if self._tqm is None:
# we are just doing a listing
entry['plays'].append(play)
else:
self._tqm._unreachable_hosts.update(self._unreachable_hosts)
previously_failed = len(self._tqm._failed_hosts)
previously_unreachable = len(self._tqm._unreachable_hosts)
break_play = False
# we are actually running plays
batches = self._get_serialized_batches(play)
if len(batches) == 0:
self._tqm.send_callback('v2_playbook_on_play_start', play)
self._tqm.send_callback('v2_playbook_on_no_hosts_matched')
for batch in batches:
# restrict the inventory to the hosts in the serialized batch
self._inventory.restrict_to_hosts(batch)
# and run it...
result = self._tqm.run(play=play)
# break the play if the result equals the special return code
if result & self._tqm.RUN_FAILED_BREAK_PLAY != 0:
result = self._tqm.RUN_FAILED_HOSTS
break_play = True
# check the number of failures here, to see if they're above the maximum
# failure percentage allowed, or if any errors are fatal. If either of those
# conditions are met, we break out, otherwise we only break out if the entire
# batch failed
failed_hosts_count = len(self._tqm._failed_hosts) + len(self._tqm._unreachable_hosts) - \
(previously_failed + previously_unreachable)
if len(batch) == failed_hosts_count:
break_play = True
break
# update the previous counts so they don't accumulate incorrectly
# over multiple serial batches
previously_failed += len(self._tqm._failed_hosts) - previously_failed
previously_unreachable += len(self._tqm._unreachable_hosts) - previously_unreachable
# save the unreachable hosts from this batch
self._unreachable_hosts.update(self._tqm._unreachable_hosts)
if break_play:
break
i = i + 1 # per play
if entry:
entrylist.append(entry) # per playbook
# send the stats callback for this playbook
if self._tqm is not None:
if C.RETRY_FILES_ENABLED:
retries = set(self._tqm._failed_hosts.keys())
retries.update(self._tqm._unreachable_hosts.keys())
retries = sorted(retries)
if len(retries) > 0:
if C.RETRY_FILES_SAVE_PATH:
basedir = C.RETRY_FILES_SAVE_PATH
elif playbook_path:
basedir = os.path.dirname(os.path.abspath(playbook_path))
else:
basedir = '~/'
(retry_name, _) = os.path.splitext(os.path.basename(playbook_path))
filename = os.path.join(basedir, "%s.retry" % retry_name)
if self._generate_retry_inventory(filename, retries):
display.display("\tto retry, use: --limit @%s\n" % filename)
self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats)
# if the last result wasn't zero, break out of the playbook file name loop
if result != 0:
break
if entrylist:
return entrylist
finally:
if self._tqm is not None:
self._tqm.cleanup()
if self._loader:
self._loader.cleanup_all_tmp_files()
if context.CLIARGS['syntax']:
display.display("No issues encountered")
return result
if context.CLIARGS['start_at_task'] and not self._tqm._start_at_done:
display.error(
"No matching task \"%s\" found."
" Note: --start-at-task can only follow static includes."
% context.CLIARGS['start_at_task']
)
return result
def _get_serialized_batches(self, play):
'''
Returns a list of hosts, subdivided into batches based on
the serial size specified in the play.
'''
# make sure we have a unique list of hosts
all_hosts = self._inventory.get_hosts(play.hosts, order=play.order)
all_hosts_len = len(all_hosts)
# the serial value can be listed as a scalar or a list of
# scalars, so we make sure it's a list here
serial_batch_list = play.serial
if len(serial_batch_list) == 0:
serial_batch_list = [-1]
cur_item = 0
serialized_batches = []
while len(all_hosts) > 0:
# get the serial value from current item in the list
serial = pct_to_int(serial_batch_list[cur_item], all_hosts_len)
# if the serial count was not specified or is invalid, default to
# a list of all hosts, otherwise grab a chunk of the hosts equal
# to the current serial item size
if serial <= 0:
serialized_batches.append(all_hosts)
break
else:
play_hosts = []
for x in range(serial):
if len(all_hosts) > 0:
play_hosts.append(all_hosts.pop(0))
serialized_batches.append(play_hosts)
# increment the current batch list item number, and if we've hit
# the end keep using the last element until we've consumed all of
# the hosts in the inventory
cur_item += 1
if cur_item > len(serial_batch_list) - 1:
cur_item = len(serial_batch_list) - 1
return serialized_batches
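    # Worked example (editor's note): with `serial: [1, 2]` and five hosts
    # this returns [[h1], [h2, h3], [h4, h5]]; once the serial list is
    # exhausted its last element is reused for the remaining hosts, and
    # percentage values such as "30%" are converted via pct_to_int() first.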
def _generate_retry_inventory(self, retry_path, replay_hosts):
'''
Called when a playbook run fails. It generates an inventory which allows
re-running on ONLY the failed hosts. This may duplicate some variable
information in group_vars/host_vars but that is ok, and expected.
'''
try:
makedirs_safe(os.path.dirname(retry_path))
with open(retry_path, 'w') as fd:
for x in replay_hosts:
fd.write("%s\n" % x)
except Exception as e:
display.warning("Could not create retry file '%s'.\n\t%s" % (retry_path, to_text(e)))
return False
return True
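    # Editor's note: the retry file written above is just a newline-separated
    # list of host names, so a failed run can be repeated with e.g.
    # `ansible-playbook site.yml --limit @site.retry` (file names assumed).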
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,971 |
meta: end_play doesn't work with serial: 1
|
### Summary
I want to perform an action only once on a cluster, so I execute a playbook against three hosts with serial: 1.
The first host executes the action, ending with meta: end_play. The playbook nevertheless continues for the other hosts, although I can see META: ending play in the -vv verbose output at runtime.
```yaml
- name: Backup postgresql data and push to aws with barman-cloud
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
This is the end of the task list I want to execute on one host:
```yaml
- name: Remove file config file
become_user: root
file:
path: "{{ item }}"
state: absent
with_items:
- /tmp/config
- /tmp/credentials
- /etc/barman.conf
- /etc/barman.d/
- name: Backup succeeded - End play
meta: end_play
```
```
TASK [Remove file config file] ***************************************************************
changed: [int_postgres1] => (item=/tmp/config) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/config", "path": "/tmp/config", "state": "absent"}
changed: [int_postgres1] => (item=/tmp/credentials) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/credentials", "path": "/tmp/credentials", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.conf) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.conf", "path": "/etc/barman.conf", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.d/) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.d/", "path": "/etc/barman.d/", "state": "absent"}
META: ending play
PLAY [Backup postgresql data and push to aws with barman-cloud] ******************************
Friday 19 March 2021 20:20:03 +0100 (0:00:01.171) 0:00:18.459 **********
TASK [Gathering Facts] ***********************************************************************
ok: [int_postgres2]
META: ran handlers
Friday 19 March 2021 20:20:04 +0100 (0:00:01.490) 0:00:19.950 **********
Friday 19 March 2021 20:20:05 +0100 (0:00:00.059) 0:00:20.010 **********
TASK [Check if postgres server is replica] ***************************************************
changed: [int_postgres2] => {"changed": true, "cmd": "curl -s http://***:8008/patroni | jq -r .role", "delta": "0:00:00.017332", "end": "2021-03-19 20:20:05.290983", "rc": 0, "start": "2021-03-19 20:20:05.273651", "stderr": "", "stderr_lines": [], "stdout": "replica", "stdout_lines": ["replica"]}
META: end_host conditional evaluated to false, continuing execution for int_postgres2
Friday 19 March 2021 20:20:05 +0100 (0:00:00.344) 0:00:20.355 **********
```
EDIT: run_once also doesn't work with serial: 1; all hosts will execute the command within the same playbook run.
I removed serial: 1 and added run_once to every command I wanted to execute only once, but I am keeping this issue open because I'm not sure this is the expected behaviour for meta with serial: 1.
Tell me if I am wrong.
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.10.3
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
```yaml (paste below)
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
### Expected Results
The end_play meta task should stop the play for all hosts, even if serial: 1 is specified.
### Actual Results
```console (paste below)
The playbook continues and doesn't stop with meta: end_play.
```
|
https://github.com/ansible/ansible/issues/73971
|
https://github.com/ansible/ansible/pull/74332
|
fe20546d36d30e50d6a614ed394c861f50190d46
|
e201b542be23bccd2418eab661cdf5454af3bea8
| 2021-03-19T19:37:10Z |
python
| 2021-06-03T07:26:22Z |
lib/ansible/executor/task_queue_manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
import tempfile
import threading
import time
import multiprocessing.queues
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.executor.play_iterator import PlayIterator
from ansible.executor.stats import AggregateStats
from ansible.executor.task_result import TaskResult
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils._text import to_text, to_native
from ansible.playbook.play_context import PlayContext
from ansible.playbook.task import Task
from ansible.plugins.loader import callback_loader, strategy_loader, module_loader
from ansible.plugins.callback import CallbackBase
from ansible.template import Templar
from ansible.vars.hostvars import HostVars
from ansible.vars.reserved import warn_if_reserved
from ansible.utils.display import Display
from ansible.utils.lock import lock_decorator
from ansible.utils.multiprocessing import context as multiprocessing_context
__all__ = ['TaskQueueManager']
display = Display()
class CallbackSend:
def __init__(self, method_name, *args, **kwargs):
self.method_name = method_name
self.args = args
self.kwargs = kwargs
class FinalQueue(multiprocessing.queues.Queue):
def __init__(self, *args, **kwargs):
if PY3:
kwargs['ctx'] = multiprocessing_context
super(FinalQueue, self).__init__(*args, **kwargs)
def send_callback(self, method_name, *args, **kwargs):
self.put(
CallbackSend(method_name, *args, **kwargs),
block=False
)
def send_task_result(self, *args, **kwargs):
if isinstance(args[0], TaskResult):
tr = args[0]
else:
tr = TaskResult(*args, **kwargs)
self.put(
tr,
block=False
)
class TaskQueueManager:
'''
This class handles the multiprocessing requirements of Ansible by
creating a pool of worker forks, a result handler fork, and a
manager object with shared datastructures/queues for coordinating
work between all processes.
The queue manager is responsible for loading the play strategy plugin,
which dispatches the Play's tasks to hosts.
'''
RUN_OK = 0
RUN_ERROR = 1
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 4
RUN_FAILED_BREAK_PLAY = 8
RUN_UNKNOWN_ERROR = 255
def __init__(self, inventory, variable_manager, loader, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False, forks=None):
self._inventory = inventory
self._variable_manager = variable_manager
self._loader = loader
self._stats = AggregateStats()
self.passwords = passwords
self._stdout_callback = stdout_callback
self._run_additional_callbacks = run_additional_callbacks
self._run_tree = run_tree
self._forks = forks or 5
self._callbacks_loaded = False
self._callback_plugins = []
self._start_at_done = False
# make sure any module paths (if specified) are added to the module_loader
if context.CLIARGS.get('module_path', False):
for path in context.CLIARGS['module_path']:
if path:
module_loader.add_directory(path)
# a special flag to help us exit cleanly
self._terminated = False
# dictionaries to keep track of failed/unreachable hosts
self._failed_hosts = dict()
self._unreachable_hosts = dict()
try:
self._final_q = FinalQueue()
except OSError as e:
raise AnsibleError("Unable to use multiprocessing, this is normally caused by lack of access to /dev/shm: %s" % to_native(e))
self._callback_lock = threading.Lock()
# A temporary file (opened pre-fork) used by connection
# plugins for inter-process locking.
self._connection_lockfile = tempfile.TemporaryFile()
def _initialize_processes(self, num):
self._workers = []
for i in range(num):
self._workers.append(None)
def load_callbacks(self):
'''
Loads all available callbacks, with the exception of those which
utilize the CALLBACK_TYPE option. When CALLBACK_TYPE is set to 'stdout',
only one such callback plugin will be loaded.
'''
if self._callbacks_loaded:
return
stdout_callback_loaded = False
if self._stdout_callback is None:
self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK
if isinstance(self._stdout_callback, CallbackBase):
stdout_callback_loaded = True
elif isinstance(self._stdout_callback, string_types):
if self._stdout_callback not in callback_loader:
raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback)
else:
self._stdout_callback = callback_loader.get(self._stdout_callback)
self._stdout_callback.set_options()
stdout_callback_loaded = True
else:
raise AnsibleError("callback must be an instance of CallbackBase or the name of a callback plugin")
# get all configured loadable callbacks (adjacent, builtin)
callback_list = list(callback_loader.all(class_only=True))
# add enabled callbacks that refer to collections, which might not appear in normal listing
for c in C.CALLBACKS_ENABLED:
# load all, as collection ones might be using short/redirected names and not a fqcn
plugin = callback_loader.get(c, class_only=True)
# TODO: check if this skip is redundant, loader should handle bad file/plugin cases already
if plugin:
# avoids incorrect entries and duplicates possible due to collections
if plugin not in callback_list:
callback_list.append(plugin)
else:
display.warning("Skipping callback plugin '%s', unable to load" % c)
# for each callback in the list see if we should add it to 'active callbacks' used in the play
for callback_plugin in callback_list:
callback_type = getattr(callback_plugin, 'CALLBACK_TYPE', '')
callback_needs_enabled = getattr(callback_plugin, 'CALLBACK_NEEDS_ENABLED', getattr(callback_plugin, 'CALLBACK_NEEDS_WHITELIST', False))
# try to get collection world name first
cnames = getattr(callback_plugin, '_redirected_names', [])
if cnames:
# store the name the plugin was loaded as, as that's what we'll need to compare to the configured callback list later
callback_name = cnames[0]
else:
# fallback to 'old loader name'
(callback_name, _) = os.path.splitext(os.path.basename(callback_plugin._original_path))
display.vvvvv("Attempting to use '%s' callback." % (callback_name))
if callback_type == 'stdout':
# we only allow one callback of type 'stdout' to be loaded,
if callback_name != self._stdout_callback or stdout_callback_loaded:
display.vv("Skipping callback '%s', as we already have a stdout callback." % (callback_name))
continue
stdout_callback_loaded = True
elif callback_name == 'tree' and self._run_tree:
# TODO: remove special case for tree, which is an adhoc cli option --tree
pass
elif not self._run_additional_callbacks or (callback_needs_enabled and (
# only run if not adhoc, or adhoc was specifically configured to run + check enabled list
C.CALLBACKS_ENABLED is None or callback_name not in C.CALLBACKS_ENABLED)):
# 2.x plugins shipped with ansible should require enabling, older or non shipped should load automatically
continue
try:
callback_obj = callback_plugin()
# avoid bad plugin not returning an object, only needed cause we do class_only load and bypass loader checks,
# really a bug in the plugin itself which we ignore as callback errors are not supposed to be fatal.
if callback_obj:
# skip initializing if we already did the work for the same plugin (even with diff names)
if callback_obj not in self._callback_plugins:
callback_obj.set_options()
self._callback_plugins.append(callback_obj)
else:
display.vv("Skipping callback '%s', already loaded as '%s'." % (callback_plugin, callback_name))
else:
display.warning("Skipping callback '%s', as it does not create a valid plugin instance." % callback_name)
continue
except Exception as e:
display.warning("Skipping callback '%s', unable to load due to: %s" % (callback_name, to_native(e)))
continue
self._callbacks_loaded = True
def run(self, play):
'''
Iterates over the roles/tasks in a play, using the given (or default)
strategy for queueing tasks. The default is the linear strategy, which
operates like classic Ansible by keeping all hosts in lock-step with
a given task (meaning no hosts move on to the next task until all hosts
are done with the current task).
'''
if not self._callbacks_loaded:
self.load_callbacks()
all_vars = self._variable_manager.get_vars(play=play)
templar = Templar(loader=self._loader, variables=all_vars)
warn_if_reserved(all_vars, templar.environment.globals.keys())
new_play = play.copy()
new_play.post_validate(templar)
new_play.handlers = new_play.compile_roles_handlers() + new_play.handlers
self.hostvars = HostVars(
inventory=self._inventory,
variable_manager=self._variable_manager,
loader=self._loader,
)
play_context = PlayContext(new_play, self.passwords, self._connection_lockfile.fileno())
if (self._stdout_callback and
hasattr(self._stdout_callback, 'set_play_context')):
self._stdout_callback.set_play_context(play_context)
for callback_plugin in self._callback_plugins:
if hasattr(callback_plugin, 'set_play_context'):
callback_plugin.set_play_context(play_context)
self.send_callback('v2_playbook_on_play_start', new_play)
# build the iterator
iterator = PlayIterator(
inventory=self._inventory,
play=new_play,
play_context=play_context,
variable_manager=self._variable_manager,
all_vars=all_vars,
start_at_done=self._start_at_done,
)
# adjust to # of workers to configured forks or size of batch, whatever is lower
self._initialize_processes(min(self._forks, iterator.batch_size))
# load the specified strategy (or the default linear one)
strategy = strategy_loader.get(new_play.strategy, self)
if strategy is None:
raise AnsibleError("Invalid play strategy specified: %s" % new_play.strategy, obj=play._ds)
# Because the TQM may survive multiple play runs, we start by marking
# any hosts as failed in the iterator here which may have been marked
# as failed in previous runs. Then we clear the internal list of failed
# hosts so we know what failed this round.
for host_name in self._failed_hosts.keys():
host = self._inventory.get_host(host_name)
iterator.mark_host_failed(host)
for host_name in self._unreachable_hosts.keys():
iterator._play._removed_hosts.append(host_name)
self.clear_failed_hosts()
# during initialization, the PlayContext will clear the start_at_task
# field to signal that a matching task was found, so check that here
# and remember it so we don't try to skip tasks on future plays
if context.CLIARGS.get('start_at_task') is not None and play_context.start_at_task is None:
self._start_at_done = True
# and run the play using the strategy and cleanup on way out
try:
play_return = strategy.run(iterator, play_context)
finally:
strategy.cleanup()
self._cleanup_processes()
# now re-save the hosts that failed from the iterator to our internal list
for host_name in iterator.get_failed_hosts():
self._failed_hosts[host_name] = True
return play_return
def cleanup(self):
display.debug("RUNNING CLEANUP")
self.terminate()
self._final_q.close()
self._cleanup_processes()
# A bug exists in Python 2.6 that causes an exception to be raised during
# interpreter shutdown. This is only an issue in our CI testing but we
# hit it frequently enough to add a small sleep to avoid the issue.
# This can be removed once we have split controller available in CI.
#
# Further information:
# Issue: https://bugs.python.org/issue4106
# Fix: https://hg.python.org/cpython/rev/d316315a8781
#
try:
if (2, 6) == (sys.version_info[0:2]):
time.sleep(0.0001)
except (IndexError, AttributeError):
# In case there is an issue getting the version info, don't raise an Exception
pass
def _cleanup_processes(self):
if hasattr(self, '_workers'):
for attempts_remaining in range(C.WORKER_SHUTDOWN_POLL_COUNT - 1, -1, -1):
if not any(worker_prc and worker_prc.is_alive() for worker_prc in self._workers):
break
if attempts_remaining:
time.sleep(C.WORKER_SHUTDOWN_POLL_DELAY)
else:
display.warning('One or more worker processes are still running and will be terminated.')
for worker_prc in self._workers:
if worker_prc and worker_prc.is_alive():
try:
worker_prc.terminate()
except AttributeError:
pass
def clear_failed_hosts(self):
self._failed_hosts = dict()
def get_inventory(self):
return self._inventory
def get_variable_manager(self):
return self._variable_manager
def get_loader(self):
return self._loader
def get_workers(self):
return self._workers[:]
def terminate(self):
self._terminated = True
def has_dead_workers(self):
# [<WorkerProcess(WorkerProcess-2, stopped[SIGKILL])>,
# <WorkerProcess(WorkerProcess-2, stopped[SIGTERM])>
defunct = False
for x in self._workers:
if getattr(x, 'exitcode', None):
defunct = True
return defunct
@lock_decorator(attr='_callback_lock')
def send_callback(self, method_name, *args, **kwargs):
for callback_plugin in [self._stdout_callback] + self._callback_plugins:
# a plugin that set self.disabled to True will not be called
# see osx_say.py example for such a plugin
if getattr(callback_plugin, 'disabled', False):
continue
# a plugin can opt in to implicit tasks (such as meta). It does this
# by declaring self.wants_implicit_tasks = True.
wants_implicit_tasks = getattr(callback_plugin, 'wants_implicit_tasks', False)
# try to find v2 method, fallback to v1 method, ignore callback if no method found
methods = []
for possible in [method_name, 'v2_on_any']:
gotit = getattr(callback_plugin, possible, None)
if gotit is None:
gotit = getattr(callback_plugin, possible.replace('v2_', ''), None)
if gotit is not None:
methods.append(gotit)
# send clean copies
new_args = []
# If we end up being given an implicit task, we'll set this flag in
# the loop below. If the plugin doesn't care about those, then we
# check and continue to the next iteration of the outer loop.
is_implicit_task = False
for arg in args:
# FIXME: add play/task cleaners
if isinstance(arg, TaskResult):
new_args.append(arg.clean_copy())
# elif isinstance(arg, Play):
# elif isinstance(arg, Task):
else:
new_args.append(arg)
if isinstance(arg, Task) and arg.implicit:
is_implicit_task = True
if is_implicit_task and not wants_implicit_tasks:
continue
for method in methods:
try:
method(*new_args, **kwargs)
except Exception as e:
# TODO: add config toggle to make this fatal or not?
display.warning(u"Failure using method (%s) in callback plugin (%s): %s" % (to_text(method_name), to_text(callback_plugin), to_text(e)))
from traceback import format_tb
from sys import exc_info
display.vvv('Callback Exception: \n' + ' '.join(format_tb(exc_info()[2])))
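# Editor's sketch of the lookup above: for an event such as
# 'v2_playbook_on_play_start', each pair is probed v2-first, so the plugin
# is checked for v2_playbook_on_play_start then playbook_on_play_start, and
# separately for v2_on_any then on_any; every method found this way is
# invoked, so a plugin implementing both the specific hook and an on_any
# hook sees the event twice.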
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,971 |
meta: end_play doesn't work with serial: 1
|
### Summary
I want to perform an action only once on a cluster, so I execute a playbook against three hosts with serial: 1.
The first host executes the action, ending with meta: end_play. The playbook nevertheless continues for the other hosts, although I can see META: ending play in the -vv verbose output at runtime.
```yaml
- name: Backup postgresql data and push to aws with barman-cloud
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
This is the end of the task list I want to execute on one host:
```yaml
- name: Remove file config file
become_user: root
file:
path: "{{ item }}"
state: absent
with_items:
- /tmp/config
- /tmp/credentials
- /etc/barman.conf
- /etc/barman.d/
- name: Backup succeeded - End play
meta: end_play
```
```
TASK [Remove file config file] ***************************************************************
changed: [int_postgres1] => (item=/tmp/config) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/config", "path": "/tmp/config", "state": "absent"}
changed: [int_postgres1] => (item=/tmp/credentials) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/credentials", "path": "/tmp/credentials", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.conf) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.conf", "path": "/etc/barman.conf", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.d/) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.d/", "path": "/etc/barman.d/", "state": "absent"}
META: ending play
PLAY [Backup postgresql data and push to aws with barman-cloud] ******************************
Friday 19 March 2021 20:20:03 +0100 (0:00:01.171) 0:00:18.459 **********
TASK [Gathering Facts] ***********************************************************************
ok: [int_postgres2]
META: ran handlers
Friday 19 March 2021 20:20:04 +0100 (0:00:01.490) 0:00:19.950 **********
Friday 19 March 2021 20:20:05 +0100 (0:00:00.059) 0:00:20.010 **********
TASK [Check if postgres server is replica] ***************************************************
changed: [int_postgres2] => {"changed": true, "cmd": "curl -s http://***:8008/patroni | jq -r .role", "delta": "0:00:00.017332", "end": "2021-03-19 20:20:05.290983", "rc": 0, "start": "2021-03-19 20:20:05.273651", "stderr": "", "stderr_lines": [], "stdout": "replica", "stdout_lines": ["replica"]}
META: end_host conditional evaluated to false, continuing execution for int_postgres2
Friday 19 March 2021 20:20:05 +0100 (0:00:00.344) 0:00:20.355 **********
```
EDIT: run_once also doesn't work with serial: 1; all hosts will execute the command within the same playbook run.
I removed serial: 1 and added run_once to every command I wanted to execute only once, but I am keeping this issue open because I'm not sure this is the expected behaviour for meta with serial: 1.
Tell me if I am wrong.
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.10.3
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
```yaml (paste below)
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
### Expected Results
The end_play meta task should stop the play for all hosts, even if serial: 1 is specified.
### Actual Results
```console (paste below)
The playbook continues and doesn't stop with meta: end_play.
```
|
https://github.com/ansible/ansible/issues/73971
|
https://github.com/ansible/ansible/pull/74332
|
fe20546d36d30e50d6a614ed394c861f50190d46
|
e201b542be23bccd2418eab661cdf5454af3bea8
| 2021-03-19T19:37:10Z |
python
| 2021-06-03T07:26:22Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
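    # Placed on the final results queue (see ``cleanup``) to signal the
    # results thread to shut down.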
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar):
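    """Evaluate a task's changed_when/failed_when conditionals against the
    result, updating the 'changed' and 'failed' keys of ``result`` in place."""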
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def results_thread_main(strategy):
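    """Run in a background thread: drain the strategy's final queue,
    forwarding callback sends and sorting TaskResults into the handler or
    regular result deque until the strategy sentinel is received."""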
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
# return the appropriate code, depending on the status hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
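        # A run_once result is applied to every reachable host in the play,
        # not only the host that actually executed the task.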
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
# find the host we're actually referring too here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
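        # Base implementation: contribute no failed hosts back to the
        # notification list; strategy subclasses may override this.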
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
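                # NOTE: with 'serial', the inventory is restricted to the
                # current batch, so this only ends the play for hosts in that
                # batch; the next batch starts a fresh play iteration, which
                # is the behaviour reported in #73971.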
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,971 |
meta: end_play doesn't work with serial: 1
|
### Summary
I want to perform an action only once on a cluster, so I execute a playbook with three hosts and serial: 1.
The first host executes the action with meta: end_play at the end. The playbook continues for the other hosts, even though I can see meta: end_play at runtime with verbose -vv.
```yaml
- name: Backup postgresql data and push to aws with barman-cloud
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
This is the end of the task list I want to execute on one host:
```yaml
- name: Remove file config file
become_user: root
file:
path: "{{ item }}"
state: absent
with_items:
- /tmp/config
- /tmp/credentials
- /etc/barman.conf
- /etc/barman.d/
- name: Backup succeeded - End play
meta: end_play
```
```
TASK [Remove file config file] ***************************************************************
changed: [int_postgres1] => (item=/tmp/config) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/config", "path": "/tmp/config", "state": "absent"}
changed: [int_postgres1] => (item=/tmp/credentials) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/credentials", "path": "/tmp/credentials", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.conf) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.conf", "path": "/etc/barman.conf", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.d/) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.d/", "path": "/etc/barman.d/", "state": "absent"}
META: ending play
PLAY [Backup postgresql data and push to aws with barman-cloud] ******************************
Friday 19 March 2021 20:20:03 +0100 (0:00:01.171) 0:00:18.459 **********
TASK [Gathering Facts] ***********************************************************************
ok: [int_postgres2]
META: ran handlers
Friday 19 March 2021 20:20:04 +0100 (0:00:01.490) 0:00:19.950 **********
Friday 19 March 2021 20:20:05 +0100 (0:00:00.059) 0:00:20.010 **********
TASK [Check if postgres server is replica] ***************************************************
changed: [int_postgres2] => {"changed": true, "cmd": "curl -s http://***:8008/patroni | jq -r .role", "delta": "0:00:00.017332", "end": "2021-03-19 20:20:05.290983", "rc": 0, "start": "2021-03-19 20:20:05.273651", "stderr": "", "stderr_lines": [], "stdout": "replica", "stdout_lines": ["replica"]}
META: end_host conditional evaluated to false, continuing execution for int_postgres2
Friday 19 March 2021 20:20:05 +0100 (0:00:00.344) 0:00:20.355 **********
```
EDIT: run_once also doesn't work with serial: 1; all hosts will execute the command within the same playbook.
I removed serial: 1 and added run_once to every command I wanted to execute only once, but I am keeping this issue open because I'm not sure this is the expected behaviour for meta with serial: 1.
Tell me if I am wrong.
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console
$ ansible --version
ansible 2.10.3
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
```yaml
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
### Expected Results
The end_play meta task should stop all hosts even when serial: 1 is specified.
### Actual Results
```console
playbook continues and doesn't stop with meta: end_play
```
|
https://github.com/ansible/ansible/issues/73971
|
https://github.com/ansible/ansible/pull/74332
|
fe20546d36d30e50d6a614ed394c861f50190d46
|
e201b542be23bccd2418eab661cdf5454af3bea8
| 2021-03-19T19:37:10Z |
python
| 2021-06-03T07:26:22Z |
test/integration/targets/meta_tasks/runme.sh
|
#!/usr/bin/env bash
set -eux
# test end_host meta task, with when conditional
for test_strategy in linear free; do
out="$(ansible-playbook test_end_host.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: end_host conditional evaluated to false, continuing execution for testhost" <<< "$out"
grep -q "META: ending play for testhost2" <<< "$out"
grep -q '"skip_reason": "end_host conditional evaluated to False, continuing execution for testhost"' <<< "$out"
grep -q "play not ended for testhost" <<< "$out"
grep -qv "play not ended for testhost2" <<< "$out"
out="$(ansible-playbook test_end_host_fqcn.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: end_host conditional evaluated to false, continuing execution for testhost" <<< "$out"
grep -q "META: ending play for testhost2" <<< "$out"
grep -q '"skip_reason": "end_host conditional evaluated to False, continuing execution for testhost"' <<< "$out"
grep -q "play not ended for testhost" <<< "$out"
grep -qv "play not ended for testhost2" <<< "$out"
done
# test end_host meta task, on all hosts
for test_strategy in linear free; do
out="$(ansible-playbook test_end_host_all.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play for testhost" <<< "$out"
grep -q "META: ending play for testhost2" <<< "$out"
grep -qv "play not ended for testhost" <<< "$out"
grep -qv "play not ended for testhost2" <<< "$out"
out="$(ansible-playbook test_end_host_all_fqcn.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play for testhost" <<< "$out"
grep -q "META: ending play for testhost2" <<< "$out"
grep -qv "play not ended for testhost" <<< "$out"
grep -qv "play not ended for testhost2" <<< "$out"
done
# test end_play meta task
for test_strategy in linear free; do
out="$(ansible-playbook test_end_play.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play" <<< "$out"
grep -qv 'Failed to end using end_play' <<< "$out"
out="$(ansible-playbook test_end_play_fqcn.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play" <<< "$out"
grep -qv 'Failed to end using end_play' <<< "$out"
done
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,971 |
meta: end_play doesn't work with serial: 1
|
### Summary
I want to perform an action only once on a cluster, so I execute a playbook with three hosts and serial: 1.
The first host executes the action with meta: end_play at the end. The playbook continues for the other hosts, even though I can see meta: end_play at runtime with verbose -vv.
```yaml
- name: Backup postgresql data and push to aws with barman-cloud
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
This is the end of the task list I want to execute on one host:
```yaml
- name: Remove file config file
become_user: root
file:
path: "{{ item }}"
state: absent
with_items:
- /tmp/config
- /tmp/credentials
- /etc/barman.conf
- /etc/barman.d/
- name: Backup succeeded - End play
meta: end_play
```
```
TASK [Remove file config file] ***************************************************************
changed: [int_postgres1] => (item=/tmp/config) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/config", "path": "/tmp/config", "state": "absent"}
changed: [int_postgres1] => (item=/tmp/credentials) => {"ansible_loop_var": "item", "changed": true, "item": "/tmp/credentials", "path": "/tmp/credentials", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.conf) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.conf", "path": "/etc/barman.conf", "state": "absent"}
changed: [int_postgres1] => (item=/etc/barman.d/) => {"ansible_loop_var": "item", "changed": true, "item": "/etc/barman.d/", "path": "/etc/barman.d/", "state": "absent"}
META: ending play
PLAY [Backup postgresql data and push to aws with barman-cloud] ******************************
Friday 19 March 2021 20:20:03 +0100 (0:00:01.171) 0:00:18.459 **********
TASK [Gathering Facts] ***********************************************************************
ok: [int_postgres2]
META: ran handlers
Friday 19 March 2021 20:20:04 +0100 (0:00:01.490) 0:00:19.950 **********
Friday 19 March 2021 20:20:05 +0100 (0:00:00.059) 0:00:20.010 **********
TASK [Check if postgres server is replica] ***************************************************
changed: [int_postgres2] => {"changed": true, "cmd": "curl -s http://***:8008/patroni | jq -r .role", "delta": "0:00:00.017332", "end": "2021-03-19 20:20:05.290983", "rc": 0, "start": "2021-03-19 20:20:05.273651", "stderr": "", "stderr_lines": [], "stdout": "replica", "stdout_lines": ["replica"]}
META: end_host conditional evaluated to false, continuing execution for int_postgres2
Friday 19 March 2021 20:20:05 +0100 (0:00:00.344) 0:00:20.355 **********
```
EDIT: run_once also doesn't work with serial: 1; all hosts will execute the command within the same playbook.
I removed serial: 1 and added run_once to every command I wanted to execute only once, but I am keeping this issue open because I'm not sure this is the expected behaviour for meta with serial: 1.
Tell me if I am wrong.
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console
$ ansible --version
ansible 2.10.3
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
```yaml
hosts: "{{ target }}"
become: true
become_user: postgres
serial: 1
any_errors_fatal: true
gather_facts: true
```
### Expected Results
The end_play meta task should stop all hosts even when serial: 1 is specified.
### Actual Results
```console
playbook continues and doesn't stop with meta: end_play
```
|
https://github.com/ansible/ansible/issues/73971
|
https://github.com/ansible/ansible/pull/74332
|
fe20546d36d30e50d6a614ed394c861f50190d46
|
e201b542be23bccd2418eab661cdf5454af3bea8
| 2021-03-19T19:37:10Z |
python
| 2021-06-03T07:26:22Z |
test/integration/targets/meta_tasks/test_end_play_serial_one.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,866 |
When gathering facts, Ansible fails to detect runit service manager if unable to read /proc/1/comm
|
### Summary
One way this can happen is when Ansible runs as a non-privileged user and ``/proc`` is mounted with ``hidepid=invisible``.
In this situation, ``ansible_service_mgr`` will be set to ``service`` as a fallback value.
Since the ``service`` module has no built-in support for ``runit``, it then fails to work correctly.
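For illustration, a minimal sketch of the kind of on-disk fallback that could cover this case — the function name and the specific runit markers below are assumptions for the example, not the actual patch:
```python
import os

def guess_service_mgr_fallback():
    """Hypothetical fallback for when /proc/1/comm is unreadable, e.g. when
    /proc is mounted with hidepid=2 and Ansible runs as a normal user."""
    # Void Linux boots with runit as init; these on-disk markers stay
    # visible even when process details are hidden. (Illustrative check.)
    if os.path.isdir('/etc/runit') and os.path.exists('/etc/runit/2'):
        return 'runit'
    return 'service'  # Ansible's current generic fallback value
```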
### Issue Type
Bug Report
### Component Name
module_utils
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.0]
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.5 (default, May 5 2021, 14:50:57) [GCC 10.2.1 20201203]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
- Void Linux x86_64
### Steps to Reproduce
```
# Remount /proc so that normal users are not allowed to see process details of processes belonging to other users:
sudo mount -o remount,hidepid=2 /proc
# Check service_mgr fact gathered by Ansible:
ansible localhost -m setup|grep service_mgr
"ansible_service_mgr": "service",
```
### Expected Results
The expected output would be:
```
"ansible_service_mgr": "runit",
```
### Actual Results
The ``service`` fallback value is used:
```
"ansible_service_mgr": "service",
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74866
|
https://github.com/ansible/ansible/pull/74867
|
9c718ccc4288473d44ebf13599a409913d43e3cc
|
b023f34f4a31c91ff842bae54174c97ae03a57af
| 2021-05-31T11:27:36Z |
python
| 2021-06-03T13:59:31Z |
changelogs/fragments/74867-service_mgr_runit_detection_fallback.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,866 |
When gathering facts, Ansible fails to detect runit service manager if unable to read /proc/1/comm
|
### Summary
One way this can happen is when Ansible runs as a non-privileged user and ``/proc`` is mounted with ``hidepid=invisible``.
In this situation, ``ansible_service_mgr`` will be set to ``service`` as a fallback value.
Since the ``service`` module has no built-in support for ``runit``, it then fails to work correctly.
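A related pitfall, sketched here to mirror the regex check Ansible's collector applies to its ``ps`` fallback (the sample output string below is hypothetical): busybox ``ps`` ignores ``-o comm`` and prints a PID column, so digit-led output must be discarded rather than trusted.
```python
import re

ps_output = '    1 runit-init'  # hypothetical busybox-style 'ps' output
if re.match(r' *[0-9]+ ', ps_output):  # a PID column, not a command name
    ps_output = None  # discarded, so detection falls through to 'service'
print(ps_output)
```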
### Issue Type
Bug Report
### Component Name
module_utils
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.0]
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.5 (default, May 5 2021, 14:50:57) [GCC 10.2.1 20201203]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
- Void Linux x86_64
### Steps to Reproduce
```
# Remount /proc so that normal users are not allowed to see process details of processes belonging to other users:
sudo mount -o remount,hidepid=2 /proc
# Check service_mgr fact gathered by Ansible:
ansible localhost -m setup|grep service_mgr
"ansible_service_mgr": "service",
```
### Expected Results
The expected output would be:
```
"ansible_service_mgr": "runit",
```
### Actual Results
The ``service`` fallback value is used:
```
"ansible_service_mgr": "service",
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74866
|
https://github.com/ansible/ansible/pull/74867
|
9c718ccc4288473d44ebf13599a409913d43e3cc
|
b023f34f4a31c91ff842bae54174c97ae03a57af
| 2021-05-31T11:27:36Z |
python
| 2021-06-03T13:59:31Z |
lib/ansible/module_utils/facts/system/service_mgr.py
|
# Collect facts related to system service manager and init.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
from ansible.module_utils._text import to_native
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
# The distutils module is not shipped with SUNWPython on Solaris.
# It's in the SUNWPython-devel package which also contains development files
# that don't belong on production boxes. Since our Solaris code doesn't
# depend on LooseVersion, do not import it on Solaris.
if platform.system() != 'SunOS':
from ansible.module_utils.compat.version import LooseVersion
class ServiceMgrFactCollector(BaseFactCollector):
name = 'service_mgr'
_fact_ids = set()
required_facts = set(['platform', 'distribution'])
@staticmethod
def is_systemd_managed(module):
# tools must be installed
if module.get_bin_path('systemctl'):
            # this should show whether systemd is the boot init system, in case checking init failed to mark it as systemd
# these mirror systemd's own sd_boot test http://www.freedesktop.org/software/systemd/man/sd_booted.html
for canary in ["/run/systemd/system/", "/dev/.run/systemd/", "/dev/.systemd/"]:
if os.path.exists(canary):
return True
return False
@staticmethod
def is_systemd_managed_offline(module):
# tools must be installed
if module.get_bin_path('systemctl'):
# check if /sbin/init is a symlink to systemd
# on SUSE, /sbin/init may be missing if systemd-sysvinit package is not installed.
if os.path.islink('/sbin/init') and os.path.basename(os.readlink('/sbin/init')) == 'systemd':
return True
return False
def collect(self, module=None, collected_facts=None):
facts_dict = {}
if not module:
return facts_dict
collected_facts = collected_facts or {}
service_mgr_name = None
# TODO: detect more custom init setups like bootscripts, dmd, s6, Epoch, etc
        # also, OSs other than Linux might need to check across several possible candidates
# Mapping of proc_1 values to more useful names
proc_1_map = {
'procd': 'openwrt_init',
'runit-init': 'runit',
'svscan': 'svc',
'openrc-init': 'openrc',
}
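        # e.g. 'runit-init' maps to 'runit' below, while an unknown name such as
        # 'mycustominit' (a hypothetical value) passes through unchanged via the
        # proc_1_map.get(proc_1, proc_1) lookup further down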
# try various forms of querying pid 1
proc_1 = get_file_content('/proc/1/comm')
if proc_1 is None:
            # FIXME: return code isn't checked
# FIXME: if stdout is empty string, odd things
# FIXME: other code seems to think we could get proc_1 == None past this point
rc, proc_1, err = module.run_command("ps -p 1 -o comm|tail -n 1", use_unsafe_shell=True)
# If the output of the command starts with what looks like a PID, then the 'ps' command
# probably didn't work the way we wanted, probably because it's busybox
if re.match(r' *[0-9]+ ', proc_1):
proc_1 = None
# The ps command above may return "COMMAND" if the user cannot read /proc, e.g. with grsecurity
if proc_1 == "COMMAND\n":
proc_1 = None
        # FIXME: an empty string proc_1 stays an empty string
if proc_1 is not None:
proc_1 = os.path.basename(proc_1)
proc_1 = to_native(proc_1)
proc_1 = proc_1.strip()
if proc_1 is not None and (proc_1 == 'init' or proc_1.endswith('sh')):
            # many systems return init, so this cannot be trusted; if it ends in 'sh' it probably is a shell in a container
proc_1 = None
# if not init/None it should be an identifiable or custom init, so we are done!
if proc_1 is not None:
# Lookup proc_1 value in map and use proc_1 value itself if no match
# FIXME: empty string still falls through
service_mgr_name = proc_1_map.get(proc_1, proc_1)
# FIXME: replace with a system->service_mgr_name map?
# start with the easy ones
elif collected_facts.get('ansible_distribution', None) == 'MacOSX':
# FIXME: find way to query executable, version matching is not ideal
if LooseVersion(platform.mac_ver()[0]) >= LooseVersion('10.4'):
service_mgr_name = 'launchd'
else:
service_mgr_name = 'systemstarter'
elif 'BSD' in collected_facts.get('ansible_system', '') or collected_facts.get('ansible_system') in ['Bitrig', 'DragonFly']:
# FIXME: we might want to break out to individual BSDs or 'rc'
service_mgr_name = 'bsdinit'
elif collected_facts.get('ansible_system') == 'AIX':
service_mgr_name = 'src'
elif collected_facts.get('ansible_system') == 'SunOS':
service_mgr_name = 'smf'
elif collected_facts.get('ansible_distribution') == 'OpenWrt':
service_mgr_name = 'openwrt_init'
elif collected_facts.get('ansible_system') == 'Linux':
# FIXME: mv is_systemd_managed
if self.is_systemd_managed(module=module):
service_mgr_name = 'systemd'
elif module.get_bin_path('initctl') and os.path.exists("/etc/init/"):
service_mgr_name = 'upstart'
elif os.path.exists('/sbin/openrc'):
service_mgr_name = 'openrc'
elif self.is_systemd_managed_offline(module=module):
service_mgr_name = 'systemd'
elif os.path.exists('/etc/init.d/'):
service_mgr_name = 'sysvinit'
if not service_mgr_name:
# if we cannot detect, fallback to generic 'service'
service_mgr_name = 'service'
facts_dict['service_mgr'] = service_mgr_name
return facts_dict
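# Illustrative sketch (an assumption, not this file's code and not necessarily
# the linked PR's exact change): one conceivable runit fix is an extra
# filesystem probe in the Linux branch above, e.g.:
#
#     elif os.path.islink('/sbin/init') and os.path.basename(os.readlink('/sbin/init')) == 'runit-init':
#         service_mgr_name = 'runit'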
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,866 |
When gathering facts, Ansible fails to detect runit service manager if unable to read /proc/1/comm
|
### Summary
One way this can happen is when Ansible runs as a non-privileged user and ``/proc`` is mounted with ``hidepid=invisible``.
In this situation, ``ansible_service_mgr`` will be set to ``service`` as a fallback value.
Since the ``service`` module has no built-in support for ``runit``, it then fails to work correctly.
### Issue Type
Bug Report
### Component Name
module_utils
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.0]
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.5 (default, May 5 2021, 14:50:57) [GCC 10.2.1 20201203]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
- Void Linux x86_64
### Steps to Reproduce
```
# Remount /proc so that normal users are not allowed to see process details of processes belonging to other users:
sudo mount -o remount,hidepid=2 /proc
# Check service_mgr fact gathered by Ansible:
ansible localhost -m setup|grep service_mgr
"ansible_service_mgr": "service",
```
### Expected Results
The expected output would be:
```
"ansible_service_mgr": "runit",
```
### Actual Results
The ``service`` fallback value is used:
```
"ansible_service_mgr": "service",
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74866
|
https://github.com/ansible/ansible/pull/74867
|
9c718ccc4288473d44ebf13599a409913d43e3cc
|
b023f34f4a31c91ff842bae54174c97ae03a57af
| 2021-05-31T11:27:36Z |
python
| 2021-06-03T13:59:31Z |
test/units/module_utils/facts/test_collectors.py
|
# unit tests for ansible fact collectors
# -*- coding: utf-8 -*-
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.compat.mock import Mock, patch
from . base import BaseFactsTest
from ansible.module_utils.facts import collector
from ansible.module_utils.facts.system.apparmor import ApparmorFactCollector
from ansible.module_utils.facts.system.caps import SystemCapabilitiesFactCollector
from ansible.module_utils.facts.system.cmdline import CmdLineFactCollector
from ansible.module_utils.facts.system.distribution import DistributionFactCollector
from ansible.module_utils.facts.system.dns import DnsFactCollector
from ansible.module_utils.facts.system.env import EnvFactCollector
from ansible.module_utils.facts.system.fips import FipsFactCollector
from ansible.module_utils.facts.system.pkg_mgr import PkgMgrFactCollector, OpenBSDPkgMgrFactCollector
from ansible.module_utils.facts.system.platform import PlatformFactCollector
from ansible.module_utils.facts.system.python import PythonFactCollector
from ansible.module_utils.facts.system.selinux import SelinuxFactCollector
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.system.ssh_pub_keys import SshPubKeyFactCollector
from ansible.module_utils.facts.system.user import UserFactCollector
from ansible.module_utils.facts.virtual.base import VirtualCollector
from ansible.module_utils.facts.network.base import NetworkCollector
from ansible.module_utils.facts.hardware.base import HardwareCollector
class CollectorException(Exception):
pass
class ExceptionThrowingCollector(collector.BaseFactCollector):
name = 'exc_throwing'
def __init__(self, collectors=None, namespace=None, exception=None):
super(ExceptionThrowingCollector, self).__init__(collectors, namespace)
self._exception = exception or CollectorException('collection failed')
def collect(self, module=None, collected_facts=None):
raise self._exception
class TestExceptionThrowingCollector(BaseFactsTest):
__test__ = True
gather_subset = ['exc_throwing']
valid_subsets = ['exc_throwing']
collector_class = ExceptionThrowingCollector
def test_collect(self):
module = self._mock_module()
fact_collector = self.collector_class()
self.assertRaises(CollectorException,
fact_collector.collect,
module=module,
collected_facts=self.collected_facts)
def test_collect_with_namespace(self):
module = self._mock_module()
fact_collector = self.collector_class()
self.assertRaises(CollectorException,
fact_collector.collect_with_namespace,
module=module,
collected_facts=self.collected_facts)
class TestApparmorFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'apparmor']
valid_subsets = ['apparmor']
fact_namespace = 'ansible_apparmor'
collector_class = ApparmorFactCollector
def test_collect(self):
facts_dict = super(TestApparmorFacts, self).test_collect()
self.assertIn('status', facts_dict['apparmor'])
class TestCapsFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'caps']
valid_subsets = ['caps']
fact_namespace = 'ansible_system_capabilities'
collector_class = SystemCapabilitiesFactCollector
def _mock_module(self):
mock_module = Mock()
mock_module.params = {'gather_subset': self.gather_subset,
'gather_timeout': 10,
'filter': '*'}
mock_module.get_bin_path = Mock(return_value='/usr/sbin/capsh')
mock_module.run_command = Mock(return_value=(0, 'Current: =ep', ''))
return mock_module
class TestCmdLineFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'cmdline']
valid_subsets = ['cmdline']
fact_namespace = 'ansible_cmdline'
collector_class = CmdLineFactCollector
def test_parse_proc_cmdline_uefi(self):
uefi_cmdline = r'initrd=\70ef65e1a04a47aea04f7b5145ea3537\4.10.0-19-generic\initrd root=UUID=50973b75-4a66-4bf0-9764-2b7614489e64 ro quiet'
expected = {'initrd': r'\70ef65e1a04a47aea04f7b5145ea3537\4.10.0-19-generic\initrd',
'root': 'UUID=50973b75-4a66-4bf0-9764-2b7614489e64',
'quiet': True,
'ro': True}
fact_collector = self.collector_class()
facts_dict = fact_collector._parse_proc_cmdline(uefi_cmdline)
self.assertDictEqual(facts_dict, expected)
def test_parse_proc_cmdline_fedora(self):
cmdline_fedora = r'BOOT_IMAGE=/vmlinuz-4.10.16-200.fc25.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.luks.uuid=luks-c80b7537-358b-4a07-b88c-c59ef187479b rd.lvm.lv=fedora/swap rhgb quiet LANG=en_US.UTF-8' # noqa
expected = {'BOOT_IMAGE': '/vmlinuz-4.10.16-200.fc25.x86_64',
'LANG': 'en_US.UTF-8',
'quiet': True,
'rd.luks.uuid': 'luks-c80b7537-358b-4a07-b88c-c59ef187479b',
'rd.lvm.lv': 'fedora/swap',
'rhgb': True,
'ro': True,
'root': '/dev/mapper/fedora-root'}
fact_collector = self.collector_class()
facts_dict = fact_collector._parse_proc_cmdline(cmdline_fedora)
self.assertDictEqual(facts_dict, expected)
def test_parse_proc_cmdline_dup_console(self):
example = r'BOOT_IMAGE=/boot/vmlinuz-4.4.0-72-generic root=UUID=e12e46d9-06c9-4a64-a7b3-60e24b062d90 ro console=tty1 console=ttyS0'
# FIXME: Two 'console' keywords? Using a dict for the fact value here loses info. Currently the 'last' one wins
expected = {'BOOT_IMAGE': '/boot/vmlinuz-4.4.0-72-generic',
'root': 'UUID=e12e46d9-06c9-4a64-a7b3-60e24b062d90',
'ro': True,
'console': 'ttyS0'}
fact_collector = self.collector_class()
facts_dict = fact_collector._parse_proc_cmdline(example)
# TODO: fails because we lose a 'console'
self.assertDictEqual(facts_dict, expected)
class TestDistributionFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'distribution']
valid_subsets = ['distribution']
fact_namespace = 'ansible_distribution'
collector_class = DistributionFactCollector
class TestDnsFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'dns']
valid_subsets = ['dns']
fact_namespace = 'ansible_dns'
collector_class = DnsFactCollector
class TestEnvFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'env']
valid_subsets = ['env']
fact_namespace = 'ansible_env'
collector_class = EnvFactCollector
def test_collect(self):
facts_dict = super(TestEnvFacts, self).test_collect()
self.assertIn('HOME', facts_dict['env'])
class TestFipsFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'fips']
valid_subsets = ['fips']
fact_namespace = 'ansible_fips'
collector_class = FipsFactCollector
class TestHardwareCollector(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'hardware']
valid_subsets = ['hardware']
fact_namespace = 'ansible_hardware'
collector_class = HardwareCollector
collected_facts = {'ansible_architecture': 'x86_64'}
class TestNetworkCollector(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'network']
valid_subsets = ['network']
fact_namespace = 'ansible_network'
collector_class = NetworkCollector
class TestPkgMgrFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'pkg_mgr']
valid_subsets = ['pkg_mgr']
fact_namespace = 'ansible_pkgmgr'
collector_class = PkgMgrFactCollector
collected_facts = {
"ansible_distribution": "Fedora",
"ansible_distribution_major_version": "28",
"ansible_os_family": "RedHat"
}
def test_collect(self):
module = self._mock_module()
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts)
self.assertIsInstance(facts_dict, dict)
self.assertIn('pkg_mgr', facts_dict)
class TestMacOSXPkgMgrFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'pkg_mgr']
valid_subsets = ['pkg_mgr']
fact_namespace = 'ansible_pkgmgr'
collector_class = PkgMgrFactCollector
collected_facts = {
"ansible_distribution": "MacOSX",
"ansible_distribution_major_version": "11",
"ansible_os_family": "Darwin"
}
@patch('ansible.module_utils.facts.system.pkg_mgr.os.path.exists', side_effect=lambda x: x == '/opt/homebrew/bin/brew')
def test_collect_opt_homebrew(self, p_exists):
module = self._mock_module()
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts)
self.assertIsInstance(facts_dict, dict)
self.assertIn('pkg_mgr', facts_dict)
self.assertEqual(facts_dict['pkg_mgr'], 'homebrew')
@patch('ansible.module_utils.facts.system.pkg_mgr.os.path.exists', side_effect=lambda x: x == '/usr/local/bin/brew')
def test_collect_usr_homebrew(self, p_exists):
module = self._mock_module()
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts)
self.assertIsInstance(facts_dict, dict)
self.assertIn('pkg_mgr', facts_dict)
self.assertEqual(facts_dict['pkg_mgr'], 'homebrew')
@patch('ansible.module_utils.facts.system.pkg_mgr.os.path.exists', side_effect=lambda x: x == '/opt/local/bin/port')
def test_collect_macports(self, p_exists):
module = self._mock_module()
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts)
self.assertIsInstance(facts_dict, dict)
self.assertIn('pkg_mgr', facts_dict)
self.assertEqual(facts_dict['pkg_mgr'], 'macports')
def _sanitize_os_path_apt_get(path):
if path == '/usr/bin/apt-get':
return True
else:
return False
class TestPkgMgrFactsAptFedora(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'pkg_mgr']
valid_subsets = ['pkg_mgr']
fact_namespace = 'ansible_pkgmgr'
collector_class = PkgMgrFactCollector
collected_facts = {
"ansible_distribution": "Fedora",
"ansible_distribution_major_version": "28",
"ansible_os_family": "RedHat",
"ansible_pkg_mgr": "apt"
}
@patch('ansible.module_utils.facts.system.pkg_mgr.os.path.exists', side_effect=_sanitize_os_path_apt_get)
def test_collect(self, mock_os_path_exists):
module = self._mock_module()
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts)
self.assertIsInstance(facts_dict, dict)
self.assertIn('pkg_mgr', facts_dict)
class TestOpenBSDPkgMgrFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'pkg_mgr']
valid_subsets = ['pkg_mgr']
fact_namespace = 'ansible_pkgmgr'
collector_class = OpenBSDPkgMgrFactCollector
def test_collect(self):
module = self._mock_module()
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module, collected_facts=self.collected_facts)
self.assertIsInstance(facts_dict, dict)
self.assertIn('pkg_mgr', facts_dict)
self.assertEqual(facts_dict['pkg_mgr'], 'openbsd_pkg')
class TestPlatformFactCollector(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'platform']
valid_subsets = ['platform']
fact_namespace = 'ansible_platform'
collector_class = PlatformFactCollector
class TestPythonFactCollector(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'python']
valid_subsets = ['python']
fact_namespace = 'ansible_python'
collector_class = PythonFactCollector
class TestSelinuxFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'selinux']
valid_subsets = ['selinux']
fact_namespace = 'ansible_selinux'
collector_class = SelinuxFactCollector
def test_no_selinux(self):
with patch('ansible.module_utils.facts.system.selinux.HAVE_SELINUX', False):
module = self._mock_module()
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module)
self.assertIsInstance(facts_dict, dict)
self.assertEqual(facts_dict['selinux']['status'], 'Missing selinux Python library')
return facts_dict
class TestServiceMgrFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'service_mgr']
valid_subsets = ['service_mgr']
fact_namespace = 'ansible_service_mgr'
collector_class = ServiceMgrFactCollector
# TODO: dedupe some of this test code
@patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None)
def test_no_proc1(self, mock_gfc):
# no /proc/1/comm, ps returns non-0
        # should fall back to 'service'
module = self._mock_module()
module.run_command = Mock(return_value=(1, '', 'wat'))
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module)
self.assertIsInstance(facts_dict, dict)
self.assertEqual(facts_dict['service_mgr'], 'service')
@patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None)
def test_no_proc1_ps_random_init(self, mock_gfc):
        # no /proc/1/comm, ps returns '/sbin/sys11', which we don't know
        # should end up returning 'sys11'
module = self._mock_module()
module.run_command = Mock(return_value=(0, '/sbin/sys11', ''))
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module)
self.assertIsInstance(facts_dict, dict)
self.assertEqual(facts_dict['service_mgr'], 'sys11')
@patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None)
def test_clowncar(self, mock_gfc):
        # no /proc/1/comm, ps fails, distro and system are clowncar
        # should fall back to 'service'
module = self._mock_module()
module.run_command = Mock(return_value=(1, '', ''))
collected_facts = {'distribution': 'clowncar',
'system': 'ClownCarOS'}
fact_collector = self.collector_class()
facts_dict = fact_collector.collect(module=module,
collected_facts=collected_facts)
self.assertIsInstance(facts_dict, dict)
self.assertEqual(facts_dict['service_mgr'], 'service')
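    # Hypothetical companion test (an illustration, not part of the original
    # file): with /proc/1/comm unreadable but a working ps reporting
    # 'runit-init', proc_1_map should yield 'runit'.
    @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None)
    def test_no_proc1_ps_runit(self, mock_gfc):
        module = self._mock_module()
        module.run_command = Mock(return_value=(0, 'runit-init', ''))
        fact_collector = self.collector_class()
        facts_dict = fact_collector.collect(module=module)
        self.assertEqual(facts_dict['service_mgr'], 'runit')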
# TODO: reenable these tests when we can mock more easily
# @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None)
# def test_sunos_fallback(self, mock_gfc):
# # no /proc/1/comm, ps fails, 'system' is SunOS
# # should end up return 'smf'?
# module = self._mock_module()
# # FIXME: the result here is a kluge to at least cover more of service_mgr.collect
# # TODO: remove
# # FIXME: have to force a pid for results here to get into any of the system/distro checks
# module.run_command = Mock(return_value=(1, ' 37 ', ''))
# collected_facts = {'system': 'SunOS'}
# fact_collector = self.collector_class(module=module)
# facts_dict = fact_collector.collect(collected_facts=collected_facts)
# print('facts_dict: %s' % facts_dict)
# self.assertIsInstance(facts_dict, dict)
# self.assertEqual(facts_dict['service_mgr'], 'smf')
# @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None)
# def test_aix_fallback(self, mock_gfc):
    # # no /proc/1/comm, ps fails, 'system' is AIX
    # # should end up returning 'src'?
# module = self._mock_module()
# module.run_command = Mock(return_value=(1, '', ''))
# collected_facts = {'system': 'AIX'}
# fact_collector = self.collector_class(module=module)
# facts_dict = fact_collector.collect(collected_facts=collected_facts)
# print('facts_dict: %s' % facts_dict)
# self.assertIsInstance(facts_dict, dict)
# self.assertEqual(facts_dict['service_mgr'], 'src')
# @patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value=None)
# def test_linux_fallback(self, mock_gfc):
    # # no /proc/1/comm, ps fails, 'system' is Linux
    # # should exercise the Linux fallback checks?
# module = self._mock_module()
# module.run_command = Mock(return_value=(1, ' 37 ', ''))
# collected_facts = {'system': 'Linux'}
# fact_collector = self.collector_class(module=module)
# facts_dict = fact_collector.collect(collected_facts=collected_facts)
# print('facts_dict: %s' % facts_dict)
# self.assertIsInstance(facts_dict, dict)
# self.assertEqual(facts_dict['service_mgr'], 'sdfadf')
class TestSshPubKeyFactCollector(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'ssh_pub_keys']
valid_subsets = ['ssh_pub_keys']
    fact_namespace = 'ansible_ssh_pub_keys'
collector_class = SshPubKeyFactCollector
class TestUserFactCollector(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'user']
valid_subsets = ['user']
fact_namespace = 'ansible_user'
collector_class = UserFactCollector
class TestVirtualFacts(BaseFactsTest):
__test__ = True
gather_subset = ['!all', 'virtual']
valid_subsets = ['virtual']
fact_namespace = 'ansible_virtual'
collector_class = VirtualCollector
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,349 |
omit does not work on includes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
include_role complains if parameter vars_from is given but no vars file exists
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /adm/afroebel/ansible/unix/ansible.cfg
configured module search path = [u'/adm/afroebel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.9 (default, Sep 14 2019, 20:00:08) [GCC 4.9.2]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_GATHERING(/adm/afroebel/ansible/unix/ansible.cfg) = explicit
DEFAULT_MANAGED_STR(/adm/afroebel/ansible/unix/ansible.cfg) = {file}
DEFAULT_PRIVATE_ROLE_VARS(/adm/afroebel/ansible/unix/ansible.cfg) = True
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = [u'/adm/afroebel/ansible/unix/test']
PERSISTENT_CONNECT_TIMEOUT(/adm/afroebel/ansible/unix/ansible.cfg) = 30
```
##### OS / ENVIRONMENT
Ansible controller: Debian GNU/Linux 8
managed devices: Oracle Solaris 11.3/11.4
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Here's a simple Ansible playbook file test/include_role.yml with a single include_role task:
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
```
The role consists of the following files and directories and only executes one debug task, which outputs the role name:
```
drwxr-xr-x 4 ... test/roles/test_role/solaris
drwxr-xr-x 2 ... test/roles/test_role/solaris/tasks
-rw-r--r-- 1 ... test/roles/test_role/solaris/tasks/main.yml
drwxr-xr-x 2 ... test/roles/test_role/solaris/vars
```
When executed as follows, all works well:
```
# ansible-playbook -i inventory test/include_role.yml -l test07
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
TASK [test_role/solaris : debug] **************************************************
ok: [test07] => {
"msg": "test_role/solaris"
}
PLAY RECAP ************************************************************************
test07 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Now we slightly change the Playbook by adding the vars_from parameter to the include_role statement and setting it to the default value of "main".
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
vars_from: "main"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
As we set the vars_from parameter to the default value we would expect the playbook to run the same way as without the parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Instead it complains about the vars file not being found.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
ERROR! Could not find specified file in role: vars/main
PLAY RECAP ************************************************************************
```
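Since the title concerns ``omit``, here is a minimal sketch of how the omit sentinel is expected to behave (the token value below is hypothetical and the filtering is a simplification of Ansible's internals): arguments that render to the sentinel should be dropped, but for include_role's ``*_from`` parameters the sentinel leaked through and was treated as a filename.
```python
# Minimal sketch; the token value is a made-up stand-in for the real one.
omit_token = '__omit_place_holder__abc123'

include_args = {'name': 'test_role/solaris', 'vars_from': omit_token}
resolved = {k: v for k, v in include_args.items() if v != omit_token}
assert resolved == {'name': 'test_role/solaris'}  # vars_from was dropped
```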
|
https://github.com/ansible/ansible/issues/66349
|
https://github.com/ansible/ansible/pull/74879
|
605b1a1c5c3f551835a8228149df0f15e3c4d06d
|
840825b79c83af67a87eec00ba05b7d2d63d2553
| 2020-01-10T14:14:59Z |
python
| 2021-06-03T19:07:16Z |
changelogs/fragments/66349-include-role-omit.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,349 |
omit does not work on includes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
include_role complains if parameter vars_from is given but no vars file exists
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /adm/afroebel/ansible/unix/ansible.cfg
configured module search path = [u'/adm/afroebel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.9 (default, Sep 14 2019, 20:00:08) [GCC 4.9.2]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_GATHERING(/adm/afroebel/ansible/unix/ansible.cfg) = explicit
DEFAULT_MANAGED_STR(/adm/afroebel/ansible/unix/ansible.cfg) = {file}
DEFAULT_PRIVATE_ROLE_VARS(/adm/afroebel/ansible/unix/ansible.cfg) = True
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = [u'/adm/afroebel/ansible/unix/test']
PERSISTENT_CONNECT_TIMEOUT(/adm/afroebel/ansible/unix/ansible.cfg) = 30
```
##### OS / ENVIRONMENT
Ansible controller: Debian GNU/Linux 8
managed devices: Oracle Solaris 11.3/11.4
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Here's a simple Ansible playbook file test/include_role.yml with a single include_role task:
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
```
The role consists of the following files and directories and only executes one debug task, which outputs the role name:
```
drwxr-xr-x 4 ... test/roles/test_role/solaris
drwxr-xr-x 2 ... test/roles/test_role/solaris/tasks
-rw-r--r-- 1 ... test/roles/test_role/solaris/tasks/main.yml
drwxr-xr-x 2 ... test/roles/test_role/solaris/vars
```
When executed as follows, all works well:
```
# ansible-playbook -i inventory test/include_role.yml -l test07
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
TASK [test_role/solaris : debug] **************************************************
ok: [test07] => {
"msg": "test_role/solaris"
}
PLAY RECAP ************************************************************************
test07 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Now we slightly change the Playbook by adding the vars_from parameter to the include_role statement and setting it to the default value of "main".
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
vars_from: "main"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
As we set the vars_from parameter to the default value we would expect the playbook to run the same way as without the parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Instead it complains about the vars file not being found.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
ERROR! Could not find specified file in role: vars/main
PLAY RECAP ************************************************************************
```
|
https://github.com/ansible/ansible/issues/66349
|
https://github.com/ansible/ansible/pull/74879
|
605b1a1c5c3f551835a8228149df0f15e3c4d06d
|
840825b79c83af67a87eec00ba05b7d2d63d2553
| 2020-01-10T14:14:59Z |
python
| 2021-06-03T19:07:16Z |
lib/ansible/playbook/included_file.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role_include import IncludeRole
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class IncludedFile:
def __init__(self, filename, args, vars, task, is_role=False):
self._filename = filename
self._args = args
self._vars = vars
self._task = task
self._hosts = []
self._is_role = is_role
def add_host(self, host):
if host not in self._hosts:
self._hosts.append(host)
return
raise ValueError()
def __eq__(self, other):
return (other._filename == self._filename and
other._args == self._args and
other._vars == self._vars and
other._task._uuid == self._task._uuid and
other._task._parent._uuid == self._task._parent._uuid)
def __repr__(self):
return "%s (args=%s vars=%s): %s" % (self._filename, self._args, self._vars, self._hosts)
@staticmethod
def process_include_results(results, iterator, loader, variable_manager):
included_files = []
task_vars_cache = {}
for res in results:
original_host = res._host
original_task = res._task
if original_task.action in C._ACTION_ALL_INCLUDES:
if original_task.action in C._ACTION_INCLUDE:
display.deprecated('"include" is deprecated, use include_tasks/import_tasks/import_playbook instead', "2.16")
if original_task.loop:
if 'results' not in res._result:
continue
include_results = res._result['results']
else:
include_results = [res._result]
for include_result in include_results:
# if the task result was skipped or failed, continue
if 'skipped' in include_result and include_result['skipped'] or 'failed' in include_result and include_result['failed']:
continue
cache_key = (iterator._play, original_host, original_task)
try:
task_vars = task_vars_cache[cache_key]
except KeyError:
task_vars = task_vars_cache[cache_key] = variable_manager.get_vars(play=iterator._play, host=original_host, task=original_task)
include_args = include_result.get('include_args', dict())
special_vars = {}
loop_var = include_result.get('ansible_loop_var', 'item')
index_var = include_result.get('ansible_index_var')
if loop_var in include_result:
task_vars[loop_var] = special_vars[loop_var] = include_result[loop_var]
if index_var and index_var in include_result:
task_vars[index_var] = special_vars[index_var] = include_result[index_var]
if '_ansible_item_label' in include_result:
task_vars['_ansible_item_label'] = special_vars['_ansible_item_label'] = include_result['_ansible_item_label']
if 'ansible_loop' in include_result:
task_vars['ansible_loop'] = special_vars['ansible_loop'] = include_result['ansible_loop']
if original_task.no_log and '_ansible_no_log' not in include_args:
task_vars['_ansible_no_log'] = special_vars['_ansible_no_log'] = original_task.no_log
# get search path for this task to pass to lookup plugins that may be used in pathing to
# the included file
task_vars['ansible_search_path'] = original_task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if loader.get_basedir() not in task_vars['ansible_search_path']:
task_vars['ansible_search_path'].append(loader.get_basedir())
templar = Templar(loader=loader, variables=task_vars)
if original_task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_file = None
if original_task._parent:
# handle relative includes by walking up the list of parent include
# tasks and checking the relative result to see if it exists
parent_include = original_task._parent
cumulative_path = None
while parent_include is not None:
if not isinstance(parent_include, TaskInclude):
parent_include = parent_include._parent
continue
if isinstance(parent_include, IncludeRole):
parent_include_dir = parent_include._role_path
else:
try:
parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
except AnsibleError as e:
parent_include_dir = ''
display.warning(
'Templating the path of the parent %s failed. The path to the '
'included file may not be found. '
'The error was: %s.' % (original_task.action, to_text(e))
)
if cumulative_path is not None and not os.path.isabs(cumulative_path):
cumulative_path = os.path.join(parent_include_dir, cumulative_path)
else:
cumulative_path = parent_include_dir
include_target = templar.template(include_result['include'])
if original_task._role:
new_basedir = os.path.join(original_task._role._role_path, 'tasks', cumulative_path)
candidates = [loader.path_dwim_relative(original_task._role._role_path, 'tasks', include_target),
loader.path_dwim_relative(new_basedir, 'tasks', include_target)]
for include_file in candidates:
try:
# may throw OSError
os.stat(include_file)
# or select the task file if it exists
break
except OSError:
pass
else:
include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)
if os.path.exists(include_file):
break
else:
parent_include = parent_include._parent
if include_file is None:
if original_task._role:
include_target = templar.template(include_result['include'])
include_file = loader.path_dwim_relative(
original_task._role._role_path,
'handlers' if isinstance(original_task, Handler) else 'tasks',
include_target,
is_role=True)
else:
include_file = loader.path_dwim(include_result['include'])
include_file = templar.template(include_file)
inc_file = IncludedFile(include_file, include_args, special_vars, original_task)
else:
# template the included role's name here
role_name = include_args.pop('name', include_args.pop('role', None))
if role_name is not None:
role_name = templar.template(role_name)
new_task = original_task.copy()
new_task._role_name = role_name
for from_arg in new_task.FROM_ARGS:
if from_arg in include_args:
from_key = from_arg.replace('_from', '')
new_task._from_files[from_key] = templar.template(include_args.pop(from_arg))
inc_file = IncludedFile(role_name, include_args, special_vars, new_task, is_role=True)
idx = 0
orig_inc_file = inc_file
while 1:
try:
pos = included_files[idx:].index(orig_inc_file)
# pos is relative to idx since we are slicing
# use idx + pos due to relative indexing
inc_file = included_files[idx + pos]
except ValueError:
included_files.append(orig_inc_file)
inc_file = orig_inc_file
try:
inc_file.add_host(original_host)
except ValueError:
# The host already exists for this include, advance forward, this is a new include
idx += pos + 1
else:
break
return included_files
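# Illustrative sketch (an assumption, not this file's code): issue 66349's
# 'omit' leak surfaces in the FROM_ARGS loop above. A fix in the spirit of
# the linked PR would compare each templated value against the run's omit
# token and skip omitted entries:
#
#     omit_token = task_vars.get('omit')
#     for from_arg in new_task.FROM_ARGS:
#         if from_arg in include_args:
#             from_key = from_arg.replace('_from', '')
#             from_value = templar.template(include_args.pop(from_arg))
#             if from_value != omit_token:
#                 new_task._from_files[from_key] = from_value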
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,349 |
omit does not work on includes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
include_role complains if parameter vars_from is given but no vars file exists
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /adm/afroebel/ansible/unix/ansible.cfg
configured module search path = [u'/adm/afroebel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.9 (default, Sep 14 2019, 20:00:08) [GCC 4.9.2]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_GATHERING(/adm/afroebel/ansible/unix/ansible.cfg) = explicit
DEFAULT_MANAGED_STR(/adm/afroebel/ansible/unix/ansible.cfg) = {file}
DEFAULT_PRIVATE_ROLE_VARS(/adm/afroebel/ansible/unix/ansible.cfg) = True
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = [u'/adm/afroebel/ansible/unix/test']
PERSISTENT_CONNECT_TIMEOUT(/adm/afroebel/ansible/unix/ansible.cfg) = 30
```
##### OS / ENVIRONMENT
Ansible controller: Debian GNU/Linux 8
managed devices: Oracle Solaris 11.3/11.4
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Here's a simple Ansible playbook file test/include_role.yml with a single include_role task:
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
```
The role consists of the following files and directories and only executes one debug task, which outputs the role name:
```
drwxr-xr-x 4 ... test/roles/test_role/solaris
drwxr-xr-x 2 ... test/roles/test_role/solaris/tasks
-rw-r--r-- 1 ... test/roles/test_role/solaris/tasks/main.yml
drwxr-xr-x 2 ... test/roles/test_role/solaris/vars
```
When executed as follows, all works well:
```
# ansible-playbook -i inventory test/include_role.yml -l test07
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
TASK [test_role/solaris : debug] **************************************************
ok: [test07] => {
"msg": "test_role/solaris"
}
PLAY RECAP ************************************************************************
test07 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Now we slightly change the Playbook by adding the vars_from parameter to the include_role statement and setting it to the default value of "main".
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
vars_from: "main"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
As we set the vars_from parameter to the default value we would expect the playbook to run the same way as without the parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Instead it complains about the vars file not being found.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
ERROR! Could not find specified file in role: vars/main
PLAY RECAP ************************************************************************
```
|
https://github.com/ansible/ansible/issues/66349
|
https://github.com/ansible/ansible/pull/74879
|
605b1a1c5c3f551835a8228149df0f15e3c4d06d
|
840825b79c83af67a87eec00ba05b7d2d63d2553
| 2020-01-10T14:14:59Z |
python
| 2021-06-03T19:07:16Z |
test/integration/targets/include_import/include_role_omit/playbook.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,349 |
omit does not work on includes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
include_role complains if parameter vars_from is given but no vars file exists
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /adm/afroebel/ansible/unix/ansible.cfg
configured module search path = [u'/adm/afroebel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.9 (default, Sep 14 2019, 20:00:08) [GCC 4.9.2]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_GATHERING(/adm/afroebel/ansible/unix/ansible.cfg) = explicit
DEFAULT_MANAGED_STR(/adm/afroebel/ansible/unix/ansible.cfg) = {file}
DEFAULT_PRIVATE_ROLE_VARS(/adm/afroebel/ansible/unix/ansible.cfg) = True
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = [u'/adm/afroebel/ansible/unix/test']
PERSISTENT_CONNECT_TIMEOUT(/adm/afroebel/ansible/unix/ansible.cfg) = 30
```
##### OS / ENVIRONMENT
Ansible controller: Debian GNU/Linux 8
managed devices: Oracle Solaris 11.3/11.4
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Here's a simple Ansible playbook file test/include_role.yml with a single include_role task:
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
```
The role consists of the following files and directories and only executes one debug task, which outputs the role name:
```
drwxr-xr-x 4 ... test/roles/test_role/solaris
drwxr-xr-x 2 ... test/roles/test_role/solaris/tasks
-rw-r--r-- 1 ... test/roles/test_role/solaris/tasks/main.yml
drwxr-xr-x 2 ... test/roles/test_role/solaris/vars
```
When executed as follows, all works well:
```
# ansible-playbook -i inventory test/include_role.yml -l test07
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
TASK [test_role/solaris : debug] **************************************************
ok: [test07] => {
"msg": "test_role/solaris"
}
PLAY RECAP ************************************************************************
test07 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Now we slightly change the Playbook by adding the vars_from parameter to the include_role statement and setting it to the default value of "main".
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
vars_from: "main"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
As we set the vars_from parameter to the default value we would expect the playbook to run the same way as without the parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Instead it complains about the vars file not being found.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
ERROR! Could not find specified file in role: vars/main
PLAY RECAP ************************************************************************
```
|
https://github.com/ansible/ansible/issues/66349
|
https://github.com/ansible/ansible/pull/74879
|
605b1a1c5c3f551835a8228149df0f15e3c4d06d
|
840825b79c83af67a87eec00ba05b7d2d63d2553
| 2020-01-10T14:14:59Z |
python
| 2021-06-03T19:07:16Z |
test/integration/targets/include_import/include_role_omit/roles/foo/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,349 |
omit does not work on includes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
include_role complains if parameter vars_from is given but no vars file exists
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /adm/afroebel/ansible/unix/ansible.cfg
configured module search path = [u'/adm/afroebel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.9 (default, Sep 14 2019, 20:00:08) [GCC 4.9.2]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_GATHERING(/adm/afroebel/ansible/unix/ansible.cfg) = explicit
DEFAULT_MANAGED_STR(/adm/afroebel/ansible/unix/ansible.cfg) = {file}
DEFAULT_PRIVATE_ROLE_VARS(/adm/afroebel/ansible/unix/ansible.cfg) = True
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = [u'/adm/afroebel/ansible/unix/test']
PERSISTENT_CONNECT_TIMEOUT(/adm/afroebel/ansible/unix/ansible.cfg) = 30
```
##### OS / ENVIRONMENT
Ansible controller: Debian GNU/Linux 8
managed devices: Oracle Solaris 11.3/11.4
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Here's a simple Ansible playbook file test/include_role.yml with a single include_role task:
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
```
The role consists of the following files and directories and only executes one debug task, which outputs the role name:
```
drwxr-xr-x 4 ... test/roles/test_role/solaris
drwxr-xr-x 2 ... test/roles/test_role/solaris/tasks
-rw-r--r-- 1 ... test/roles/test_role/solaris/tasks/main.yml
drwxr-xr-x 2 ... test/roles/test_role/solaris/vars
```
When executed as follows, all works well:
```
# ansible-playbook -i inventory test/include_role.yml -l test07
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
TASK [test_role/solaris : debug] **************************************************
ok: [test07] => {
"msg": "test_role/solaris"
}
PLAY RECAP ************************************************************************
test07 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Now we slightly change the Playbook by adding the vars_from parameter to the include_role statement and setting it to the default value of "main".
```yaml
---
- name: "Test Playbook"
hosts: "os_solaris"
tasks:
- include_role:
name: "test_role/solaris"
vars_from: "main"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
As we set the vars_from parameter to the default value we would expect the playbook to run the same way as without the parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Instead it complains about the vars file not being found.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Test Playbook] **************************************************************
TASK [include_role : test_role/solaris] *******************************************
ERROR! Could not find specified file in role: vars/main
PLAY RECAP ************************************************************************
```
|
https://github.com/ansible/ansible/issues/66349
|
https://github.com/ansible/ansible/pull/74879
|
605b1a1c5c3f551835a8228149df0f15e3c4d06d
|
840825b79c83af67a87eec00ba05b7d2d63d2553
| 2020-01-10T14:14:59Z |
python
| 2021-06-03T19:07:16Z |
test/integration/targets/include_import/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=./roles
function gen_task_files() {
for i in $(printf "%03d " {1..39}); do
echo -e "- name: Hello Message\n debug:\n msg: Task file ${i}" > "tasks/hello/tasks-file-${i}.yml"
done
}
## Adhoc
ansible -m include_role -a name=role1 localhost
## Import (static)
# Playbook
test "$(ANSIBLE_DEPRECATION_WARNINGS=1 ansible-playbook -i ../../inventory playbook/test_import_playbook.yml "$@" 2>&1 | grep -c '\[DEPRECATION WARNING\]: Additional parameters in import_playbook')" = 1
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook_tags.yml -i inventory "$@" --tags canary1,canary22,validate --skip-tags skipme
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_import_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_import_role.yml -i inventory "$@"
## Include (dynamic)
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_include_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_include_role.yml -i inventory "$@"
# https://github.com/ansible/ansible/issues/68515
ansible-playbook -v role/test_include_role_vars_from.yml 2>&1 | tee test_include_role_vars_from.out
test "$(grep -E -c 'Expected a string for vars_from but got' test_include_role_vars_from.out)" = 1
## Max Recursion Depth
# https://github.com/ansible/ansible/issues/23609
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion_fqcn.yml -i inventory "$@"
## Nested tasks
# https://github.com/ansible/ansible/issues/34782
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks_fqcn.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks_fqcn.yml -i inventory "$@"
## Tons of top level include_tasks
# https://github.com/ansible/ansible/issues/36053
# Fixed by https://github.com/ansible/ansible/pull/36075
gen_task_files
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks_fqcn.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks_fqcn.yml -i inventory "$@"
rm -f tasks/hello/*.yml
# Included tasks should inherit attrs from non-dynamic blocks in parent chain
# https://github.com/ansible/ansible/pull/38827
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance_fqcn.yml -i inventory "$@"
# undefined_var
ANSIBLE_STRATEGY='linear' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
# include_ + apply (explicit inheritance)
ANSIBLE_STRATEGY='linear' ansible-playbook apply/include_apply.yml -i inventory "$@" --tags foo
set +e
OUT=$(ANSIBLE_STRATEGY='linear' ansible-playbook apply/import_apply.yml -i inventory "$@" --tags foo 2>&1 | grep 'ERROR! Invalid options for import_tasks: apply')
set -e
if [[ -z "$OUT" ]]; then
echo "apply on import_tasks did not cause error"
exit 1
fi
ANSIBLE_STRATEGY='linear' ANSIBLE_PLAYBOOK_VARS_ROOT=all ansible-playbook apply/include_apply_65710.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ANSIBLE_PLAYBOOK_VARS_ROOT=all ansible-playbook apply/include_apply_65710.yml -i inventory "$@"
# Test that duplicate items in loop are not deduped
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ansible-playbook public_exposure/playbook.yml -i inventory "$@"
ansible-playbook public_exposure/no_bleeding.yml -i inventory "$@"
ansible-playbook public_exposure/no_overwrite_roles.yml -i inventory "$@"
# https://github.com/ansible/ansible/pull/48068
ANSIBLE_HOST_PATTERN_MISMATCH=warning ansible-playbook run_once/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/48936
ansible-playbook -v handler_addressing/playbook.yml 2>&1 | tee test_handler_addressing.out
test "$(grep -E -c 'include handler task|ERROR! The requested handler '"'"'do_import'"'"' was not found' test_handler_addressing.out)" = 2
# https://github.com/ansible/ansible/issues/49969
ansible-playbook -v parent_templating/playbook.yml 2>&1 | tee test_parent_templating.out
test "$(grep -E -c 'Templating the path of the parent include_tasks failed.' test_parent_templating.out)" = 0
# https://github.com/ansible/ansible/issues/54618
ansible-playbook test_loop_var_bleed.yaml "$@"
# https://github.com/ansible/ansible/issues/56580
ansible-playbook valid_include_keywords/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/64902
ansible-playbook tasks/test_allow_single_role_dup.yml 2>&1 | tee test_allow_single_role_dup.out
test "$(grep -c 'ok=3' test_allow_single_role_dup.out)" = 1
# https://github.com/ansible/ansible/issues/66764
ANSIBLE_HOST_PATTERN_MISMATCH=error ansible-playbook empty_group_warning/playbook.yml
ansible-playbook test_include_loop.yml "$@"
ansible-playbook test_include_loop_fqcn.yml "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,875 |
Unnecessary trailing space in ansible --version output
|
### Summary
`ansible --version` puts a trailing space at the end of its first output line. The trailing space was introduced in #72287
This breaks ansible version detection in Vagrant. Though it would be possible to fix the version detection in Vagrant by trimming whitespace before comparing strings, I assume the addition of the trailing space was an accident since the PR doesn't indicate any intent to do so.
Since I already identified the lines which introduced this bug, I'll be happy to create a PR to remove the trailing space as well. Just waiting for your confirmation that this is indeed a bug to fix and not intended behavior.
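For context, the first banner line is built by `version()` in `lib/ansible/cli/arguments/option_helpers.py` (included further down in this report), where the format string `"{0} [core {1}] "` ends in a space. A minimal sketch of the shape of a fix — `format_banner` is a hypothetical helper for illustration, not the upstream patch:
```python
def format_banner(prog, core_version, gitinfo=''):
    # no trailing space; the joining space is only added when there is a git suffix
    line = "{0} [core {1}]".format(prog, core_version)
    if gitinfo:
        line = "{0} {1}".format(line, gitinfo)
    return line

assert format_banner('ansible', '2.11.1') == 'ansible [core 2.11.1]'
```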
### Issue Type
Bug Report
### Component Name
ansible --version
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = /home/fs-dev/Development/VagrantVMs/ansible.cfg
configured module search path = ['/home/fs-dev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/fs-dev/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/fs-dev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/fs-dev/.local/bin/ansible
python version = 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CALLBACKS_ENABLED(/home/fs-dev/Development/VagrantVMs/ansible.cfg) = ['profile_roles', 'profile_tasks', 'timer']
```
### OS / Environment
Ubuntu 20.04.2 LTS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible --version
```
The first line of output ends with a trailing space
### Expected Results
`ansible --version` should output its version without trailing space:
`ansible [core 2.11.1]`
### Actual Results
```console
ansible [core 2.11.1]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74875
|
https://github.com/ansible/ansible/pull/74880
|
840825b79c83af67a87eec00ba05b7d2d63d2553
|
8f82e6327f57d351a0016f44ef47b651393148d3
| 2021-06-01T16:30:30Z |
python
| 2021-06-03T19:07:54Z |
changelogs/fragments/74875_ansible_version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,875 |
Unnecessary trailing space in ansible --version output
|
### Summary
`ansible --version` puts a trailing space at the end of its first output line. The trailing space was introduced in #72287
This breaks ansible version detection in Vagrant. Though it would be possible to fix the version detection in Vagrant by trimming whitespace before comparing strings, I assume the addition of the trailing space was an accident since the PR doesn't indicate any intent to do so.
Since I already identified the lines which introduced this bug, I'll be happy to create a PR to remove the trailing space as well. Just waiting for your confirmation that this is indeed a bug to fix and not intended behavior.
### Issue Type
Bug Report
### Component Name
ansible --version
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = /home/fs-dev/Development/VagrantVMs/ansible.cfg
configured module search path = ['/home/fs-dev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/fs-dev/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/fs-dev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/fs-dev/.local/bin/ansible
python version = 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CALLBACKS_ENABLED(/home/fs-dev/Development/VagrantVMs/ansible.cfg) = ['profile_roles', 'profile_tasks', 'timer']
```
### OS / Environment
Ubuntu 20.04.2 LTS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible --version
```
The first line of output ends with a trailing space
### Expected Results
`ansible --version` should output its version without trailing space:
`ansible [core 2.11.1]`
### Actual Results
```console
ansible [core 2.11.1]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74875
|
https://github.com/ansible/ansible/pull/74880
|
840825b79c83af67a87eec00ba05b7d2d63d2553
|
8f82e6327f57d351a0016f44ef47b651393148d3
| 2021-06-01T16:30:30Z |
python
| 2021-06-03T19:07:54Z |
lib/ansible/cli/arguments/option_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import operator
import argparse
import os
import os.path
import sys
import time
from jinja2 import __version__ as j2_version
import ansible
from ansible import constants as C
from ansible.module_utils._text import to_native
from ansible.module_utils.common.yaml import HAS_LIBYAML, yaml_load
from ansible.release import __version__
from ansible.utils.path import unfrackpath
#
# Special purpose OptionParsers
#
class SortingHelpFormatter(argparse.HelpFormatter):
def add_arguments(self, actions):
actions = sorted(actions, key=operator.attrgetter('option_strings'))
super(SortingHelpFormatter, self).add_arguments(actions)
class AnsibleVersion(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
ansible_version = to_native(version(getattr(parser, 'prog')))
print(ansible_version)
parser.exit()
class UnrecognizedArgument(argparse.Action):
def __init__(self, option_strings, dest, const=True, default=None, required=False, help=None, metavar=None, nargs=0):
super(UnrecognizedArgument, self).__init__(option_strings=option_strings, dest=dest, nargs=nargs, const=const,
default=default, required=required, help=help)
def __call__(self, parser, namespace, values, option_string=None):
parser.error('unrecognized arguments: %s' % option_string)
class PrependListAction(argparse.Action):
"""A near clone of ``argparse._AppendAction``, but designed to prepend list values
instead of appending.
"""
def __init__(self, option_strings, dest, nargs=None, const=None, default=None, type=None,
choices=None, required=False, help=None, metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != argparse.OPTIONAL:
raise ValueError('nargs must be %r to supply const' % argparse.OPTIONAL)
super(PrependListAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=nargs,
const=const,
default=default,
type=type,
choices=choices,
required=required,
help=help,
metavar=metavar
)
def __call__(self, parser, namespace, values, option_string=None):
items = copy.copy(ensure_value(namespace, self.dest, []))
items[0:0] = values
setattr(namespace, self.dest, items)
def ensure_value(namespace, name, value):
if getattr(namespace, name, None) is None:
setattr(namespace, name, value)
return getattr(namespace, name)
#
# Callbacks to validate and normalize Options
#
def unfrack_path(pathsep=False):
"""Turn an Option's data into a single path in Ansible locations"""
def inner(value):
if pathsep:
return [unfrackpath(x) for x in value.split(os.pathsep) if x]
if value == '-':
return value
return unfrackpath(value)
return inner
def _git_repo_info(repo_path):
""" returns a string containing git branch, commit id and commit date """
result = None
if os.path.exists(repo_path):
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
with open(repo_path) as f:
gitdir = yaml_load(f).get('gitdir')
                # The gitdir value in the .git file may be an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path[:-4], gitdir)
except (IOError, AttributeError):
return ''
with open(os.path.join(repo_path, "HEAD")) as f:
line = f.readline().rstrip("\n")
if line.startswith("ref:"):
branch_path = os.path.join(repo_path, line[5:])
else:
branch_path = None
if branch_path and os.path.exists(branch_path):
branch = '/'.join(line.split('/')[2:])
with open(branch_path) as f:
commit = f.readline()[:10]
else:
# detached HEAD
commit = line[:10]
branch = 'detached HEAD'
branch_path = os.path.join(repo_path, "HEAD")
date = time.localtime(os.stat(branch_path).st_mtime)
if time.daylight == 0:
offset = time.timezone
else:
offset = time.altzone
result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), int(offset / -36))
else:
result = ''
return result
def _gitinfo():
basedir = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..', '..', '..'))
repo_path = os.path.join(basedir, '.git')
return _git_repo_info(repo_path)
def version(prog=None):
""" return ansible version """
if prog:
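        # NOTE: the trailing space inside this format string is the bug reported in #74875 above;
        # "{0} [core {1}]" (no trailing space) avoids it when _gitinfo() returns an empty string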
result = ["{0} [core {1}] ".format(prog, __version__)]
else:
result = [__version__]
gitinfo = _gitinfo()
if gitinfo:
result[0] = "{0} {1}".format(result[0], gitinfo)
result.append(" config file = %s" % C.CONFIG_FILE)
if C.DEFAULT_MODULE_PATH is None:
cpath = "Default w/o overrides"
else:
cpath = C.DEFAULT_MODULE_PATH
result.append(" configured module search path = %s" % cpath)
result.append(" ansible python module location = %s" % ':'.join(ansible.__path__))
result.append(" ansible collection location = %s" % ':'.join(C.COLLECTIONS_PATHS))
result.append(" executable location = %s" % sys.argv[0])
result.append(" python version = %s" % ''.join(sys.version.splitlines()))
result.append(" jinja version = %s" % j2_version)
result.append(" libyaml = %s" % HAS_LIBYAML)
return "\n".join(result)
#
# Functions to add pre-canned options to an OptionParser
#
def create_base_parser(prog, usage="", desc=None, epilog=None):
"""
Create an options parser for all ansible scripts
"""
# base opts
parser = argparse.ArgumentParser(
prog=prog,
formatter_class=SortingHelpFormatter,
epilog=epilog,
description=desc,
conflict_handler='resolve',
)
version_help = "show program's version number, config file location, configured module search path," \
" module location, executable location and exit"
parser.add_argument('--version', action=AnsibleVersion, nargs=0, help=version_help)
add_verbosity_options(parser)
return parser
def add_verbosity_options(parser):
"""Add options for verbosity"""
parser.add_argument('-v', '--verbose', dest='verbosity', default=C.DEFAULT_VERBOSITY, action="count",
help="verbose mode (-vvv for more, -vvvv to enable connection debugging)")
def add_async_options(parser):
"""Add options for commands which can launch async tasks"""
parser.add_argument('-P', '--poll', default=C.DEFAULT_POLL_INTERVAL, type=int, dest='poll_interval',
help="set the poll interval if using -B (default=%s)" % C.DEFAULT_POLL_INTERVAL)
parser.add_argument('-B', '--background', dest='seconds', type=int, default=0,
help='run asynchronously, failing after X seconds (default=N/A)')
def add_basedir_options(parser):
"""Add options for commands which can set a playbook basedir"""
parser.add_argument('--playbook-dir', default=C.config.get_config_value('PLAYBOOK_DIR'), dest='basedir', action='store',
help="Since this tool does not use playbooks, use this as a substitute playbook directory."
"This sets the relative path for many features including roles/ group_vars/ etc.",
type=unfrack_path())
def add_check_options(parser):
"""Add options for commands which can run with diagnostic information of tasks"""
parser.add_argument("-C", "--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
parser.add_argument('--syntax-check', dest='syntax', action='store_true',
help="perform a syntax check on the playbook, but do not execute it")
parser.add_argument("-D", "--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those"
" files; works great with --check")
def add_connect_options(parser):
"""Add options for commands which need to connection to other hosts"""
connect_group = parser.add_argument_group("Connection Options", "control as whom and how to connect to hosts")
connect_group.add_argument('-k', '--ask-pass', default=C.DEFAULT_ASK_PASS, dest='ask_pass', action='store_true',
help='ask for connection password')
connect_group.add_argument('--private-key', '--key-file', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
help='use this file to authenticate the connection', type=unfrack_path())
connect_group.add_argument('-u', '--user', default=C.DEFAULT_REMOTE_USER, dest='remote_user',
help='connect as this user (default=%s)' % C.DEFAULT_REMOTE_USER)
connect_group.add_argument('-c', '--connection', dest='connection', default=C.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT)
connect_group.add_argument('-T', '--timeout', default=C.DEFAULT_TIMEOUT, type=int, dest='timeout',
help="override the connection timeout in seconds (default=%s)" % C.DEFAULT_TIMEOUT)
# ssh only
connect_group.add_argument('--ssh-common-args', default='', dest='ssh_common_args',
help="specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)")
connect_group.add_argument('--sftp-extra-args', default='', dest='sftp_extra_args',
help="specify extra arguments to pass to sftp only (e.g. -f, -l)")
connect_group.add_argument('--scp-extra-args', default='', dest='scp_extra_args',
help="specify extra arguments to pass to scp only (e.g. -l)")
connect_group.add_argument('--ssh-extra-args', default='', dest='ssh_extra_args',
help="specify extra arguments to pass to ssh only (e.g. -R)")
parser.add_argument_group(connect_group)
def add_fork_options(parser):
"""Add options for commands that can fork worker processes"""
parser.add_argument('-f', '--forks', dest='forks', default=C.DEFAULT_FORKS, type=int,
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
def add_inventory_options(parser):
"""Add options for commands that utilize inventory"""
parser.add_argument('-i', '--inventory', '--inventory-file', dest='inventory', action="append",
help="specify inventory host path or comma separated host list. --inventory-file is deprecated")
parser.add_argument('--list-hosts', dest='listhosts', action='store_true',
help='outputs a list of matching hosts; does not execute anything else')
parser.add_argument('-l', '--limit', default=C.DEFAULT_SUBSET, dest='subset',
help='further limit selected hosts to an additional pattern')
def add_meta_options(parser):
"""Add options for commands which can launch meta tasks from the command line"""
parser.add_argument('--force-handlers', default=C.DEFAULT_FORCE_HANDLERS, dest='force_handlers', action='store_true',
help="run handlers even if a task fails")
parser.add_argument('--flush-cache', dest='flush_cache', action='store_true',
help="clear the fact cache for every host in inventory")
def add_module_options(parser):
"""Add options for commands that load modules"""
module_path = C.config.get_configuration_definition('DEFAULT_MODULE_PATH').get('default', '')
parser.add_argument('-M', '--module-path', dest='module_path', default=None,
help="prepend colon-separated path(s) to module library (default=%s)" % module_path,
type=unfrack_path(pathsep=True), action=PrependListAction)
def add_output_options(parser):
"""Add options for commands which can change their output"""
parser.add_argument('-o', '--one-line', dest='one_line', action='store_true',
help='condense output')
parser.add_argument('-t', '--tree', dest='tree', default=None,
help='log output to this directory')
def add_runas_options(parser):
"""
Add options for commands which can run tasks as another user
Note that this includes the options from add_runas_prompt_options(). Only one of these
functions should be used.
"""
runas_group = parser.add_argument_group("Privilege Escalation Options", "control how and which user you become as on target hosts")
# consolidated privilege escalation (become)
runas_group.add_argument("-b", "--become", default=C.DEFAULT_BECOME, action="store_true", dest='become',
help="run operations with become (does not imply password prompting)")
runas_group.add_argument('--become-method', dest='become_method', default=C.DEFAULT_BECOME_METHOD,
help='privilege escalation method to use (default=%s)' % C.DEFAULT_BECOME_METHOD +
', use `ansible-doc -t become -l` to list valid choices.')
runas_group.add_argument('--become-user', default=None, dest='become_user', type=str,
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
add_runas_prompt_options(parser, runas_group=runas_group)
def add_runas_prompt_options(parser, runas_group=None):
"""
Add options for commands which need to prompt for privilege escalation credentials
Note that add_runas_options() includes these options already. Only one of the two functions
should be used.
"""
if runas_group is None:
runas_group = parser.add_argument_group("Privilege Escalation Options",
"control how and which user you become as on target hosts")
runas_group.add_argument('-K', '--ask-become-pass', dest='become_ask_pass', action='store_true',
default=C.DEFAULT_BECOME_ASK_PASS,
help='ask for privilege escalation password')
parser.add_argument_group(runas_group)
def add_runtask_options(parser):
"""Add options for commands that run a task"""
parser.add_argument('-e', '--extra-vars', dest="extra_vars", action="append",
help="set additional variables as key=value or YAML/JSON, if filename prepend with @", default=[])
def add_tasknoplay_options(parser):
"""Add options for commands that run a task w/o a defined play"""
parser.add_argument('--task-timeout', type=int, dest="task_timeout", action="store", default=C.TASK_TIMEOUT,
help="set task timeout limit in seconds, must be positive integer.")
def add_subset_options(parser):
"""Add options for commands which can run a subset of tasks"""
parser.add_argument('-t', '--tags', dest='tags', default=C.TAGS_RUN, action='append',
help="only run plays and tasks tagged with these values")
parser.add_argument('--skip-tags', dest='skip_tags', default=C.TAGS_SKIP, action='append',
help="only run plays and tasks whose tags do not match these values")
def add_vault_options(parser):
"""Add options for loading vault files"""
parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str,
help='the vault identity to use')
base_group = parser.add_mutually_exclusive_group()
base_group.add_argument('--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files',
help="vault password file", type=unfrack_path(), action='append')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,875 |
Unnecessary trailing space in ansible --version output
|
### Summary
`ansible --version` puts a trailing space at the end of its first output line. The trailing space was introduced in #72287
This breaks ansible version detection in Vagrant. Though it would be possible to fix the version detection in Vagrant by trimming whitespace before comparing strings, I assume the addition of the trailing space was an accident since the PR doesn't indicate any intent to do so.
Since I already identified the lines which introduced this bug, I'll be happy to create a PR to remove the trailing space as well. Just waiting for your confirmation that this is indeed a bug to fix and not intended behavior.
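A quick regression check — a hedged sketch using the `version()` helper from `option_helpers.py` shown earlier, not part of the recorded test file:
```python
from ansible.cli.arguments.option_helpers import version

first_line = version('ansible').splitlines()[0]
# fails on affected versions because of the trailing space
assert first_line == first_line.rstrip(), 'version banner ends in whitespace'
```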
### Issue Type
Bug Report
### Component Name
ansible --version
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = /home/fs-dev/Development/VagrantVMs/ansible.cfg
configured module search path = ['/home/fs-dev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/fs-dev/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/fs-dev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/fs-dev/.local/bin/ansible
python version = 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CALLBACKS_ENABLED(/home/fs-dev/Development/VagrantVMs/ansible.cfg) = ['profile_roles', 'profile_tasks', 'timer']
```
### OS / Environment
Ubuntu 20.04.2 LTS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible --version
```
The first line of output ends with a trailing space
### Expected Results
`ansible --version` should output its version without trailing space:
`ansible [core 2.11.1]`
### Actual Results
```console
ansible [core 2.11.1]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74875
|
https://github.com/ansible/ansible/pull/74880
|
840825b79c83af67a87eec00ba05b7d2d63d2553
|
8f82e6327f57d351a0016f44ef47b651393148d3
| 2021-06-01T16:30:30Z |
python
| 2021-06-03T19:07:54Z |
test/units/cli/test_adhoc.py
|
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
import re
from ansible import context
from ansible.cli.adhoc import AdHocCLI, display
from ansible.errors import AnsibleOptionsError
def test_parse():
""" Test adhoc parse"""
with pytest.raises(ValueError, match='A non-empty list for args is required'):
adhoc_cli = AdHocCLI([])
adhoc_cli = AdHocCLI(['ansibletest'])
with pytest.raises(SystemExit):
adhoc_cli.parse()
def test_with_command():
""" Test simple adhoc command"""
module_name = 'command'
adhoc_cli = AdHocCLI(args=['ansible', '-m', module_name, '-vv', 'localhost'])
adhoc_cli.parse()
assert context.CLIARGS['module_name'] == module_name
assert display.verbosity == 2
def test_simple_command():
""" Test valid command and its run"""
adhoc_cli = AdHocCLI(['/bin/ansible', '-m', 'command', 'localhost', '-a', 'echo "hi"'])
adhoc_cli.parse()
ret = adhoc_cli.run()
assert ret == 0
def test_no_argument():
""" Test no argument command"""
adhoc_cli = AdHocCLI(['/bin/ansible', '-m', 'command', 'localhost'])
adhoc_cli.parse()
with pytest.raises(AnsibleOptionsError) as exec_info:
adhoc_cli.run()
assert 'No argument passed to command module' == str(exec_info.value)
def test_did_you_mean_playbook():
""" Test adhoc with yml file as argument parameter"""
adhoc_cli = AdHocCLI(['/bin/ansible', '-m', 'command', 'localhost.yml'])
adhoc_cli.parse()
with pytest.raises(AnsibleOptionsError) as exec_info:
adhoc_cli.run()
assert 'No argument passed to command module (did you mean to run ansible-playbook?)' == str(exec_info.value)
def test_play_ds_positive():
""" Test _play_ds"""
adhoc_cli = AdHocCLI(args=['/bin/ansible', 'localhost', '-m', 'command'])
adhoc_cli.parse()
ret = adhoc_cli._play_ds('command', 10, 2)
assert ret['name'] == 'Ansible Ad-Hoc'
assert ret['tasks'] == [{'action': {'module': 'command', 'args': {}}, 'async_val': 10, 'poll': 2, 'timeout': 0}]
def test_play_ds_with_include_role():
""" Test include_role command with poll"""
adhoc_cli = AdHocCLI(args=['/bin/ansible', 'localhost', '-m', 'include_role'])
adhoc_cli.parse()
ret = adhoc_cli._play_ds('include_role', None, 2)
assert ret['name'] == 'Ansible Ad-Hoc'
assert ret['gather_facts'] == 'no'
def test_run_import_playbook():
""" Test import_playbook which is not allowed with ad-hoc command"""
import_playbook = 'import_playbook'
adhoc_cli = AdHocCLI(args=['/bin/ansible', '-m', import_playbook, 'localhost'])
adhoc_cli.parse()
with pytest.raises(AnsibleOptionsError) as exec_info:
adhoc_cli.run()
assert context.CLIARGS['module_name'] == import_playbook
assert "'%s' is not a valid action for ad-hoc commands" % import_playbook == str(exec_info.value)
def test_run_no_extra_vars():
adhoc_cli = AdHocCLI(args=['/bin/ansible', 'localhost', '-e'])
with pytest.raises(SystemExit) as exec_info:
adhoc_cli.parse()
assert exec_info.value.code == 2
def test_ansible_version(capsys, mocker):
adhoc_cli = AdHocCLI(args=['/bin/ansible', '--version'])
with pytest.raises(SystemExit):
adhoc_cli.run()
version = capsys.readouterr()
try:
version_lines = version.out.splitlines()
except AttributeError:
        # Python 2.6 does not return a named tuple, so get the first item
version_lines = version[0].splitlines()
assert len(version_lines) == 9, 'Incorrect number of lines in "ansible --version" output'
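    # note: re.match only anchors at the start and this pattern is unanchored at the end,
    # so a trailing space on the first line (issue #74875) still matches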
assert re.match(r'ansible \[core [0-9.a-z]+\]', version_lines[0]), 'Incorrect ansible version line in "ansible --version" output'
assert re.match(' config file = .*$', version_lines[1]), 'Incorrect config file line in "ansible --version" output'
assert re.match(' configured module search path = .*$', version_lines[2]), 'Incorrect module search path in "ansible --version" output'
assert re.match(' ansible python module location = .*$', version_lines[3]), 'Incorrect python module location in "ansible --version" output'
assert re.match(' ansible collection location = .*$', version_lines[4]), 'Incorrect collection location in "ansible --version" output'
    assert re.match(' executable location = .*$', version_lines[5]), 'Incorrect executable location in "ansible --version" output'
assert re.match(' python version = .*$', version_lines[6]), 'Incorrect python version in "ansible --version" output'
assert re.match(' jinja version = .*$', version_lines[7]), 'Incorrect jinja version in "ansible --version" output'
assert re.match(' libyaml = .*$', version_lines[8]), 'Missing libyaml in "ansible --version" output'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,946 |
apt_key module with http_proxy: gpg: Note: '--keyserver-options' is not considered an option
|
### Summary
When I run a ``apt_key`` task with ``http_proxy`` environment var set, I get error
```
gpg: Note: '--keyserver-options' is not considered an option
```
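The failing command (see Actual Results below) appends `--keyserver-options` after `--recv <id>`, and gpg treats everything following `--recv` as key IDs. The changelog fragment name (`74949-apt_key_recv_last_arg`) suggests the fix keeps `--recv <id>` as the last argument; a hedged sketch of that reordering (`add_http_proxy_before_recv` is illustrative, not the upstream patch):
```python
def add_http_proxy_before_recv(cmd, proxy):
    # insert the keyserver option before '--recv <id>' so the key id stays last
    head, sep, key_id = cmd.rpartition(' --recv ')
    return '%s --keyserver-options http-proxy=%s%s%s' % (head, proxy, sep, key_id)

cmd = 'apt-key adv --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv C072A32983A736CF'
assert add_http_proxy_before_recv(cmd, 'http://proxy:3128').endswith('--recv C072A32983A736CF')
```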
### Issue Type
Bug Report
### Component Name
apt_key
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0]
jinja version = 2.11.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible-pull.log
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = yaml
```
### OS / Environment
Ubuntu 20.04 LTS up to date
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: add geogebra key
apt_key:
id: '{{ item }}'
keyserver: hkp://keyserver.ubuntu.com:80
state: present
loop:
- C072A32983A736CF
```
in ``/etc/environment``
```
http_proxy=http://prodsif-pack.infra.domain.fr:3128
```
### Expected Results
Working task
### Actual Results
```console
TASK [apps/app_geogebra : add geogebra gpg keys] *******************************
task path: /root/playbooks-awx.git/roles/apps/app_geogebra/tasks/main.yml:2
<u5> ESTABLISH LOCAL CONNECTION FOR USER: root
<u5> EXEC /bin/sh -c 'echo ~root && sleep 0'
<u5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294 `" && echo ansible-tmp-1623235057.261732-155825-75291458491294="` echo /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294 `" ) && sleep 0'
Using module file /usr/local/lib/python3.8/dist-packages/ansible/modules/apt_key.py
<u5> PUT /root/.ansible/tmp/ansible-local-155661_67x1is_/tmpc9ysozq9 TO /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py
<u5> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/ /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py && sleep 0'
<u5> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py && sleep 0'
<u5> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/ > /dev/null 2>&1 && sleep 0'
failed: [u5] (item=C072A32983A736CF) => changed=false
ansible_loop_var: item
cmd: /usr/bin/apt-key adv --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv C072A32983A736CF --keyserver-options http-proxy=http://prodsif-pack.infra.domain.fr:3128
invocation:
module_args:
data: null
file: null
id: C072A32983A736CF
key: null
keyring: null
keyserver: hkp://keyserver.ubuntu.com:80
state: present
url: null
validate_certs: true
item: C072A32983A736CF
msg: 'Error fetching key C072A32983A736CF from keyserver: hkp://keyserver.ubuntu.com:80'
rc: 2
stderr: |-
Warning: apt-key output should not be parsed (stdout is not a terminal)
gpg: Note: '--keyserver-options' is not considered an option
gpg: "--keyserver-options" not a key ID: skipping
gpg: "http-proxy=http://prodsif-pack.infra.domain.fr:3128" not a key ID: skipping
gpg: key C072A32983A736CF: public key "International GeoGebra Institute <[email protected]>" imported
gpg: Total number processed: 1
gpg: imported: 1
stderr_lines: <omitted>
stdout: |-
Executing: /tmp/apt-key-gpghome.FgNbASA1Dj/gpg.1.sh --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv C072A32983A736CF --keyserver-options http-proxy=http://prodsif-pack.infra.domain.fr:3128
stdout_lines: <omitted>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74946
|
https://github.com/ansible/ansible/pull/74949
|
81ad125aa65ef6c1c1c4137f49c2f1c91bca7d2b
|
50e998e30362c02d89115e5933ee2b3af2d05edd
| 2021-06-09T10:42:23Z |
python
| 2021-06-10T19:47:59Z |
changelogs/fragments/74949-apt_key_recv_last_arg.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,946 |
apt_key module with http_proxy: gpg: Note: '--keyserver-options' is not considered an option
|
### Summary
When I run a ``apt_key`` task with ``http_proxy`` environment var set, I get error
```
gpg: Note: '--keyserver-options' is not considered an option
```
### Issue Type
Bug Report
### Component Name
apt_key
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0]
jinja version = 2.11.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible-pull.log
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = yaml
```
### OS / Environment
Ubuntu 20.04 LTS up to date
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: add geogebra key
apt_key:
id: '{{ item }}'
keyserver: hkp://keyserver.ubuntu.com:80
state: present
loop:
- C072A32983A736CF
```
in ``/etc/environment``
```
http_proxy=http://prodsif-pack.infra.domain.fr:3128
```
### Expected Results
Working task
### Actual Results
```console
TASK [apps/app_geogebra : add geogebra gpg keys] *******************************
task path: /root/playbooks-awx.git/roles/apps/app_geogebra/tasks/main.yml:2
<u5> ESTABLISH LOCAL CONNECTION FOR USER: root
<u5> EXEC /bin/sh -c 'echo ~root && sleep 0'
<u5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294 `" && echo ansible-tmp-1623235057.261732-155825-75291458491294="` echo /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294 `" ) && sleep 0'
Using module file /usr/local/lib/python3.8/dist-packages/ansible/modules/apt_key.py
<u5> PUT /root/.ansible/tmp/ansible-local-155661_67x1is_/tmpc9ysozq9 TO /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py
<u5> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/ /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py && sleep 0'
<u5> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py && sleep 0'
<u5> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/ > /dev/null 2>&1 && sleep 0'
failed: [u5] (item=C072A32983A736CF) => changed=false
ansible_loop_var: item
cmd: /usr/bin/apt-key adv --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv C072A32983A736CF --keyserver-options http-proxy=http://prodsif-pack.infra.domain.fr:3128
invocation:
module_args:
data: null
file: null
id: C072A32983A736CF
key: null
keyring: null
keyserver: hkp://keyserver.ubuntu.com:80
state: present
url: null
validate_certs: true
item: C072A32983A736CF
msg: 'Error fetching key C072A32983A736CF from keyserver: hkp://keyserver.ubuntu.com:80'
rc: 2
stderr: |-
Warning: apt-key output should not be parsed (stdout is not a terminal)
gpg: Note: '--keyserver-options' is not considered an option
gpg: "--keyserver-options" not a key ID: skipping
gpg: "http-proxy=http://prodsif-pack.infra.domain.fr:3128" not a key ID: skipping
gpg: key C072A32983A736CF: public key "International GeoGebra Institute <[email protected]>" imported
gpg: Total number processed: 1
gpg: imported: 1
stderr_lines: <omitted>
stdout: |-
Executing: /tmp/apt-key-gpghome.FgNbASA1Dj/gpg.1.sh --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv C072A32983A736CF --keyserver-options http-proxy=http://prodsif-pack.infra.domain.fr:3128
stdout_lines: <omitted>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74946
|
https://github.com/ansible/ansible/pull/74949
|
81ad125aa65ef6c1c1c4137f49c2f1c91bca7d2b
|
50e998e30362c02d89115e5933ee2b3af2d05edd
| 2021-06-09T10:42:23Z |
python
| 2021-06-10T19:47:59Z |
lib/ansible/modules/apt_key.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2012, Jayson Vantuyl <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt_key
author:
- Jayson Vantuyl (@jvantuyl)
version_added: "1.0"
short_description: Add or remove an apt key
description:
- Add or remove an I(apt) key, optionally downloading it.
notes:
- The apt-key command has been deprecated and suggests to 'manage keyring files in trusted.gpg.d instead'. See the Debian wiki for details.
      This module is kept for backwards compatibility for systems that still use apt-key as the main way to manage apt repository keys.
    - As a sanity check, the downloaded key id must match the one specified.
- "Use full fingerprint (40 characters) key ids to avoid key collisions.
To generate a full-fingerprint imported key: C(apt-key adv --list-public-keys --with-fingerprint --with-colons)."
- If you specify both the key id and the URL with C(state=present), the task can verify or add the key as needed.
- Adding a new key requires an apt cache update (e.g. using the M(ansible.builtin.apt) module's update_cache option).
- Supports C(check_mode).
requirements:
- gpg
options:
id:
description:
- The identifier of the key.
- Including this allows check mode to correctly report the changed state.
- If specifying a subkey's id be aware that apt-key does not understand how to remove keys via a subkey id. Specify the primary key's id instead.
- This parameter is required when C(state) is set to C(absent).
type: str
data:
description:
- The keyfile contents to add to the keyring.
type: str
file:
description:
- The path to a keyfile on the remote server to add to the keyring.
type: path
keyring:
description:
- The full path to specific keyring file in C(/etc/apt/trusted.gpg.d/).
type: path
version_added: "1.3"
url:
description:
- The URL to retrieve key from.
type: str
keyserver:
description:
- The keyserver to retrieve key from.
type: str
version_added: "1.6"
state:
description:
- Ensures that the key is present (added) or absent (revoked).
type: str
choices: [ absent, present ]
default: present
validate_certs:
description:
- If C(no), SSL certificates for the target url will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
'''
EXAMPLES = '''
- name: Add an apt key by id from a keyserver
ansible.builtin.apt_key:
keyserver: keyserver.ubuntu.com
id: 36A1D7869245C8950F966E92D8576A8BA88D21E9
- name: Add an Apt signing key, uses whichever key is at the URL
ansible.builtin.apt_key:
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
state: present
- name: Add an Apt signing key, will not download if present
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
state: present
- name: Remove a Apt specific signing key, leading 0x is valid
ansible.builtin.apt_key:
id: 0x9FED2BCBDCD29CDF762678CBAED4B06F473041FA
state: absent
# Use armored file since utf-8 string is expected. Must be of "PGP PUBLIC KEY BLOCK" type.
- name: Add a key from a file on the Ansible server
ansible.builtin.apt_key:
data: "{{ lookup('file', 'apt.asc') }}"
state: present
- name: Add an Apt signing key to a specific keyring file
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
keyring: /etc/apt/trusted.gpg.d/debian.gpg
- name: Add Apt signing key on remote server to keyring
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
file: /tmp/apt.gpg
state: present
'''
RETURN = '''
after:
description: List of apt key ids or fingerprints after any modification
returned: on change
type: list
sample: ["D8576A8BA88D21E9", "3B4FE6ACC0B21F32", "D94AA3F0EFE21092", "871920D1991BC93C"]
before:
  description: List of apt key ids or fingerprints before any modifications
returned: always
type: list
sample: ["3B4FE6ACC0B21F32", "D94AA3F0EFE21092", "871920D1991BC93C"]
fp:
description: Fingerprint of the key to import
returned: always
type: str
sample: "D8576A8BA88D21E9"
id:
description: key id from source
returned: always
type: str
sample: "36A1D7869245C8950F966E92D8576A8BA88D21E9"
key_id:
  description: calculated key id; it should be the same as 'id', but can be different
returned: always
type: str
sample: "36A1D7869245C8950F966E92D8576A8BA88D21E9"
short_id:
  description: calculated short key id
returned: always
type: str
sample: "A88D21E9"
'''
import os
# FIXME: standardize into module_common
from traceback import format_exc
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url
apt_key_bin = None
gpg_bin = None
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
def find_needed_binaries(module):
global apt_key_bin
global gpg_bin
apt_key_bin = module.get_bin_path('apt-key', required=True)
gpg_bin = module.get_bin_path('gpg', required=True)
def add_http_proxy(cmd):
for envvar in ('HTTPS_PROXY', 'https_proxy', 'HTTP_PROXY', 'http_proxy'):
proxy = os.environ.get(envvar)
if proxy:
break
if proxy:
cmd += ' --keyserver-options http-proxy=%s' % proxy
return cmd
def parse_key_id(key_id):
"""validate the key_id and break it into segments
:arg key_id: The key_id as supplied by the user. A valid key_id will be
8, 16, or more hexadecimal chars with an optional leading ``0x``.
:returns: The portion of key_id suitable for apt-key del, the portion
suitable for comparisons with --list-public-keys, and the portion that
can be used with --recv-key. If key_id is long enough, these will be
the last 8 characters of key_id, the last 16 characters, and all of
key_id. If key_id is not long enough, some of the values will be the
same.
* apt-key del <= 1.10 has a bug with key_id != 8 chars
* apt-key adv --list-public-keys prints 16 chars
* apt-key adv --recv-key can take more chars
"""
# Make sure the key_id is valid hexadecimal
int(to_native(key_id), 16)
key_id = key_id.upper()
if key_id.startswith('0X'):
key_id = key_id[2:]
key_id_len = len(key_id)
if (key_id_len != 8 and key_id_len != 16) and key_id_len <= 16:
raise ValueError('key_id must be 8, 16, or 16+ hexadecimal characters in length')
short_key_id = key_id[-8:]
fingerprint = key_id
if key_id_len > 16:
fingerprint = key_id[-16:]
return short_key_id, fingerprint, key_id
def parse_output_for_keys(output, short_format=False):
found = []
lines = to_native(output).split('\n')
for line in lines:
if (line.startswith("pub") or line.startswith("sub")) and "expired" not in line:
try:
# apt key format
tokens = line.split()
code = tokens[1]
(len_type, real_code) = code.split("/")
except (IndexError, ValueError):
# gpg format
try:
tokens = line.split(':')
real_code = tokens[4]
except (IndexError, ValueError):
# invalid line, skip
continue
found.append(real_code)
if found and short_format:
found = shorten_key_ids(found)
return found
def all_keys(module, keyring, short_format):
if keyring is not None:
cmd = "%s --keyring %s adv --list-public-keys --keyid-format=long" % (apt_key_bin, keyring)
else:
cmd = "%s adv --list-public-keys --keyid-format=long" % apt_key_bin
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(msg="Unable to list public keys", cmd=cmd, rc=rc, stdout=out, stderr=err)
return parse_output_for_keys(out, short_format)
def shorten_key_ids(key_id_list):
"""
Takes a list of key ids, and converts them to the 'short' format,
by reducing them to their last 8 characters.
"""
short = []
for key in key_id_list:
short.append(key[-8:])
return short
def download_key(module, url):
try:
# note: validate_certs and other args are pulled from module directly
rsp, info = fetch_url(module, url, use_proxy=True)
if info['status'] != 200:
module.fail_json(msg="Failed to download key at %s: %s" % (url, info['msg']))
return rsp.read()
except Exception:
module.fail_json(msg="error getting key id from url: %s" % url, traceback=format_exc())
def get_key_id_from_file(module, filename, data=None):
native_data = to_native(data)
is_armored = native_data.find("-----BEGIN PGP PUBLIC KEY BLOCK-----") >= 0
global lang_env
key = None
cmd = [gpg_bin, '--with-colons', filename]
(rc, out, err) = module.run_command(cmd, environ_update=lang_env, data=(native_data if is_armored else data), binary_data=not is_armored)
if rc != 0:
module.fail_json(msg="Unable to extract key from '%s'" % ('inline data' if data is not None else filename), stdout=out, stderr=err)
keys = parse_output_for_keys(out)
    # assume we only want the first key
if keys:
key = keys[0]
return key
def get_key_id_from_data(module, data):
return get_key_id_from_file(module, '-', data)
def import_key(module, keyring, keyserver, key_id):
global lang_env
if keyring:
cmd = "%s --keyring %s adv --no-tty --keyserver %s --recv %s" % (apt_key_bin, keyring, keyserver, key_id)
else:
cmd = "%s adv --no-tty --keyserver %s --recv %s" % (apt_key_bin, keyserver, key_id)
# check for proxy
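    # NOTE: appending the proxy option here places it after '--recv <key_id>', where gpg
    # reads it as a key ID -- this is the bug in #74946; the fix keeps '--recv <key_id>' last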
cmd = add_http_proxy(cmd)
for retry in range(5):
(rc, out, err) = module.run_command(cmd, environ_update=lang_env)
if rc == 0:
break
else:
# Out of retries
if rc == 2 and 'not found on keyserver' in out:
msg = 'Key %s not found on keyserver %s' % (key_id, keyserver)
module.fail_json(cmd=cmd, msg=msg, forced_environment=lang_env)
else:
msg = "Error fetching key %s from keyserver: %s" % (key_id, keyserver)
module.fail_json(cmd=cmd, msg=msg, forced_environment=lang_env, rc=rc, stdout=out, stderr=err)
return True
def add_key(module, keyfile, keyring, data=None):
if data is not None:
if keyring:
cmd = "%s --keyring %s add -" % (apt_key_bin, keyring)
else:
cmd = "%s add -" % apt_key_bin
(rc, out, err) = module.run_command(cmd, data=data, binary_data=True)
if rc != 0:
module.fail_json(
msg="Unable to add a key from binary data",
cmd=cmd,
rc=rc,
stdout=out,
stderr=err,
)
else:
if keyring:
cmd = "%s --keyring %s add %s" % (apt_key_bin, keyring, keyfile)
else:
cmd = "%s add %s" % (apt_key_bin, keyfile)
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(
msg="Unable to add a key from file %s" % (keyfile),
cmd=cmd,
rc=rc,
keyfile=keyfile,
stdout=out,
stderr=err,
)
return True
def remove_key(module, key_id, keyring):
if keyring:
cmd = '%s --keyring %s del %s' % (apt_key_bin, keyring, key_id)
else:
cmd = '%s del %s' % (apt_key_bin, key_id)
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(
msg="Unable to remove a key with id %s" % (key_id),
cmd=cmd,
rc=rc,
key_id=key_id,
stdout=out,
stderr=err,
)
return True
def main():
module = AnsibleModule(
argument_spec=dict(
id=dict(type='str'),
url=dict(type='str'),
data=dict(type='str'),
file=dict(type='path'),
key=dict(type='str', removed_in_version='2.14', removed_from_collection='ansible.builtin', no_log=False),
keyring=dict(type='path'),
validate_certs=dict(type='bool', default=True),
keyserver=dict(type='str'),
state=dict(type='str', default='present', choices=['absent', 'present']),
),
supports_check_mode=True,
mutually_exclusive=(('data', 'file', 'keyserver', 'url'),),
)
# parameters
key_id = module.params['id']
url = module.params['url']
data = module.params['data']
filename = module.params['file']
keyring = module.params['keyring']
state = module.params['state']
keyserver = module.params['keyserver']
# internal vars
short_format = False
short_key_id = None
fingerprint = None
error_no_error = "apt-key did not return an error, but %s (check that the id is correct and *not* a subkey)"
# ensure we have requirements met
find_needed_binaries(module)
# initialize result dict
r = {'changed': False}
if not key_id:
if keyserver:
module.fail_json(msg="Missing key_id, required with keyserver.")
if url:
data = download_key(module, url)
if filename:
key_id = get_key_id_from_file(module, filename)
elif data:
key_id = get_key_id_from_data(module, data)
r['id'] = key_id
try:
short_key_id, fingerprint, key_id = parse_key_id(key_id)
r['short_id'] = short_key_id
r['fp'] = fingerprint
r['key_id'] = key_id
except ValueError:
module.fail_json(msg='Invalid key_id', **r)
if not fingerprint:
        # an invalid key should fail well before this point, but just in case ...
module.fail_json(msg="Unable to continue as we could not extract a valid fingerprint to compare against existing keys.", **r)
if len(key_id) == 8:
short_format = True
# get existing keys to verify if we need to change
r['before'] = keys = all_keys(module, keyring, short_format)
keys2 = []
if state == 'present':
if (short_format and short_key_id not in keys) or (not short_format and fingerprint not in keys):
r['changed'] = True
if not module.check_mode:
if filename:
add_key(module, filename, keyring)
elif keyserver:
import_key(module, keyring, keyserver, key_id)
elif data:
# this also takes care of url if key_id was not provided
add_key(module, "-", keyring, data)
elif url:
# we hit this branch only if key_id is supplied with url
data = download_key(module, url)
add_key(module, "-", keyring, data)
else:
module.fail_json(msg="No key to add ... how did i get here?!?!", **r)
# verify it got added
r['after'] = keys2 = all_keys(module, keyring, short_format)
if (short_format and short_key_id not in keys2) or (not short_format and fingerprint not in keys2):
module.fail_json(msg=error_no_error % 'failed to add the key', **r)
elif state == 'absent':
if not key_id:
module.fail_json(msg="key is required to remove a key", **r)
if fingerprint in keys:
r['changed'] = True
if not module.check_mode:
# we use the "short" id: key_id[-8:], short_format=True
# it's a workaround for https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1481871
if short_key_id is not None and remove_key(module, short_key_id, keyring):
r['after'] = keys2 = all_keys(module, keyring, short_format)
if fingerprint in keys2:
module.fail_json(msg=error_no_error % 'the key was not removed', **r)
else:
module.fail_json(msg="error removing key_id", **r)
module.exit_json(**r)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,946 |
apt_key module with http_proxy: gpg: Note: '--keyserver-options' is not considered an option
|
### Summary
When I run a ``apt_key`` task with ``http_proxy`` environment var set, I get error
```
gpg: Note: '--keyserver-options' is not considered an option
```
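The recorded test file below is empty at the before-fix commit. A hedged sketch of the kind of unit check that exposes the ordering problem in `add_http_proxy` (from `apt_key.py`, shown earlier), assuming no other `*_PROXY` variables are set:
```python
import os
from ansible.modules.apt_key import add_http_proxy

os.environ['http_proxy'] = 'http://proxy.example:3128'
cmd = add_http_proxy('apt-key adv --no-tty --keyserver hkp://ks:80 --recv ABCD1234')
# on the buggy version the proxy option is appended after --recv, so this passes;
# after the fix, '--recv <id>' should be the last argument instead
assert cmd.endswith('--keyserver-options http-proxy=http://proxy.example:3128')
```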
### Issue Type
Bug Report
### Component Name
apt_key
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0]
jinja version = 2.11.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible-pull.log
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = yaml
```
### OS / Environment
Ubuntu 20.04 LTS up to date
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: add geogebra key
apt_key:
id: '{{ item }}'
keyserver: hkp://keyserver.ubuntu.com:80
state: present
loop:
- C072A32983A736CF
```
in ``/etc/environment``
```
http_proxy=http://prodsif-pack.infra.domain.fr:3128
```
### Expected Results
Working task
### Actual Results
```console
TASK [apps/app_geogebra : add geogebra gpg keys] *******************************
task path: /root/playbooks-awx.git/roles/apps/app_geogebra/tasks/main.yml:2
<u5> ESTABLISH LOCAL CONNECTION FOR USER: root
<u5> EXEC /bin/sh -c 'echo ~root && sleep 0'
<u5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294 `" && echo ansible-tmp-1623235057.261732-155825-75291458491294="` echo /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294 `" ) && sleep 0'
Using module file /usr/local/lib/python3.8/dist-packages/ansible/modules/apt_key.py
<u5> PUT /root/.ansible/tmp/ansible-local-155661_67x1is_/tmpc9ysozq9 TO /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py
<u5> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/ /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py && sleep 0'
<u5> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/AnsiballZ_apt_key.py && sleep 0'
<u5> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1623235057.261732-155825-75291458491294/ > /dev/null 2>&1 && sleep 0'
failed: [u5] (item=C072A32983A736CF) => changed=false
ansible_loop_var: item
cmd: /usr/bin/apt-key adv --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv C072A32983A736CF --keyserver-options http-proxy=http://prodsif-pack.infra.domain.fr:3128
invocation:
module_args:
data: null
file: null
id: C072A32983A736CF
key: null
keyring: null
keyserver: hkp://keyserver.ubuntu.com:80
state: present
url: null
validate_certs: true
item: C072A32983A736CF
msg: 'Error fetching key C072A32983A736CF from keyserver: hkp://keyserver.ubuntu.com:80'
rc: 2
stderr: |-
Warning: apt-key output should not be parsed (stdout is not a terminal)
gpg: Note: '--keyserver-options' is not considered an option
gpg: "--keyserver-options" not a key ID: skipping
gpg: "http-proxy=http://prodsif-pack.infra.domain.fr:3128" not a key ID: skipping
gpg: key C072A32983A736CF: public key "International GeoGebra Institute <[email protected]>" imported
gpg: Total number processed: 1
gpg: imported: 1
stderr_lines: <omitted>
stdout: |-
Executing: /tmp/apt-key-gpghome.FgNbASA1Dj/gpg.1.sh --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv C072A32983A736CF --keyserver-options http-proxy=http://prodsif-pack.infra.domain.fr:3128
stdout_lines: <omitted>
```
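The stderr above points to the likely cause: gpg parses everything after `--recv` as a key ID, so the `--keyserver-options http-proxy=...` pair appended at the end is skipped (hence the two `not a key ID: skipping` lines). Below is a minimal sketch of command construction that keeps options ahead of `--recv`; the helper name and signature are illustrative assumptions, not the module's actual code.
```python
def build_recv_cmd(apt_key_bin, keyserver, key_id, http_proxy=None):
    """Build an 'apt-key adv' command; gpg options must precede --recv."""
    cmd = [apt_key_bin, 'adv', '--no-tty', '--keyserver', keyserver]
    if http_proxy:
        # placed before --recv so gpg treats it as an option, not a key ID
        cmd += ['--keyserver-options', 'http-proxy=%s' % http_proxy]
    cmd += ['--recv', key_id]
    return cmd
```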
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74946
|
https://github.com/ansible/ansible/pull/74949
|
81ad125aa65ef6c1c1c4137f49c2f1c91bca7d2b
|
50e998e30362c02d89115e5933ee2b3af2d05edd
| 2021-06-09T10:42:23Z |
python
| 2021-06-10T19:47:59Z |
test/units/modules/test_apt_key.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,922 |
`scp_if_ssh` configuration ignored
|
### Summary
The `scp_if_ssh` configuration is ignored because `ssh_transfer_method` has a default value.
### Issue Type
Bug Report
### Component Name
lib/ansible/plugins/connection/ssh.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
N/A
### Steps to Reproduce
https://github.com/ansible/ansible/blob/26827f50393b21a87b8640387cb77ee0801155ea/lib/ansible/plugins/connection/ssh.py#L274-L283
https://github.com/ansible/ansible/blob/26827f50393b21a87b8640387cb77ee0801155ea/lib/ansible/plugins/connection/ssh.py#L1173-L1194
Because `ssh_transfer_method` has a default of `smart`, the check `if ssh_transfer_method is not None:` is always `True`, so the `else` branch that handles `scp_if_ssh` is never reached.
`ssh_transfer_method` has no host/inventory var, but `scp_if_ssh` does. I found this while targeting a host that does not support sftp.
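A self-contained sketch of the intended precedence, assuming the fix is to stop defaulting `ssh_transfer_method` to `smart` so that an unset value comes back as `None` and `scp_if_ssh` is consulted; the helper below is an illustration, not the merged patch.
```python
def choose_methods(ssh_transfer_method, scp_if_ssh, smart=('sftp', 'scp', 'piped')):
    """Pick file-transfer methods; ssh_transfer_method wins only when explicitly set."""
    if ssh_transfer_method is not None:
        return list(smart) if ssh_transfer_method == 'smart' else [ssh_transfer_method]
    if scp_if_ssh == 'smart':
        return list(smart)
    return ['scp'] if scp_if_ssh else ['sftp']

# With the current default of 'smart', the first branch always fires, so the
# scp_if_ssh arguments are never consulted; that is the reported bug.
```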
### Expected Results
`scp_if_ssh` not ignored
### Actual Results
```console
`scp_if_ssh` ignored
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74922
|
https://github.com/ansible/ansible/pull/74925
|
50e998e30362c02d89115e5933ee2b3af2d05edd
|
675df166c27bc82a4d9a7cba45e11aec0300ae2c
| 2021-06-07T14:01:57Z |
python
| 2021-06-10T20:22:41Z |
changelogs/fragments/fix_scp_ssh_settings.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,922 |
`scp_if_ssh` configuration ignored
|
### Summary
The `scp_if_ssh` configuration is ignored because `ssh_transfer_method` has a default value.
### Issue Type
Bug Report
### Component Name
lib/ansible/plugins/connection/ssh.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
N/A
### Steps to Reproduce
https://github.com/ansible/ansible/blob/26827f50393b21a87b8640387cb77ee0801155ea/lib/ansible/plugins/connection/ssh.py#L274-L283
https://github.com/ansible/ansible/blob/26827f50393b21a87b8640387cb77ee0801155ea/lib/ansible/plugins/connection/ssh.py#L1173-L1194
Because `ssh_transfer_method` has a default of `smart`, the check `if ssh_transfer_method is not None:` is always `True`, so the `else` branch that handles `scp_if_ssh` is never reached.
`ssh_transfer_method` has no host/inventory var, but `scp_if_ssh` does. I found this while targeting a host that does not support sftp.
### Expected Results
`scp_if_ssh` not ignored
### Actual Results
```console
`scp_if_ssh` ignored
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74922
|
https://github.com/ansible/ansible/pull/74925
|
50e998e30362c02d89115e5933ee2b3af2d05edd
|
675df166c27bc82a4d9a7cba45e11aec0300ae2c
| 2021-06-07T14:01:57Z |
python
| 2021-06-10T20:22:41Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via ssh client binary
description:
- This connection plugin allows ansible to communicate with the target machines via the normal ssh command line.
- Ansible does not expose a channel to allow communication between the user and the ssh process to accept
a password manually to decrypt an ssh key when using this connection plugin (which is the default). The
use of ``ssh-agent`` is highly recommended.
author: ansible (@core)
extends_documentation_fragment:
- connection_pipelining
version_added: historical
notes:
- Many options default to 'None' here but that only means we don't override the ssh tool's defaults and/or configuration.
For example, if you specify the port in this plugin it will override any C(Port) entry in your C(.ssh/config).
options:
host:
description: Hostname/ip to connect to.
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_ssh_host
- name: delegated_vars['ansible_host']
- name: delegated_vars['ansible_ssh_host']
host_key_checking:
description: Determines if ssh should check host keys
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description:
- Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
- Defaults to ``Enter PIN for`` when pkcs11_provider is set.
default: ''
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all ssh cli tools
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all ssh CLI tools
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
cli:
- name: ssh_common_args
ssh_executable:
default: ssh
description:
- This defines the location of the ssh binary. It defaults to ``ssh`` which will use the first ssh binary available in $PATH.
- This option is usually not required; it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to ``sftp`` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to ``scp`` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
description: Extra args exclusive to the ``scp`` CLI
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: scp_extra_args
sftp_extra_args:
description: Extra args exclusive to the ``sftp`` CLI
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: sftp_extra_args
ssh_extra_args:
description: Extra args exclusive to the ``ssh`` CLI
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: ssh_extra_args
retries:
description: Number of attempts to connect.
default: 3
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the ssh client binary choose the user as it normally would.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
cli:
- name: user
pipelining:
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
cli:
- name: private_key_file
option: '--private-key'
control_path:
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null (default), ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: 'yes'
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
ssh_transfer_method:
default: smart
description:
- "Preferred method to use when transferring files over ssh"
- Setting to 'smart' (default) will try them in order, until one succeeds or they all fail
- Using 'piped' creates an ssh pipe with ``dd`` on either side to copy the data
choices: ['sftp', 'scp', 'piped', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
scp_if_ssh:
default: smart
description:
- "Preferred method to use when transfering files over ssh"
- When set to smart, Ansible will try them until one succeeds or they all fail
- If set to True, it will force 'scp', if False it will use 'sftp'
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: 'yes'
description: add -tt to ssh commands to force tty allocation
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
timeout:
default: 10
description:
- This is the default amount of time we will wait while establishing an ssh connection
- It also controls how long we can wait when reading from the connection once established (select on the socket)
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_SSH_TIMEOUT
version_added: '2.11'
ini:
- key: timeout
section: defaults
- key: timeout
section: ssh_connection
version_added: '2.11'
vars:
- name: ansible_ssh_timeout
version_added: '2.11'
cli:
- name: timeout
type: integer
pkcs11_provider:
version_added: '2.12'
default: ""
description:
- "PKCS11 SmartCard provider such as opensc, example: /usr/local/lib/opensc-pkcs11.so"
- Requires sshpass version 1.06+, sshpass must support the -P option
env: [{name: ANSIBLE_PKCS11_PROVIDER}]
ini:
- {key: pkcs11_provider, section: ssh_connection}
vars:
- name: ansible_ssh_pkcs11_provider
'''
import errno
import fcntl
import hashlib
import os
import pty
import re
import subprocess
import time
from functools import wraps
from ansible import constants as C
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # Php always returns error 255
)
SSHPASS_AVAILABLE = None
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display):
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
# sshpass return codes are 1-6. We handled 5 above, so this catches the other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
if signature in return_tuple[1]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if:
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(self.get_option('retries')) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = args[0]
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
# TODO: this should come from task
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = args[0]
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
# TODO: all should come from get_option(), but not might be set at this point yet
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = None
self.control_path_dir = None
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
@staticmethod
def _create_control_path(host, port, user, connection=None, pid=None):
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
# distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
more than once so it must be persistent (ie: a list is okay but a
StringIO would not)
:arg explanation: A text string explaining why the arguments
were added. It will be displayed with a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self.host)
b_command += b_args
def _build_command(self, binary, subsystem, *other_args):
'''
Takes an executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command
wrapped in local ssh shell commands and ready for execution.
:arg binary: actual executable to use to execute command.
:arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names.
:arg other_args: any additional arguments to pass to the ssh binary
'''
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
pkcs11_provider = self.get_option("pkcs11_provider")
if conn_password or pkcs11_provider:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program")
if not conn_password and pkcs11_provider:
raise AnsibleError("to use pkcs11_provider you must specify a password/pin")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if not password_prompt and pkcs11_provider:
# Set default password prompt for pkcs11_provider to make it clear it's a PIN
password_prompt = 'Enter PIN for '
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# pkcs11 mode allows the use of Smartcards or Yubikey devices
if conn_password and pkcs11_provider:
self._add_args(b_command,
(b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=publickey",
b"-o", b"PasswordAuthentication=no",
b'-o', to_bytes(u'PKCS11Provider=%s' % pkcs11_provider)),
u'Enable pkcs11')
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if subsystem == 'sftp' and self.get_option('sftp_batch_mode'):
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if self._play_context.verbosity > 3:
b_command.append(b'-vvv')
# Next, we add ssh_args
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments that have their own specific settings defined in docs above.
if not self.get_option('host_key_checking'):
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
self.port = self.get_option('port')
if self.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self.get_option('private_key_file')
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
self.user = self.get_option('remote_user')
if self.user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self.user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
timeout = self.get_option('timeout')
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = self.get_option(opt)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"Set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
self.control_path_dir = self.get_option('control_path_dir')
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
self.control_path = self.get_option('control_path')
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b"ControlPath=" + to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
def _send_initial_data(self, fh, in_data, ssh_process):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
# We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen
display_cmd = u' '.join(shlex_quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self.get_option('timeout')
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
# TODO: bcoca would like to use SelectSelector() here when possible:
# select is faster when the number of filehandles is low, and we only ever handle 1.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
# that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
# We should not see further writes to the stdout/stderr file
# descriptors after the process has closed, set the select
# timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if self.get_option('host_key_checking'):
if cmd[0] == b"sshpass" and p.returncode == 6:
raise AnsibleError('Using an SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self.get_option('ssh_transfer_method')
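# NOTE: with the declared default of 'smart', get_option() never returns None
# here, so the scp_if_ssh fallback in the else branch below is unreachable
# (this is the behavior reported in issue #74922).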
if ssh_transfer_method is not None:
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
scp_if_ssh = self.get_option('scp_if_ssh')
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex_quote(in_path), shlex_quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path):
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w{1}:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self.user), host=self.host)
if getattr(self._shell, "_IS_WINDOWS", False):
# Become method 'runas' is done in the wrapper that is executed,
# need to disable sudoable so the bare_run is not waiting for a
# prompt that will not occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable')
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
run_reset = False
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
# only run the reset if the ControlPath already exists or if it isn't configured and ControlPersist is set
# 'check' will determine this.
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'check', self.host)
display.vvv(u'sending connection check: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.vvv(u"No connection to reset: %s" % to_text(stderr))
else:
run_reset = True
if run_reset:
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'stop', self.host)
display.vvv(u'sending connection stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
def close(self):
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,921 |
Inconsistency in fact naming, display, and filtering in setup module
|
### Summary
The `setup` module displays facts inside the `ansible_facts` dict, but with the `ansible_` prefix; `filter` also acts on injected variable names instead of fact names, expecting the `ansible_` prefix.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/username/.ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Debian testing, FreeBSD 12
### Steps to Reproduce
```
$ ansible localhost -m setup -a 'filter=date_time'
```
### Expected Results
I expect facts to be named as-is inside the `ansible_facts` dict, and with the `ansible_` prefix when injected into the variable space. Since the `setup` module shows results as being inside the `ansible_facts` dict, it should not add the `ansible_` prefix to them. The `filter` parameter should also act on fact names, not on injected variables (or at least have a clear option for such behavior).
```
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {
"date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
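A sketch of prefix-tolerant matching that would honor both spellings; the helper below is an illustration, not the merged change.
```python
import fnmatch

def fact_matches(fact_name, pattern):
    """Match a collected fact against a filter pattern, with or without the
    injected ansible_ prefix."""
    if fnmatch.fnmatch(fact_name, pattern):
        return True
    # tolerate unprefixed patterns against ansible_-prefixed fact names
    if not pattern.startswith(('ansible_', 'facter_', 'ohai_')):
        return fnmatch.fnmatch(fact_name, 'ansible_' + pattern)
    return False
```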
### Actual Results
```console
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {},
"changed": false
}
```
```
$ ansible localhost -m setup -a 'filter=*date_time'
localhost | SUCCESS => {
"ansible_facts": {
"ansible_date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74921
|
https://github.com/ansible/ansible/pull/74924
|
cf3a304ce1711ed9a19549fbf58c004acd47c06a
|
d2d45900edc7f52cf3e3685d65f824a445cc8c41
| 2021-06-07T13:40:49Z |
python
| 2021-06-14T13:32:28Z |
changelogs/fragments/setup_filter_smarter.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,921 |
Inconsistency in fact naming, display, and filtering in setup module
|
### Summary
The `setup` module displays facts inside the `ansible_facts` dict, but with the `ansible_` prefix; `filter` also acts on injected variable names instead of fact names, expecting the `ansible_` prefix.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/username/.ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Debian testing, FreeBSD 12
### Steps to Reproduce
```
$ ansible localhost -m setup -a 'filter=date_time'
```
### Expected Results
I expect facts to be named as-is inside the `ansible_facts` dict, and with the `ansible_` prefix when injected into the variable space. Since the `setup` module shows results as being inside the `ansible_facts` dict, it should not add the `ansible_` prefix to them. The `filter` parameter should also act on fact names, not on injected variables (or at least have a clear option for such behavior).
```
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {
"date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Actual Results
```console
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {},
"changed": false
}
```
```
$ ansible localhost -m setup -a 'filter=*date_time'
localhost | SUCCESS => {
"ansible_facts": {
"ansible_date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74921
|
https://github.com/ansible/ansible/pull/74924
|
cf3a304ce1711ed9a19549fbf58c004acd47c06a
|
d2d45900edc7f52cf3e3685d65f824a445cc8c41
| 2021-06-07T13:40:49Z |
python
| 2021-06-14T13:32:28Z |
changelogs/fragments/ssh_conn_fixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,921 |
Inconsistency in fact naming, display, and filtering in setup module
|
### Summary
The `setup` module displays facts inside the `ansible_facts` dict, but with the `ansible_` prefix; `filter` also acts on injected variable names instead of fact names, expecting the `ansible_` prefix.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/username/.ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Debian testing, FreeBSD 12
### Steps to Reproduce
```
$ ansible localhost -m setup -a 'filter=date_time'
```
### Expected Results
I expect facts to be named as-is inside the `ansible_facts` dict, and with the `ansible_` prefix when injected into the variable space. Since the `setup` module shows results as being inside the `ansible_facts` dict, it should not add the `ansible_` prefix to them. The `filter` parameter should also act on fact names, not on injected variables (or at least have a clear option for such behavior).
```
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {
"date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Actual Results
```console
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {},
"changed": false
}
```
```
$ ansible localhost -m setup -a 'filter=*date_time'
localhost | SUCCESS => {
"ansible_facts": {
"ansible_date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74921
|
https://github.com/ansible/ansible/pull/74924
|
cf3a304ce1711ed9a19549fbf58c004acd47c06a
|
d2d45900edc7f52cf3e3685d65f824a445cc8c41
| 2021-06-07T13:40:49Z |
python
| 2021-06-14T13:32:28Z |
lib/ansible/module_utils/facts/ansible_collector.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# (c) 2017 Red Hat Inc.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
import sys
from ansible.module_utils.facts import timeout
from ansible.module_utils.facts import collector
from ansible.module_utils.common.collections import is_string
class AnsibleFactCollector(collector.BaseFactCollector):
'''A FactCollector that returns results under an 'ansible_facts' top-level key.
If a namespace is provided, facts will be collected under that namespace.
For example, an ansible.module_utils.facts.namespace.PrefixFactNamespace(prefix='ansible_').
Has a 'from_gather_subset()' constructor that populates collectors based on a
gather_subset specifier.'''
def __init__(self, collectors=None, namespace=None, filter_spec=None):
super(AnsibleFactCollector, self).__init__(collectors=collectors,
namespace=namespace)
self.filter_spec = filter_spec
def _filter(self, facts_dict, filter_spec):
# assume filter_spec='' or filter_spec=[] is equivalent to filter_spec='*'
if not filter_spec or filter_spec == '*':
return facts_dict
if is_string(filter_spec):
filter_spec = [filter_spec]
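        # NOTE: keys in facts_dict are already namespaced (for example 'ansible_date_time'),
        # so the patterns are matched against prefixed names, not bare fact names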
return [(x, y) for x, y in facts_dict.items() for f in filter_spec if not f or fnmatch.fnmatch(x, f)]
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
for collector_obj in self.collectors:
info_dict = {}
try:
# Note: this collects with namespaces, so collected_facts also includes namespaces
info_dict = collector_obj.collect_with_namespace(module=module,
collected_facts=collected_facts)
except Exception as e:
sys.stderr.write(repr(e))
sys.stderr.write('\n')
# shallow copy of the new facts to pass to each collector in collected_facts so facts
# can reference other facts they depend on.
collected_facts.update(info_dict.copy())
# NOTE: If we want complicated fact dict merging, this is where it would hook in
facts_dict.update(self._filter(info_dict, self.filter_spec))
return facts_dict
class CollectorMetaDataCollector(collector.BaseFactCollector):
'''Collector that provides facts with the gather_subset metadata.'''
name = 'gather_subset'
_fact_ids = set([])
def __init__(self, collectors=None, namespace=None, gather_subset=None, module_setup=None):
super(CollectorMetaDataCollector, self).__init__(collectors, namespace)
self.gather_subset = gather_subset
self.module_setup = module_setup
def collect(self, module=None, collected_facts=None):
meta_facts = {'gather_subset': self.gather_subset}
if self.module_setup:
meta_facts['module_setup'] = self.module_setup
return meta_facts
def get_ansible_collector(all_collector_classes,
namespace=None,
filter_spec=None,
gather_subset=None,
gather_timeout=None,
minimal_gather_subset=None):
filter_spec = filter_spec or []
gather_subset = gather_subset or ['all']
gather_timeout = gather_timeout or timeout.DEFAULT_GATHER_TIMEOUT
minimal_gather_subset = minimal_gather_subset or frozenset()
collector_classes = \
collector.collector_classes_from_gather_subset(
all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset,
gather_subset=gather_subset,
gather_timeout=gather_timeout)
collectors = []
for collector_class in collector_classes:
collector_obj = collector_class(namespace=namespace)
collectors.append(collector_obj)
# Add a collector that knows what gather_subset we used so it can provide a fact
collector_meta_data_collector = \
CollectorMetaDataCollector(gather_subset=gather_subset,
module_setup=True)
collectors.append(collector_meta_data_collector)
fact_collector = \
AnsibleFactCollector(collectors=collectors,
filter_spec=filter_spec,
namespace=namespace)
return fact_collector
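The linked PR changes how filtering interacts with the namespace prefix. As a minimal sketch only — assuming an `ansible_` prefix and not necessarily matching the merged change — a filter could accept both bare and prefixed fact names like this:
```python
import fnmatch

def match_fact(key, patterns, prefix='ansible_'):
    """Return True if key matches any pattern, with or without the namespace prefix."""
    bare = key[len(prefix):] if key.startswith(prefix) else key
    return any(fnmatch.fnmatch(key, p) or fnmatch.fnmatch(bare, p) for p in patterns)

# match_fact('ansible_date_time', ['date_time'])   -> True
# match_fact('ansible_date_time', ['*date_time'])  -> True
```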
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,921 |
Inconsistency in fact naming, display, and filtering in setup module
|
### Summary
The `setup` module displays facts inside the `ansible_facts` dict but with the `ansible_` prefix; `filter` also acts on the injected variable names rather than the fact names, expecting the `ansible_` prefix.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/username/.ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Debian testing, FreeBSD 12
### Steps to Reproduce
```
$ ansible localhost -m setup -a 'filter=date_time'
```
### Expected Results
I expect facts to be named as-is inside the `ansible_facts` dict, and with the `ansible_` prefix when injected into the variable space. Since the `setup` module shows results as being inside the `ansible_facts` dict, it should not add the `ansible_` prefix to them. The `filter` parameter should also act on fact names, not on injected variable names (or at least offer a clear option for such behavior).
```
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {
"date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Actual Results
```console
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {},
"changed": false
}
```
```
$ ansible localhost -m setup -a 'filter=*date_time'
localhost | SUCCESS => {
"ansible_facts": {
"ansible_date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74921
|
https://github.com/ansible/ansible/pull/74924
|
cf3a304ce1711ed9a19549fbf58c004acd47c06a
|
d2d45900edc7f52cf3e3685d65f824a445cc8c41
| 2021-06-07T13:40:49Z |
python
| 2021-06-14T13:32:28Z |
test/integration/targets/gathering_facts/inventory
|
[local]
facthost[0:25] ansible_connection=local ansible_python_interpreter="{{ ansible_playbook_python }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,921 |
Inconsistency in fact naming, display, and filtering in setup module
|
### Summary
The `setup` module displays facts inside the `ansible_facts` dict but with the `ansible_` prefix; `filter` also acts on the injected variable names rather than the fact names, expecting the `ansible_` prefix.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/username/.ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Debian testing, FreeBSD 12
### Steps to Reproduce
```
$ ansible localhost -m setup -a 'filter=date_time'
```
### Expected Results
I expect facts to be named as-is inside the `ansible_facts` dict, and with the `ansible_` prefix when injected into the variable space. Since the `setup` module shows results as being inside the `ansible_facts` dict, it should not add the `ansible_` prefix to them. The `filter` parameter should also act on fact names, not on injected variable names (or at least offer a clear option for such behavior).
```
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {
"date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Actual Results
```console
$ ansible localhost -m setup -a 'filter=date_time'
localhost | SUCCESS => {
"ansible_facts": {},
"changed": false
}
```
```
$ ansible localhost -m setup -a 'filter=*date_time'
localhost | SUCCESS => {
"ansible_facts": {
"ansible_date_time": {
"date": "2021-06-07",
...
}
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74921
|
https://github.com/ansible/ansible/pull/74924
|
cf3a304ce1711ed9a19549fbf58c004acd47c06a
|
d2d45900edc7f52cf3e3685d65f824a445cc8c41
| 2021-06-07T13:40:49Z |
python
| 2021-06-14T13:32:28Z |
test/integration/targets/gathering_facts/test_gathering_facts.yml
|
---
- hosts: facthost7
tags: [ 'fact_negation' ]
connection: local
gather_subset: "!hardware"
gather_facts: no
tasks:
- name: setup with not hardware
setup:
gather_subset:
- "!hardware"
register: not_hardware_facts
- name: min and network test for platform added
hosts: facthost21
tags: [ 'fact_network' ]
connection: local
gather_subset: ["!all", "network"]
gather_facts: yes
tasks:
- name: Test that retrieving network facts works and gets prereqs from platform and distribution
assert:
that:
- 'ansible_default_ipv4|default("UNDEF") != "UNDEF"'
- 'ansible_interfaces|default("UNDEF") != "UNDEF"'
# these are true for linux, but maybe not for other os
- 'ansible_system|default("UNDEF") != "UNDEF"'
- 'ansible_distribution|default("UNDEF") != "UNDEF"'
# we don't really require these but they are in the min set
# - 'ansible_virtualization_role|default("UNDEF") == "UNDEF"'
# - 'ansible_user_id|default("UNDEF") == "UNDEF"'
# - 'ansible_env|default("UNDEF") == "UNDEF"'
# - 'ansible_selinux|default("UNDEF") == "UNDEF"'
# - 'ansible_pkg_mgr|default("UNDEF") == "UNDEF"'
- name: min and hardware test for platform added
hosts: facthost22
tags: [ 'fact_hardware' ]
connection: local
gather_subset: "hardware"
gather_facts: yes
tasks:
- name: debug stuff
debug:
var: hostvars['facthost22']
# we should also collect platform, but not distribution
- name: Test that retrieving hardware facts works and gets prereqs from platform and distribution
when: ansible_system|default("UNDEF") == "Linux"
assert:
# LinuxHardwareCollector requires 'platform' facts
that:
- 'ansible_memory_mb|default("UNDEF") != "UNDEF"'
- 'ansible_default_ipv4|default("UNDEF") == "UNDEF"'
- 'ansible_interfaces|default("UNDEF") == "UNDEF"'
# these are true for linux, but maybe not for other os
# hardware requires 'platform'
- 'ansible_system|default("UNDEF") != "UNDEF"'
- 'ansible_machine|default("UNDEF") != "UNDEF"'
# hardware does not require 'distribution' but it is in the min set
# - 'ansible_distribution|default("UNDEF") == "UNDEF"'
# we don't really require these but they are in the min set
# - 'ansible_virtualization_role|default("UNDEF") == "UNDEF"'
# - 'ansible_user_id|default("UNDEF") == "UNDEF"'
# - 'ansible_env|default("UNDEF") == "UNDEF"'
# - 'ansible_selinux|default("UNDEF") == "UNDEF"'
# - 'ansible_pkg_mgr|default("UNDEF") == "UNDEF"'
- name: min and service_mgr test for platform added
hosts: facthost23
tags: [ 'fact_service_mgr' ]
connection: local
gather_subset: ["!all", "service_mgr"]
gather_facts: yes
tasks:
- name: Test that retrieving service_mgr facts works and gets prereqs from platform and distribution
assert:
that:
- 'ansible_service_mgr|default("UNDEF") != "UNDEF"'
- 'ansible_default_ipv4|default("UNDEF") == "UNDEF"'
- 'ansible_interfaces|default("UNDEF") == "UNDEF"'
# these are true for linux, but maybe not for other os
- 'ansible_system|default("UNDEF") != "UNDEF"'
- 'ansible_distribution|default("UNDEF") != "UNDEF"'
# we don't really require these but they are in the min set
# - 'ansible_virtualization_role|default("UNDEF") == "UNDEF"'
# - 'ansible_user_id|default("UNDEF") == "UNDEF"'
# - 'ansible_env|default("UNDEF") == "UNDEF"'
# - 'ansible_selinux|default("UNDEF") == "UNDEF"'
# - 'ansible_pkg_mgr|default("UNDEF") == "UNDEF"'
- hosts: facthost0
tags: [ 'fact_min' ]
connection: local
gather_subset: "all"
gather_facts: yes
tasks:
#- setup:
# register: facts
- name: Test that retrieving all facts works
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") != "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") != "UNDEF_MOUNT" or ansible_distribution == "MacOSX"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") != "UNDEF_VIRT"'
- hosts: facthost19
tags: [ 'fact_min' ]
connection: local
gather_facts: no
tasks:
- setup:
filter: "*env*"
# register: fact_results
- name: Test that retrieving all facts filtered to env works
assert:
that:
- 'ansible_interfaces|default("UNDEF_NET") == "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") == "UNDEF_VIRT"'
- 'ansible_env|default("UNDEF_ENV") != "UNDEF_ENV"'
- hosts: facthost24
tags: [ 'fact_min' ]
connection: local
gather_facts: no
tasks:
- setup:
filter:
- "*env*"
- "*virt*"
- name: Test that retrieving all facts filtered to env and virt works
assert:
that:
- 'ansible_interfaces|default("UNDEF_NET") == "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") != "UNDEF_VIRT"'
- 'ansible_env|default("UNDEF_ENV") != "UNDEF_ENV"'
- hosts: facthost13
tags: [ 'fact_min' ]
connection: local
gather_facts: no
tasks:
- setup:
filter: "ansible_user_id"
# register: fact_results
- name: Test that retrieving all facts filtered to specific fact ansible_user_id works
assert:
that:
- 'ansible_user_id|default("UNDEF_USER") != "UNDEF_USER"'
- 'ansible_interfaces|default("UNDEF_NET") == "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") == "UNDEF_VIRT"'
- 'ansible_env|default("UNDEF_ENV") == "UNDEF_ENV"'
- 'ansible_pkg_mgr|default("UNDEF_PKG_MGR") == "UNDEF_PKG_MGR"'
- hosts: facthost11
tags: [ 'fact_min' ]
connection: local
gather_facts: no
tasks:
- setup:
filter: "*"
# register: fact_results
- name: Test that retrieving all facts filtered to splat works
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") != "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") != "UNDEF_MOUNT" or ansible_distribution == "MacOSX"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") != "UNDEF_VIRT"'
- hosts: facthost12
tags: [ 'fact_min' ]
connection: local
gather_facts: no
tasks:
- setup:
filter: ""
# register: fact_results
- name: Test that retrieving all facts filtered to empty filter_spec works
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") != "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") != "UNDEF_MOUNT" or ansible_distribution == "MacOSX"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") != "UNDEF_VIRT"'
- hosts: facthost1
tags: [ 'fact_min' ]
connection: local
gather_subset: "!all"
gather_facts: yes
tasks:
- name: Test that only retrieving minimal facts works
assert:
that:
# from the min set, which should still collect
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_env|default("UNDEF_ENV") != "UNDEF_ENV"'
# non min facts that are not collected
- 'ansible_interfaces|default("UNDEF_NET") == "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") == "UNDEF_VIRT"'
- hosts: facthost2
tags: [ 'fact_network' ]
connection: local
gather_subset: ["!all", "!min", "network"]
gather_facts: yes
tasks:
- name: Test that retrieving network facts works
assert:
that:
- 'ansible_user_id|default("UNDEF") == "UNDEF"'
- 'ansible_interfaces|default("UNDEF_NET") != "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF") == "UNDEF"'
- 'ansible_virtualization_role|default("UNDEF") == "UNDEF"'
- hosts: facthost3
tags: [ 'fact_hardware' ]
connection: local
gather_subset: "hardware"
gather_facts: yes
tasks:
- name: Test that retrieving hardware facts works
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") == "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") != "UNDEF_MOUNT" or ansible_distribution == "MacOSX"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") == "UNDEF_VIRT"'
- hosts: facthost4
tags: [ 'fact_virtual' ]
connection: local
gather_subset: "virtual"
gather_facts: yes
tasks:
- name: Test that retrieving virtualization facts works
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") == "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") != "UNDEF_VIRT"'
- hosts: facthost5
tags: [ 'fact_comma_string' ]
connection: local
gather_subset: ["virtual", "network"]
gather_facts: yes
tasks:
- name: Test that retrieving virtualization and network as a string works
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") != "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") != "UNDEF_VIRT"'
- hosts: facthost6
tags: [ 'fact_yaml_list' ]
connection: local
gather_subset:
- virtual
- network
gather_facts: yes
tasks:
- name: Test that retrieving virtualization and network as a YAML list works
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") != "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") != "UNDEF_VIRT"'
- hosts: facthost7
tags: [ 'fact_negation' ]
connection: local
gather_subset: "!hardware"
gather_facts: yes
tasks:
- name: Test that negation of fact subsets works
assert:
that:
# network, not collected since it is not in min
- 'ansible_interfaces|default("UNDEF_NET") == "UNDEF_NET"'
# not collecting virt, should be undef
- 'ansible_virtualization_role|default("UNDEF_VIRT") == "UNDEF_VIRT"'
# mounts/devices are collected by hardware, so should be not collected and undef
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_devices|default("UNDEF_DEVICES") == "UNDEF_DEVICES"'
# from the min set, which should still collect
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_env|default("UNDEF_ENV") != "UNDEF_ENV"'
- hosts: facthost8
tags: [ 'fact_mixed_negation_addition' ]
connection: local
gather_subset: ["!hardware", "network"]
gather_facts: yes
tasks:
- name: Test that negation and additional subsets work together
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") != "UNDEF_NET"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") == "UNDEF_VIRT"'
- hosts: facthost14
tags: [ 'fact_mixed_negation_addition_min' ]
connection: local
gather_subset: ["!all", "!min", "network"]
gather_facts: yes
tasks:
- name: Test that negation and additional subsets work together for min subset
assert:
that:
- 'ansible_user_id|default("UNDEF_MIN") == "UNDEF_MIN"'
- 'ansible_interfaces|default("UNDEF_NET") != "UNDEF_NET"'
- 'ansible_default_ipv4|default("UNDEF_DEFAULT_IPV4") != "UNDEF_DEFAULT_IPV4"'
- 'ansible_all_ipv4_addresses|default("UNDEF_ALL_IPV4") != "UNDEF_ALL_IPV4"'
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_virtualization_role|default("UNDEF_VIRT") == "UNDEF_VIRT"'
- 'ansible_env|default("UNDEF_ENV") == "UNDEF_ENV"'
- hosts: facthost15
tags: [ 'fact_negate_all_min_add_pkg_mgr' ]
connection: local
gather_subset: ["!all", "!min", "pkg_mgr"]
gather_facts: yes
tasks:
- name: Test that negation and additional subsets work together for min subset
assert:
that:
# network, not collected since it is not in min
- 'ansible_interfaces|default("UNDEF_NET") == "UNDEF_NET"'
# not collecting virt, should be undef
- 'ansible_virtualization_role|default("UNDEF_VIRT") == "UNDEF_VIRT"'
# mounts/devices are collected by hardware, so should be not collected and undef
- 'ansible_mounts|default("UNDEF_MOUNT") == "UNDEF_MOUNT"'
- 'ansible_devices|default("UNDEF_DEVICES") == "UNDEF_DEVICES"'
# from the min set, which should not collect
- 'ansible_user_id|default("UNDEF_MIN") == "UNDEF_MIN"'
- 'ansible_env|default("UNDEF_ENV") == "UNDEF_ENV"'
# the pkg_mgr fact we requested explicitly
- 'ansible_pkg_mgr|default("UNDEF_PKG_MGR") != "UNDEF_PKG_MGR"'
- hosts: facthost9
tags: [ 'fact_local']
connection: local
gather_facts: no
tasks:
- name: Create fact directories
become: true
with_items:
- /etc/ansible/facts.d
- /tmp/custom_facts.d
file:
state: directory
path: "{{ item }}"
mode: '0777'
- name: Deploy local facts
with_items:
- path: /etc/ansible/facts.d/testfact.fact
content: '{ "fact_dir": "default" }'
- path: /tmp/custom_facts.d/testfact.fact
content: '{ "fact_dir": "custom" }'
copy:
dest: "{{ item.path }}"
content: "{{ item.content }}"
- hosts: facthost9
tags: [ 'fact_local']
connection: local
gather_facts: yes
tasks:
- name: Test reading facts from default fact_path
assert:
that:
- '"{{ ansible_local.testfact.fact_dir }}" == "default"'
- hosts: facthost9
tags: [ 'fact_local']
connection: local
gather_facts: yes
fact_path: /tmp/custom_facts.d
tasks:
- name: Test reading facts from custom fact_path
assert:
that:
- '"{{ ansible_local.testfact.fact_dir }}" == "custom"'
- hosts: facthost20
tags: [ 'fact_facter_ohai' ]
connection: local
gather_subset:
- facter
- ohai
gather_facts: yes
tasks:
- name: Test that retrieving facter and ohai doesn't fail
assert:
# not much to assert here, aside from not crashing, since test images don't have
# facter/ohai
that:
- 'ansible_user_id|default("UNDEF_MIN") != "UNDEF_MIN"'
- hosts: facthost9
tags: [ 'fact_file_utils' ]
connection: local
gather_facts: false
tasks:
- block:
- name: Ensure get_file_content works when strip=False
file_utils:
test: strip
- assert:
that:
- ansible_facts.get('etc_passwd_newlines', 0) + 1 == ansible_facts.get('etc_passwd_newlines_unstripped', 0)
- name: Make an empty file
file:
path: "{{ output_dir }}/empty_file"
state: touch
- name: Ensure get_file_content gives default when file is empty
file_utils:
test: default
touch_file: "{{ output_dir }}/empty_file"
- assert:
that:
- ansible_facts.get('touch_default') == 'i am a default'
- copy:
dest: "{{ output_dir }}/1charsep"
content: "foo:bar:baz:buzz:"
- copy:
dest: "{{ output_dir }}/2charsep"
content: "foo::bar::baz::buzz::"
- name: Ensure get_file_lines works as expected with specified 1-char line_sep
file_utils:
test: line_sep
line_sep_file: "{{ output_dir }}/1charsep"
line_sep_sep: ":"
- assert:
that:
- ansible_facts.get('line_sep') == ['foo', 'bar', 'baz', 'buzz']
- name: Ensure get_file_lines works as expected with specified 2-char line_sep
file_utils:
test: line_sep
line_sep_file: "{{ output_dir }}/2charsep"
line_sep_sep: "::"
- assert:
that:
- ansible_facts.get('line_sep') == ['foo', 'bar', 'baz', 'buzz', '']
- name: Ensure get_mount_size fails gracefully
file_utils:
test: invalid_mountpoint
- assert:
that:
- ansible_facts['invalid_mountpoint']|length == 0
always:
- name: Remove test files
file:
path: "{{ item }}"
state: absent
with_items:
- "{{ output_dir }}/empty_file"
- "{{ output_dir }}/1charsep"
- "{{ output_dir }}/2charsep"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,916 |
implicit async_status during async does not work with `become: true`
|
### Summary
I'm running a task with both `async:` and `become: true` (`become_method` is `sudo`). The implicit `async_status` always fails to retrieve the status since it cannot access the status file, which is owned by the privileged user.
This is very likely due to #74837, since now `async_status` no longer executes a module on the target, but tries to use the connection plugin to retrieve the status file. Since the connection plugin does not use the become plugin, it is not able to read the status file.
CC @bcoca
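Until this is resolved, one possible (untested) workaround sketch is to fire the task asynchronously and poll the root-owned status file with an explicitly elevated `slurp` task; the paths, retry counts, and the `upd` register name below are illustrative assumptions:
```yaml
- name: Run database update (fire and forget)
  command: sleep 1m
  become: true
  async: 100
  poll: 0
  register: upd

- name: Read the root-owned status file with become (workaround sketch)
  slurp:
    src: /root/.ansible_async/{{ upd.ansible_job_id }}
  become: true
  register: status_file
  # per the async_status logic, the final result no longer contains 'started';
  # a partially written file may need extra error handling
  until: "'started' not in (status_file.content | b64decode | from_json)"
  retries: 60
  delay: 2
```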
### Issue Type
Bug Report
### Component Name
async_status
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
Linux
### Steps to Reproduce
```yaml
- name: Run database update
command: sleep 1m
become_user: root
become: true
async: 10
poll: 20
```
### Expected Results
After one minute, the task completes.
### Actual Results
```console
<x.x.x.x> FETCH /root/.ansible_async/115781608644.27380 TO /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
<x.x.x.x> SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]'
[WARNING]: sftp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> SSH: EXEC scp -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]:/root/.ansible_async/115781608644.27380' /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
[WARNING]: scp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: dybuster_admin_ansible
<x.x.x.x> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 x.x.x.x 'dd if=/root/.ansible_async/115781608644.27380 bs=65536'
<x.x.x.x> (1, b'', b"dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied\n")
<x.x.x.x> Failed to connect to the host via ssh: dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
[WARNING]: piped transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ssh_retry: attempt: 1, caught exception(failed to transfer file to /root/.ansible_async/115781608644.27380 /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953:
dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
) from cmd (/root/.ansible_async/115781608644.27380...), pausing for 0 seconds
```
The `dd` method is then tried again and again, always with `dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied`.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74916
|
https://github.com/ansible/ansible/pull/74931
|
97acb0f470471c9dcf1e357f1672127f146240a8
|
77e936bd509a179cbb168b9fd0b318f0e27295ce
| 2021-06-05T12:29:52Z |
python
| 2021-06-14T20:39:59Z |
changelogs/fragments/async_status.yml
|
minor_changes:
- async_status no longer requires a module for non-Windows targets.
- task_executor - actions raising AnsibleActionFail/Skip now propagate 'results' if given.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,916 |
implicit async_status during async does not work with `become: true`
|
### Summary
I'm running a task with both `async:` and `become: true` (`become_method` is `sudo`). The implicit `async_status` always fails to retrieve the status since it cannot access the status file, which is owned by the privileged user.
This is very likely due to #74837, since now `async_status` no longer executes a module on the target, but tries to use the connection plugin to retrieve the status file. Since the connection plugin does not use the become plugin, it is not able to read the status file.
CC @bcoca
### Issue Type
Bug Report
### Component Name
async_status
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
Linux
### Steps to Reproduce
```yaml
- name: Run database update
command: sleep 1m
become_user: root
become: true
async: 10
poll: 20
```
### Expected Results
After one minute, the task completes.
### Actual Results
```console
<x.x.x.x> FETCH /root/.ansible_async/115781608644.27380 TO /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
<x.x.x.x> SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]'
[WARNING]: sftp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> SSH: EXEC scp -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]:/root/.ansible_async/115781608644.27380' /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
[WARNING]: scp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: dybuster_admin_ansible
<x.x.x.x> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 x.x.x.x 'dd if=/root/.ansible_async/115781608644.27380 bs=65536'
<x.x.x.x> (1, b'', b"dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied\n")
<x.x.x.x> Failed to connect to the host via ssh: dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
[WARNING]: piped transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ssh_retry: attempt: 1, caught exception(failed to transfer file to /root/.ansible_async/115781608644.27380 /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953:
dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
) from cmd (/root/.ansible_async/115781608644.27380...), pausing for 0 seconds
```
The `dd` method is then tried again and again, always with `dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied`.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74916
|
https://github.com/ansible/ansible/pull/74931
|
97acb0f470471c9dcf1e357f1672127f146240a8
|
77e936bd509a179cbb168b9fd0b318f0e27295ce
| 2021-06-05T12:29:52Z |
python
| 2021-06-14T20:39:59Z |
changelogs/fragments/async_status_fixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,916 |
implicit async_status during async does not work with `become: true`
|
### Summary
I'm running a task with both `async:` and `become: true` (`become_method` is `sudo`). The implicit `async_status` always fails to retrieve the status since it cannot access the status file, which is owned by the privileged user.
This is very likely due to #74837, since now `async_status` no longer executes a module on the target, but tries to use the connection plugin to retrieve the status file. Since the connection plugin does not use the become plugin, it is not able to read the status file.
CC @bcoca
### Issue Type
Bug Report
### Component Name
async_status
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
Linux
### Steps to Reproduce
```yaml
- name: Run database update
command: sleep 1m
become_user: root
become: true
async: 10
poll: 20
```
### Expected Results
After one minute, the task completes.
### Actual Results
```console
<x.x.x.x> FETCH /root/.ansible_async/115781608644.27380 TO /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
<x.x.x.x> SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]'
[WARNING]: sftp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> SSH: EXEC scp -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]:/root/.ansible_async/115781608644.27380' /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
[WARNING]: scp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: dybuster_admin_ansible
<x.x.x.x> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 x.x.x.x 'dd if=/root/.ansible_async/115781608644.27380 bs=65536'
<x.x.x.x> (1, b'', b"dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied\n")
<x.x.x.x> Failed to connect to the host via ssh: dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
[WARNING]: piped transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ssh_retry: attempt: 1, caught exception(failed to transfer file to /root/.ansible_async/115781608644.27380 /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953:
dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
) from cmd (/root/.ansible_async/115781608644.27380...), pausing for 0 seconds
```
The `dd` method is then tried again and again, always with `dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied`.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74916
|
https://github.com/ansible/ansible/pull/74931
|
97acb0f470471c9dcf1e357f1672127f146240a8
|
77e936bd509a179cbb168b9fd0b318f0e27295ce
| 2021-06-05T12:29:52Z |
python
| 2021-06-14T20:39:59Z |
lib/ansible/modules/async_status.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>, and others
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: async_status
short_description: Obtain status of asynchronous task
description:
- This module gets the status of an asynchronous task.
- This module is also supported for Windows targets.
version_added: "0.5"
options:
jid:
description:
- Job or task identifier
type: str
required: true
mode:
description:
- If C(status), obtain the status.
- If C(cleanup), clean up the async job cache (by default in C(~/.ansible_async/)) for the specified job I(jid).
type: str
choices: [ cleanup, status ]
default: status
notes:
- This module is also supported for Windows targets.
seealso:
- ref: playbooks_async
description: Detailed information on how to use asynchronous actions and polling.
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
---
- name: Asynchronous yum task
yum:
name: docker-io
state: present
async: 1000
poll: 0
register: yum_sleeper
- name: Wait for asynchronous job to end
async_status:
jid: '{{ yum_sleeper.ansible_job_id }}'
register: job_result
until: job_result.finished
retries: 100
delay: 10
'''
RETURN = r'''
ansible_job_id:
description: The asynchronous job id
returned: success
type: str
sample: '360874038559.4169'
finished:
description: Whether the asynchronous job has finished (C(1)) or not (C(0))
returned: always
type: int
sample: 1
started:
description: Whether the asynchronous job has started (C(1)) or not (C(0))
returned: always
type: int
sample: 1
stdout:
description: Any output returned by async_wrapper
returned: always
type: str
stderr:
description: Any errors returned by async_wrapper
returned: always
type: str
erased:
description: Path to erased job file
returned: when file is erased
type: str
'''
import json
import os
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_native
def main():
module = AnsibleModule(argument_spec=dict(
jid=dict(type='str', required=True),
mode=dict(type='str', default='status', choices=['cleanup', 'status']),
# passed in from the async_status action plugin
_async_dir=dict(type='path', required=True),
))
module.deprecate("The async_status module should not be called directly anymore, use the action plugin instead", version='2.16')
mode = module.params['mode']
jid = module.params['jid']
async_dir = module.params['_async_dir']
# resolve the async job results directory
logdir = os.path.expanduser(async_dir)
log_path = os.path.join(logdir, jid)
if not os.path.exists(log_path):
module.fail_json(msg="could not find job", ansible_job_id=jid, started=1, finished=1)
if mode == 'cleanup':
os.unlink(log_path)
module.exit_json(ansible_job_id=jid, erased=log_path)
# NOT in cleanup mode, assume regular status mode
# no remote kill mode currently exists, but probably should
# consider log_path + ".pid" file and also unlink that above
data = None
try:
with open(log_path) as f:
data = json.loads(f.read())
except Exception:
if not data:
# file not written yet? That means it is running
module.exit_json(results_file=log_path, ansible_job_id=jid, started=1, finished=0)
else:
module.fail_json(ansible_job_id=jid, results_file=log_path,
msg="Could not parse job output: %s" % data, started=1, finished=1)
if 'started' not in data:
data['finished'] = 1
data['ansible_job_id'] = jid
elif 'finished' not in data:
data['finished'] = 0
# Fix error: TypeError: exit_json() keywords must be strings
data = dict([(to_native(k), v) for k, v in iteritems(data)])
module.exit_json(**data)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,916 |
implicit async_status during async does not work with `become: true`
|
### Summary
I'm running a task with both `async:` and `become: true` (`become_method` is `sudo`). The implicit `async_status` always fails to retrieve the status since it cannot access the status file, which is owned by the privileged user.
This is very likely due to #74837, since now `async_status` no longer executes a module on the target, but tries to use the connection plugin to retrieve the status file. Since the connection plugin does not use the become plugin, it is not able to read the status file.
CC @bcoca
### Issue Type
Bug Report
### Component Name
async_status
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
Linux
### Steps to Reproduce
```yaml
- name: Run database update
command: sleep 1m
become_user: root
become: true
async: 10
poll: 20
```
### Expected Results
After one minute, the task completes.
### Actual Results
```console
<x.x.x.x> FETCH /root/.ansible_async/115781608644.27380 TO /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
<x.x.x.x> SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]'
[WARNING]: sftp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> SSH: EXEC scp -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]:/root/.ansible_async/115781608644.27380' /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
[WARNING]: scp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: dybuster_admin_ansible
<x.x.x.x> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 x.x.x.x 'dd if=/root/.ansible_async/115781608644.27380 bs=65536'
<x.x.x.x> (1, b'', b"dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied\n")
<x.x.x.x> Failed to connect to the host via ssh: dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
[WARNING]: piped transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ssh_retry: attempt: 1, caught exception(failed to transfer file to /root/.ansible_async/115781608644.27380 /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953:
dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
) from cmd (/root/.ansible_async/115781608644.27380...), pausing for 0 seconds
```
The `dd` method is then tried again and again, always with `dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied`.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74916
|
https://github.com/ansible/ansible/pull/74931
|
97acb0f470471c9dcf1e357f1672127f146240a8
|
77e936bd509a179cbb168b9fd0b318f0e27295ce
| 2021-06-05T12:29:52Z |
python
| 2021-06-14T20:39:59Z |
lib/ansible/plugins/action/async_status.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import tempfile
import time
from ansible.constants import config
from ansible.errors import AnsibleError, AnsibleActionFail, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.module_utils._text import to_native
from ansible.module_utils.six import iteritems
from ansible.plugins.action import ActionBase
from ansible.utils.vars import merge_hash
class ActionModule(ActionBase):
_VALID_ARGS = frozenset(('jid', 'mode'))
def _get_async_dir(self):
# async directory based on the shell option
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
# for backwards compatibility we need to get the dir from
# ANSIBLE_ASYNC_DIR that is defined in the environment. This is
# deprecated and will be removed in favour of shell options
env_async_dir = [e for e in self._task.environment if "ANSIBLE_ASYNC_DIR" in e]
if len(env_async_dir) > 0:
async_dir = env_async_dir[0]['ANSIBLE_ASYNC_DIR']
msg = "Setting the async dir from the environment keyword " \
"ANSIBLE_ASYNC_DIR is deprecated. Set the async_dir " \
"shell option instead"
self._display.deprecated(msg, "2.12", collection_name='ansible.builtin')
return self._remote_expand_user(async_dir)
def _update_results_with_job_file(self, jid, log_path, results):
# local tempfile to copy job file to, using local tmp which is auto cleaned on exit
fd, tmpfile = tempfile.mkstemp(prefix='_async_%s' % jid, dir=config.get_config_value('DEFAULT_LOCAL_TMP'))
attempts = 0
while True:
try:
self._connection.fetch_file(log_path, tmpfile)
except AnsibleConnectionFailure:
raise
except AnsibleFileNotFound as e:
if attempts > 3:
raise AnsibleActionFail("Could not find job file on remote: %s" % to_native(e), orig_exc=e, result=results)
except AnsibleError as e:
if attempts > 3:
raise AnsibleActionFail("Could not fetch the job file from remote: %s" % to_native(e), orig_exc=e, result=results)
try:
with open(tmpfile) as f:
file_data = f.read()
except (IOError, OSError):
pass
if file_data:
break
elif attempts > 3:
raise AnsibleActionFail("Unable to fetch a usable job file", result=results)
attempts += 1
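            # linear backoff between retries (0.2s, 0.4s, ...) to give async_wrapper
            # time to finish writing the status file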
time.sleep(attempts * 0.2)
try:
data = json.loads(file_data)
except Exception:
results['finished'] = 1
results['failed'] = True
results['msg'] = "Could not parse job output: %s" % to_native(file_data, errors='surrogate_or_strict')
if 'started' not in data:
data['finished'] = 1
data['ansible_job_id'] = jid
results.update(dict([(to_native(k), v) for k, v in iteritems(data)]))
def run(self, tmp=None, task_vars=None):
results = super(ActionModule, self).run(tmp, task_vars)
# initialize response
results['started'] = results['finished'] = 0
results['stdout'] = results['stderr'] = ''
results['stdout_lines'] = results['stderr_lines'] = []
# read params
try:
jid = self._task.args["jid"]
except KeyError:
raise AnsibleActionFail("jid is required", result=results)
mode = self._task.args.get("mode", "status")
results['ansible_job_id'] = jid
async_dir = self._get_async_dir()
log_path = self._connection._shell.join_path(async_dir, jid)
if mode == 'cleanup':
self._remove_tmp_path(log_path, force=True)
results['erased'] = log_path
else:
results['results_file'] = log_path
results['started'] = 1
if getattr(self._connection._shell, '_IS_WINDOWS', False):
# TODO: eventually fix so we can get remote user (%USERPROFILE%) like we get ~/ for posix
module_args = dict(jid=jid, mode=mode, _async_dir=async_dir)
results = merge_hash(results, self._execute_module(module_name='ansible.legacy.async_status', task_vars=task_vars, module_args=module_args))
else:
# fetch remote file and read locally
self._update_results_with_job_file(jid, log_path, results)
return results
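Given the report, one plausible direction — sketched here as an assumption, not necessarily what the linked PR implements — is to fall back to executing the `async_status` module (which runs under become) whenever privilege escalation is in effect, instead of fetching the file over the bare connection:
```python
# hypothetical variant of the branch above (illustrative only)
if getattr(self._connection._shell, '_IS_WINDOWS', False) or self._play_context.become:
    # module execution honours become, so the elevated user can read the file
    module_args = dict(jid=jid, mode=mode, _async_dir=async_dir)
    results = merge_hash(results, self._execute_module(
        module_name='ansible.legacy.async_status',
        task_vars=task_vars, module_args=module_args))
else:
    # unprivileged path: fetch the remote file and read it locally
    self._update_results_with_job_file(jid, log_path, results)
```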
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,916 |
implicit async_status during async does not work with `become: true`
|
### Summary
I'm running a task with both `async:` and `become: true` (`become_method` is `sudo`). The implicit `async_status` always fails to retrieve the status since it cannot access the status file, which is owned by the privileged user.
This is very likely due to #74837, since now `async_status` no longer executes a module on the target, but tries to use the connection plugin to retrieve the status file. Since the connection plugin does not use the become plugin, it is not able to read the status file.
CC @bcoca
### Issue Type
Bug Report
### Component Name
async_status
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
Linux
### Steps to Reproduce
```yaml
- name: Run database update
command: sleep 1m
become_user: root
become: true
async: 10
poll: 20
```
### Expected Results
After one minute, the task completes.
### Actual Results
```console
<x.x.x.x> FETCH /root/.ansible_async/115781608644.27380 TO /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
<x.x.x.x> SSH: EXEC sftp -b - -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]'
[WARNING]: sftp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> SSH: EXEC scp -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 '[x.x.x.x]:/root/.ansible_async/115781608644.27380' /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953
[WARNING]: scp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: dybuster_admin_ansible
<x.x.x.x> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=300s -o StrictHostKeyChecking=no -o 'IdentityFile="/path/to/key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="dybuster_admin_ansible"' -o ConnectTimeout=20 -o ControlPath=/home/me/.ansible/cp/8bc7ae9d88 x.x.x.x 'dd if=/root/.ansible_async/115781608644.27380 bs=65536'
<x.x.x.x> (1, b'', b"dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied\n")
<x.x.x.x> Failed to connect to the host via ssh: dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
[WARNING]: piped transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information
<x.x.x.x> ssh_retry: attempt: 1, caught exception(failed to transfer file to /root/.ansible_async/115781608644.27380 /home/me/.ansible/tmp/ansible-local-389716fh7xxnq9/_async_115781608644.27380c_or_953:
dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied
) from cmd (/root/.ansible_async/115781608644.27380...), pausing for 0 seconds
```
The `dd` method is then tried again and again, always with `dd: failed to open '/root/.ansible_async/115781608644.27380': Permission denied`.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74916
|
https://github.com/ansible/ansible/pull/74931
|
97acb0f470471c9dcf1e357f1672127f146240a8
|
77e936bd509a179cbb168b9fd0b318f0e27295ce
| 2021-06-05T12:29:52Z |
python
| 2021-06-14T20:39:59Z |
test/integration/targets/async/tasks/main.yml
|
# test code for the async keyword
# (c) 2014, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- name: run a 2 second loop
shell: for i in $(seq 1 2); do echo $i ; sleep 1; done;
async: 10
poll: 1
register: async_result
- debug: var=async_result
- name: validate async returns
assert:
that:
- "'ansible_job_id' in async_result"
- "'changed' in async_result"
- "'cmd' in async_result"
- "'delta' in async_result"
- "'end' in async_result"
- "'rc' in async_result"
- "'start' in async_result"
- "'stderr' in async_result"
- "'stdout' in async_result"
- "'stdout_lines' in async_result"
- async_result.rc == 0
- async_result.finished == 1
- async_result is finished
- name: assert temp async directory exists
stat:
path: "~/.ansible_async"
register: dir_st
- assert:
that:
- dir_st.stat.isdir is defined and dir_st.stat.isdir
- name: stat temp async status file
stat:
path: "~/.ansible_async/{{ async_result.ansible_job_id }}"
register: tmp_async_file_st
- name: validate automatic cleanup of temp async status file on completed run
assert:
that:
- not tmp_async_file_st.stat.exists
- name: test async without polling
command: sleep 5
async: 30
poll: 0
register: async_result
- debug: var=async_result
- name: validate async without polling returns
assert:
that:
- "'ansible_job_id' in async_result"
- "'started' in async_result"
- async_result.finished == 0
- async_result is not finished
- name: test skipped task handling
command: /bin/true
async: 15
poll: 0
when: False
# test async "fire and forget, but check later"
- name: 'start a task with "fire-and-forget"'
command: sleep 3
async: 30
poll: 0
register: fnf_task
- name: assert task was successfully started
assert:
that:
- fnf_task.started == 1
- fnf_task is started
- "'ansible_job_id' in fnf_task"
- name: 'check on task started as a "fire-and-forget"'
async_status: jid={{ fnf_task.ansible_job_id }}
register: fnf_result
until: fnf_result is finished
retries: 10
delay: 1
- name: assert task was successfully checked
assert:
that:
- fnf_result.finished
- fnf_result is finished
- name: test graceful module failure
async_test:
fail_mode: graceful
async: 30
poll: 1
register: async_result
ignore_errors: true
- name: assert task failed correctly
assert:
that:
- async_result.ansible_job_id is match('\d+\.\d+')
- async_result.finished == 1
- async_result is finished
- async_result is not changed
- async_result is failed
- async_result.msg == 'failed gracefully'
- name: test exception module failure
async_test:
fail_mode: exception
async: 5
poll: 1
register: async_result
ignore_errors: true
- name: validate response
assert:
that:
- async_result.ansible_job_id is match('\d+\.\d+')
- async_result.finished == 1
- async_result is finished
- async_result.changed == false
- async_result is not changed
- async_result.failed == true
- async_result is failed
- async_result.stderr is search('failing via exception', multiline=True)
- name: test leading junk before JSON
async_test:
fail_mode: leading_junk
async: 5
poll: 1
register: async_result
- name: validate response
assert:
that:
- async_result.ansible_job_id is match('\d+\.\d+')
- async_result.finished == 1
- async_result is finished
- async_result.changed == true
- async_result is changed
- async_result is successful
- name: test trailing junk after JSON
async_test:
fail_mode: trailing_junk
async: 5
poll: 1
register: async_result
- name: validate response
assert:
that:
- async_result.ansible_job_id is match('\d+\.\d+')
- async_result.finished == 1
- async_result is finished
- async_result.changed == true
- async_result is changed
- async_result is successful
- async_result.warnings[0] is search('trailing junk after module output')
- name: test stderr handling
async_test:
fail_mode: stderr
async: 30
poll: 1
register: async_result
ignore_errors: true
- assert:
that:
- async_result.stderr == "printed to stderr\n"
# NOTE: This should report a warning that cannot be tested
- name: test async properties on non-async task
command: sleep 1
register: non_async_result
- name: validate response
assert:
that:
- non_async_result is successful
- non_async_result is changed
- non_async_result is finished
- "'ansible_job_id' not in non_async_result"
- name: set fact of custom tmp dir
set_fact:
custom_async_tmp: ~/.ansible_async_test
- name: ensure custom async tmp dir is absent
file:
path: '{{ custom_async_tmp }}'
state: absent
- block:
- name: run async task with custom dir
command: sleep 1
register: async_custom_dir
async: 5
poll: 1
vars:
ansible_async_dir: '{{ custom_async_tmp }}'
- name: check if the async temp dir is created
stat:
path: '{{ custom_async_tmp }}'
register: async_custom_dir_result
- name: assert run async task with custom dir
assert:
that:
- async_custom_dir is successful
- async_custom_dir is finished
- async_custom_dir_result.stat.exists
- name: remove custom async dir again
file:
path: '{{ custom_async_tmp }}'
state: absent
- name: run async task with custom dir - deprecated format
command: sleep 1
register: async_custom_dir_dep
async: 5
poll: 1
environment:
ANSIBLE_ASYNC_DIR: '{{ custom_async_tmp }}'
- name: check if the async temp dir is created - deprecated format
stat:
path: '{{ custom_async_tmp }}'
register: async_custom_dir_dep_result
- name: assert run async task with custom dir - deprecated format
assert:
that:
- async_custom_dir_dep is successful
- async_custom_dir_dep is finished
- async_custom_dir_dep_result.stat.exists
- name: remove custom async dir after deprecation test
file:
path: '{{ custom_async_tmp }}'
state: absent
- name: run fire and forget async task with custom dir
command: echo moo
register: async_fandf_custom_dir
async: 5
poll: 0
vars:
ansible_async_dir: '{{ custom_async_tmp }}'
- name: fail to get async status with custom dir with defaults
async_status:
jid: '{{ async_fandf_custom_dir.ansible_job_id }}'
register: async_fandf_custom_dir_fail
ignore_errors: yes
- name: get async status with custom dir using newer format
async_status:
jid: '{{ async_fandf_custom_dir.ansible_job_id }}'
register: async_fandf_custom_dir_result
vars:
ansible_async_dir: '{{ custom_async_tmp }}'
- name: get async status with custom dir - deprecated format
async_status:
jid: '{{ async_fandf_custom_dir.ansible_job_id }}'
register: async_fandf_custom_dir_dep_result
environment:
ANSIBLE_ASYNC_DIR: '{{ custom_async_tmp }}'
- name: assert run fire and forget async task with custom dir
assert:
that:
- async_fandf_custom_dir is successful
- async_fandf_custom_dir_fail is failed
- async_fandf_custom_dir_fail.msg.startswith("Could not find job file on remote")
- async_fandf_custom_dir_result is successful
- async_fandf_custom_dir_dep_result is successful
always:
- name: remove custom tmp dir after test
file:
path: '{{ custom_async_tmp }}'
state: absent
- name: Test that async has stdin
command: >
{{ ansible_python_interpreter|default('/usr/bin/python') }} -c 'import os; os.fdopen(os.dup(0), "r")'
async: 1
poll: 1
- name: run async poll callback test playbook
command: ansible-playbook {{ role_path }}/callback_test.yml
delegate_to: localhost
register: callback_output
- assert:
that:
- '"ASYNC POLL on localhost" in callback_output.stdout'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,070 |
Getting 'argument_error' when using interpolation in the playbook
|
### Summary
Trying to run a playbook where we set one of the role parameters using string interpolation causes the argument spec to fail validation.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible 4.1.0
ansible-core 2.11.1
```
### Configuration
```console
$ ansible-config dump --only-changed
ANY_ERRORS_FATAL(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = error
COLLECTIONS_PATHS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/collections']
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/callback_plugins']
DEFAULT_FILTER_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/filter_plugins']
DEFAULT_FORCE_HANDLERS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_INVENTORY_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/inventory_plugins']
DEFAULT_JINJA2_NATIVE(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_LOOKUP_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/lookup_plugins']
DEFAULT_MODULE_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/library', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles/*/library', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles-vendored/*/library']
DEFAULT_PRIVATE_ROLE_VARS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/roles-vendored', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles']
DEFAULT_TIMEOUT(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = 10
DEPRECATION_WARNINGS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DIFF_ALWAYS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
INTERPRETER_PYTHON(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = auto_legacy_silent
RETRY_FILES_ENABLED(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = never
```
### OS / Environment
macOS 11.4
### Steps to Reproduce
Consider this playbook and example role. The role has an option that accepts a list of dictionaries; one of the valid keys is `state`, which we set from a var in the playbook, defaulting to `absent`, but which can be overridden when running the playbook, for example with `-e state=present`:
```yaml
vars:
state: absent
roles:
- role: rolename
my_role_parameters:
- state: "{{ state }}"
```
The role argument spec for this looks like the following:
```
argument_specs:
main:
short_description: Bla bla
author: Ryan Conway
options:
my_role_parameters:
description: bla bla bla
type: list
elements: dict
options:
state:
description: bla bla bla
type: str
required: false
default: present
choices:
- present
- absent
- started
- stopped
```
### Expected Results
When the `state` variable contains one of the value options, the role argument spec validation should pass.
### Actual Results
```console
FAILED! => {"argument_errors": ["value of state must be one of: present, absent, started, stopped, got: {{ state }} found in my_role_parameters"]
```
It looks like the "{{ state }}" interpolation does not get evaluated before the role argument spec is validated, so the value is treated as a string literal? Or have I messed up the role specification?
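For anyone tracing this, here is a minimal standalone sketch (illustration only; the helper name is hypothetical and this is not Ansible source) of why the nested value survives: if only top-level *string* task vars are templated, a list of dicts passes through literally.
```python
def template_top_level_strings(task_vars, templar):
    # Mimics templating that only expands top-level string values.
    args = {}
    for name, value in task_vars.items():
        if isinstance(value, str):
            # a top-level "{{ state }}" would be expanded here
            args[name] = templar.do_template(value)
        else:
            # a list of dicts is returned as-is, so any "{{ state }}"
            # nested inside it reaches argument-spec validation unresolved
            args[name] = value
    return args
```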
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75070
|
https://github.com/ansible/ansible/pull/75073
|
3a8fc2d2be9062d4efb152be8c0c7fb6918fc587
|
ca6123e0ee0707b4cdf74137b5778fd913da8357
| 2021-06-21T14:52:53Z |
python
| 2021-06-22T15:24:02Z |
changelogs/fragments/75073-role-argspec-suboption-variables.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,070 |
Getting 'argument_error' when using interpolation in the playbook
|
### Summary
Trying to run a playbook where we set one of the role parameters using string interpolation causes the argument spec to fail validation.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible 4.1.0
ansible-core 2.11.1
```
### Configuration
```console
$ ansible-config dump --only-changed
ANY_ERRORS_FATAL(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = error
COLLECTIONS_PATHS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/collections']
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/callback_plugins']
DEFAULT_FILTER_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/filter_plugins']
DEFAULT_FORCE_HANDLERS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_INVENTORY_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/inventory_plugins']
DEFAULT_JINJA2_NATIVE(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_LOOKUP_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/lookup_plugins']
DEFAULT_MODULE_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/library', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles/*/library', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles-vendored/*/library']
DEFAULT_PRIVATE_ROLE_VARS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/roles-vendored', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles']
DEFAULT_TIMEOUT(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = 10
DEPRECATION_WARNINGS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DIFF_ALWAYS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
INTERPRETER_PYTHON(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = auto_legacy_silent
RETRY_FILES_ENABLED(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = never
```
### OS / Environment
macOS 11.4
### Steps to Reproduce
Consider this playbook and example role. The role has an option that accepts a list of dictionaries; one of the valid keys is `state`, which we set from a var in the playbook, defaulting to `absent`, but which can be overridden when running the playbook, for example with `-e state=present`:
```yaml
vars:
state: absent
roles:
- role: rolename
my_role_parameters:
- state: "{{ state }}"
```
The role argument spec for this looks like the following:
```
argument_specs:
main:
short_description: Bla bla
author: Ryan Conway
options:
my_role_parameters:
description: bla bla bla
type: list
elements: dict
options:
state:
description: bla bla bla
type: str
required: false
default: present
choices:
- present
- absent
- started
- stopped
```
### Expected Results
When the `state` variable contains one of the value options, the role argument spec validation should pass.
### Actual Results
```console
FAILED! => {"argument_errors": ["value of state must be one of: present, absent, started, stopped, got: {{ state }} found in my_role_parameters"]
```
It looks like the "{{ state }}" interpolation does not get evaluated before the role argument spec is validated, so the value is treated as a string literal? Or have I messed up the role specification?
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75070
|
https://github.com/ansible/ansible/pull/75073
|
3a8fc2d2be9062d4efb152be8c0c7fb6918fc587
|
ca6123e0ee0707b4cdf74137b5778fd913da8357
| 2021-06-21T14:52:53Z |
python
| 2021-06-22T15:24:02Z |
lib/ansible/plugins/action/validate_argument_spec.py
|
# Copyright 2021 Red Hat
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.errors import AnsibleError
from ansible.plugins.action import ActionBase
from ansible.module_utils.six import iteritems, string_types
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.module_utils.errors import AnsibleValidationErrorMultiple
class ActionModule(ActionBase):
''' Validate an arg spec'''
TRANSFERS_FILES = False
def get_args_from_task_vars(self, argument_spec, task_vars):
'''
Get any arguments that may come from `task_vars`.
Expand templated variables so we can validate the actual values.
:param argument_spec: A dict of the argument spec.
:param task_vars: A dict of task variables.
:returns: A dict of values that can be validated against the arg spec.
'''
args = {}
for argument_name, argument_attrs in iteritems(argument_spec):
if argument_name in task_vars:
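# NOTE: only top-level string values are templated below; values nested
# inside lists or dicts (for example [{'state': '{{ state }}'}]) fall
# through to the else branch untouched, so embedded templates reach
# validation unresolved.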
if isinstance(task_vars[argument_name], string_types):
value = self._templar.do_template(task_vars[argument_name])
if value:
args[argument_name] = value
else:
args[argument_name] = task_vars[argument_name]
return args
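# Illustration only (a sketch, not necessarily the shipped fix): assuming
# Templar.template recursively resolves templates inside containers, the
# string-only branch above could be generalized to something like
#     args[argument_name] = self._templar.template(task_vars[argument_name])
# which would expand nested values such as [{'state': '{{ state }}'}]
# before they are validated.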
def run(self, tmp=None, task_vars=None):
'''
Validate an argument specification against a provided set of data.
The `validate_argument_spec` module expects to receive the arguments:
- argument_spec: A dict whose keys are the valid argument names, and
whose values are dicts of the argument attributes (type, etc).
- provided_arguments: A dict whose keys are the argument names, and
whose values are the argument value.
:param tmp: Deprecated. Do not use.
:param task_vars: A dict of task variables.
:return: An action result dict, including a 'argument_errors' key with a
list of validation errors found.
'''
if task_vars is None:
task_vars = dict()
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
# This action can be called from anywhere, so pass in some info about what it is
# validating args for so the error results make some sense
result['validate_args_context'] = self._task.args.get('validate_args_context', {})
if 'argument_spec' not in self._task.args:
raise AnsibleError('"argument_spec" arg is required in args: %s' % self._task.args)
# Get the task var called argument_spec. This will contain the arg spec
# data dict (for the proper entry point for a role).
argument_spec_data = self._task.args.get('argument_spec')
# the values that were passed in and will be checked against argument_spec
provided_arguments = self._task.args.get('provided_arguments', {})
if not isinstance(argument_spec_data, dict):
raise AnsibleError('Incorrect type for argument_spec, expected dict and got %s' % type(argument_spec_data))
if not isinstance(provided_arguments, dict):
raise AnsibleError('Incorrect type for provided_arguments, expected dict and got %s' % type(provided_arguments))
args_from_vars = self.get_args_from_task_vars(argument_spec_data, task_vars)
provided_arguments.update(args_from_vars)
validator = ArgumentSpecValidator(argument_spec_data)
validation_result = validator.validate(provided_arguments)
if validation_result.error_messages:
result['failed'] = True
result['msg'] = 'Validation of arguments failed:\n%s' % '\n'.join(validation_result.error_messages)
result['argument_spec_data'] = argument_spec_data
result['argument_errors'] = validation_result.error_messages
return result
result['changed'] = False
result['msg'] = 'The arg spec validation passed'
return result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,070 |
Getting 'argument_error' when using interpolation in the playbook
|
### Summary
Trying to run a playbook where we set one of the role parameters using string interpolation causes the argument spec to fail validation.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible 4.1.0
ansible-core 2.11.1
```
### Configuration
```console
$ ansible-config dump --only-changed
ANY_ERRORS_FATAL(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = error
COLLECTIONS_PATHS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/collections']
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/callback_plugins']
DEFAULT_FILTER_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/filter_plugins']
DEFAULT_FORCE_HANDLERS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_INVENTORY_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/inventory_plugins']
DEFAULT_JINJA2_NATIVE(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_LOOKUP_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/lookup_plugins']
DEFAULT_MODULE_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/library', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles/*/library', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles-vendored/*/library']
DEFAULT_PRIVATE_ROLE_VARS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/roles-vendored', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles']
DEFAULT_TIMEOUT(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = 10
DEPRECATION_WARNINGS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DIFF_ALWAYS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
INTERPRETER_PYTHON(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = auto_legacy_silent
RETRY_FILES_ENABLED(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = never
```
### OS / Environment
macOS 11.4
### Steps to Reproduce
Consider this playbook and example role. The role has an option that accepts a list of dictionaries; one of the valid keys is `state`, which we set from a var in the playbook, defaulting to `absent`, but which can be overridden when running the playbook, for example with `-e state=present`:
```yaml
vars:
state: absent
roles:
- role: rolename
my_role_parameters:
- state: "{{ state }}"
```
The role argument spec for this looks like the following:
```
argument_specs:
main:
short_description: Bla bla
author: Ryan Conway
options:
my_role_parameters:
description: bla bla bla
type: list
elements: dict
options:
state:
description: bla bla bla
type: str
required: false
default: present
choices:
- present
- absent
- started
- stopped
```
### Expected Results
When the `state` variable contains one of the value options, the role argument spec validation should pass.
### Actual Results
```console
FAILED! => {"argument_errors": ["value of state must be one of: present, absent, started, stopped, got: {{ state }} found in my_role_parameters"]
```
It looks like the "{{ state }}" interpolation does not get evaluated before the role argument spec is validated, so the value is treated as a string literal? Or have I messed up the role specification?
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75070
|
https://github.com/ansible/ansible/pull/75073
|
3a8fc2d2be9062d4efb152be8c0c7fb6918fc587
|
ca6123e0ee0707b4cdf74137b5778fd913da8357
| 2021-06-21T14:52:53Z |
python
| 2021-06-22T15:24:02Z |
test/integration/targets/roles_arg_spec/test.yml
|
---
- hosts: localhost
gather_facts: false
roles:
- { role: a, a_str: "roles" }
vars:
INT_VALUE: 42
tasks:
- name: "Valid simple role usage with include_role"
include_role:
name: a
vars:
a_str: "include_role"
- name: "Valid simple role usage with import_role"
import_role:
name: a
vars:
a_str: "import_role"
- name: "Valid role usage (more args)"
include_role:
name: b
vars:
b_str: "xyz"
b_int: 5
b_bool: true
- name: "Valid simple role usage with include_role of different entry point"
include_role:
name: a
tasks_from: "alternate"
vars:
a_int: 256
- name: "Valid simple role usage with import_role of different entry point"
import_role:
name: a
tasks_from: "alternate"
vars:
a_int: 512
- name: "Valid simple role usage with a templated value"
import_role:
name: a
vars:
a_int: "{{ INT_VALUE }}"
- name: "Call role entry point that is defined, but has no spec data"
import_role:
name: a
tasks_from: "no_spec_entrypoint"
- name: "New play to reset vars: Test include_role fails"
hosts: localhost
gather_facts: false
vars:
expected_returned_spec:
b_bool:
required: true
type: "bool"
b_int:
required: true
type: "int"
b_str:
required: true
type: "str"
tasks:
- block:
- name: "Invalid role usage"
include_role:
name: b
vars:
b_bool: 7
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate failure"
assert:
that:
- ansible_failed_task.name == "Validating arguments against arg spec 'main' - Main entry point for role B."
- ansible_failed_result.argument_errors | length == 2
- "'missing required arguments: b_int, b_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "b"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/b')"
- ansible_failed_result.argument_spec_data == expected_returned_spec
- name: "New play to reset vars: Test import_role fails"
hosts: localhost
gather_facts: false
vars:
expected_returned_spec:
b_bool:
required: true
type: "bool"
b_int:
required: true
type: "int"
b_str:
required: true
type: "str"
tasks:
- block:
- name: "Invalid role usage"
import_role:
name: b
vars:
b_bool: 7
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate failure"
assert:
that:
- ansible_failed_task.name == "Validating arguments against arg spec 'main' - Main entry point for role B."
- ansible_failed_result.argument_errors | length == 2
- "'missing required arguments: b_int, b_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "b"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/b')"
- ansible_failed_result.argument_spec_data == expected_returned_spec
- name: "New play to reset vars: Test nested role including/importing role succeeds"
hosts: localhost
gather_facts: false
vars:
c_int: 1
a_str: "some string"
a_int: 42
tasks:
- name: "Test import_role of role C"
import_role:
name: c
- name: "Test include_role of role C"
include_role:
name: c
- name: "New play to reset vars: Test nested role including/importing role fails"
hosts: localhost
gather_facts: false
vars:
main_expected_returned_spec:
a_str:
required: true
type: "str"
alternate_expected_returned_spec:
a_int:
required: true
type: "int"
tasks:
- block:
- name: "Test import_role of role C (missing a_str)"
import_role:
name: c
vars:
c_int: 100
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate import_role failure"
assert:
that:
# NOTE: a bug here that prevents us from getting ansible_failed_task
- ansible_failed_result.argument_errors | length == 1
- "'missing required arguments: a_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "a"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/a')"
- ansible_failed_result.argument_spec_data == main_expected_returned_spec
- block:
- name: "Test include_role of role C (missing a_int from `alternate` entry point)"
include_role:
name: c
vars:
c_int: 200
a_str: "some string"
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate include_role failure"
assert:
that:
# NOTE: a bug here that prevents us from getting ansible_failed_task
- ansible_failed_result.argument_errors | length == 1
- "'missing required arguments: a_int' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "alternate"
- ansible_failed_result.validate_args_context.name == "a"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/a')"
- ansible_failed_result.argument_spec_data == alternate_expected_returned_spec
- name: "New play to reset vars: Test role with no tasks can fail"
hosts: localhost
gather_facts: false
tasks:
- block:
- name: "Test import_role of role role_with_no_tasks (missing a_str)"
import_role:
name: role_with_no_tasks
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate import_role failure"
assert:
that:
# NOTE: a bug here that prevents us from getting ansible_failed_task
- ansible_failed_result.argument_errors | length == 1
- "'missing required arguments: a_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "role_with_no_tasks"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/role_with_no_tasks')"
- name: "New play to reset vars: Test disabling role validation with rolespec_validate=False"
hosts: localhost
gather_facts: false
tasks:
- block:
- name: "Test import_role of role C (missing a_str), but validation turned off"
import_role:
name: c
rolespec_validate: False
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate import_role failure"
assert:
that:
# We expect the role to actually run, but will fail because an undefined variable was referenced
# and validation wasn't performed up front (thus not returning 'argument_errors').
- "'argument_errors' not in ansible_failed_result"
- "'The task includes an option with an undefined variable.' in ansible_failed_result.msg"
- name: "New play to reset vars: Test collection-based role"
hosts: localhost
gather_facts: false
tasks:
- name: "Valid collection-based role usage"
import_role:
name: "foo.bar.blah"
vars:
blah_str: "some string"
- name: "New play to reset vars: Test collection-based role will fail"
hosts: localhost
gather_facts: false
tasks:
- block:
- name: "Invalid collection-based role usage"
import_role:
name: "foo.bar.blah"
- fail:
msg: "Should not get here"
rescue:
- debug: var=ansible_failed_result
- name: "Validate import_role failure for collection-based role"
assert:
that:
- ansible_failed_result.argument_errors | length == 1
- "'missing required arguments: blah_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "blah"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/collections/ansible_collections/foo/bar/roles/blah')"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,070 |
Getting 'argument_error' when using interpolation in the playbook
|
### Summary
Trying to run a playbook where we set one of the role parameters using string interpolation causes the argument spec to fail validation.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible 4.1.0
ansible-core 2.11.1
```
### Configuration
```console
$ ansible-config dump --only-changed
ANY_ERRORS_FATAL(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = error
COLLECTIONS_PATHS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/collections']
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/callback_plugins']
DEFAULT_FILTER_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/filter_plugins']
DEFAULT_FORCE_HANDLERS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_INVENTORY_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/inventory_plugins']
DEFAULT_JINJA2_NATIVE(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_LOOKUP_PLUGIN_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/lookup_plugins']
DEFAULT_MODULE_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/library', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles/*/library', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles-vendored/*/library']
DEFAULT_PRIVATE_ROLE_VARS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = ['/Users/ryan/Desktop/de-ansible-core-upgrade/roles-vendored', '/Users/ryan/Desktop/de-ansible-core-upgrade/roles']
DEFAULT_TIMEOUT(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = 10
DEPRECATION_WARNINGS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
DIFF_ALWAYS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = True
INTERPRETER_PYTHON(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = auto_legacy_silent
RETRY_FILES_ENABLED(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/ryan/Desktop/de-ansible-core-upgrade/ansible.cfg) = never
```
### OS / Environment
macOS 11.4
### Steps to Reproduce
Consider this playbook and example role. The role has an option that accepts a list of dictionaries; one of the valid keys is `state`, which we set from a var in the playbook, defaulting to `absent`, but which can be overridden when running the playbook, for example with `-e state=present`:
```yaml
vars:
state: absent
roles:
- role: rolename
my_role_parameters:
- state: "{{ state }}"
```
The role argument spec for this looks like the following:
```
argument_specs:
main:
short_description: Bla bla
author: Ryan Conway
options:
my_role_parameters:
description: bla bla bla
type: list
elements: dict
options:
state:
description: bla bla bla
type: str
required: false
default: present
choices:
- present
- absent
- started
- stopped
```
### Expected Results
When the `state` variable contains one of the value options, the role argument spec validation should pass.
### Actual Results
```console
FAILED! => {"argument_errors": ["value of state must be one of: present, absent, started, stopped, got: {{ state }} found in my_role_parameters"]
```
It looks like the "{{ state }}" interpolation does not get evaluated before the role argument spec is validated, so the value is treated as a string literal? Or have I messed up the role specification?
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75070
|
https://github.com/ansible/ansible/pull/75073
|
3a8fc2d2be9062d4efb152be8c0c7fb6918fc587
|
ca6123e0ee0707b4cdf74137b5778fd913da8357
| 2021-06-21T14:52:53Z |
python
| 2021-06-22T15:24:02Z |
test/integration/targets/roles_arg_spec/test_complex_role_fails.yml
|
---
- name: "Running include_role test1"
hosts: localhost
gather_facts: false
vars:
unicode_type_match: "<type 'unicode'>"
string_type_match: "<type 'str'>"
float_type_match: "<type 'float'>"
unicode_class_match: "<class 'unicode'>"
string_class_match: "<class 'str'>"
bytes_class_match: "<class 'bytes'>"
float_class_match: "<class 'float'>"
expected:
test1_1:
argument_errors: [
"argument 'tidy_expected' is of type <class 'ansible.parsing.yaml.objects.AnsibleMapping'> and we were unable to convert to list: <class 'ansible.parsing.yaml.objects.AnsibleMapping'> cannot be converted to a list",
"argument 'bust_some_stuff' is of type <class 'str'> and we were unable to convert to int: <class 'str'> cannot be converted to an int",
"argument 'some_list' is of type <class 'ansible.parsing.yaml.objects.AnsibleMapping'> and we were unable to convert to list: <class 'ansible.parsing.yaml.objects.AnsibleMapping'> cannot be converted to a list",
"argument 'some_dict' is of type <class 'ansible.parsing.yaml.objects.AnsibleSequence'> and we were unable to convert to dict: <class 'ansible.parsing.yaml.objects.AnsibleSequence'> cannot be converted to a dict",
"argument 'some_int' is of type <class 'float'> and we were unable to convert to int: <class 'float'> cannot be converted to an int",
"argument 'some_float' is of type <class 'str'> and we were unable to convert to float: <class 'str'> cannot be converted to a float",
"argument 'some_bytes' is of type <class 'bytes'> and we were unable to convert to bytes: <class 'bytes'> cannot be converted to a Byte value",
"argument 'some_bits' is of type <class 'str'> and we were unable to convert to bits: <class 'str'> cannot be converted to a Bit value",
"value of test1_choices must be one of: this paddle game, the astray, this remote control, the chair, got: My dog",
"value of some_choices must be one of: choice1, choice2, got: choice4",
"argument 'some_second_level' is of type <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> found in 'some_dict_options'. and we were unable to convert to bool: The value 'not-a-bool' is not a valid boolean. ",
"argument 'third_level' is of type <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> found in 'multi_level_option -> second_level'. and we were unable to convert to int: <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> cannot be converted to an int"
]
tasks:
# This test play requires jinja >= 2.7
- name: get the jinja2 version
shell: python -c 'import jinja2; print(jinja2.__version__)'
register: jinja2_version
delegate_to: localhost
changed_when: false
- debug:
msg: "Jinja version: {{ jinja2_version.stdout }}"
- name: include_role test1 since it has a arg_spec.yml
block:
- include_role:
name: test1
vars:
tidy_expected:
some_key: some_value
test1_var1: 37.4
test1_choices: "My dog"
bust_some_stuff: "some_string_that_is_not_an_int"
some_choices: "choice4"
some_str: 37.5
some_list: {'a': false}
some_dict:
- "foo"
- "bar"
some_int: 37.
some_float: "notafloatisit"
some_path: "anything_is_a_valid_path"
some_raw: {"anything_can_be": "a_raw_type"}
# not sure what would be an invalid jsonarg
# some_jsonarg: "not sure what this does yet"
some_json: |
'{[1, 3, 3] 345345|45v<#!}'
some_jsonarg: |
{"foo": [1, 3, 3]}
# not sure we can load binary in safe_load
some_bytes: !!binary |
R0lGODlhDAAMAIQAAP//9/X17unp5WZmZgAAAOfn515eXvPz7Y6OjuDg4J+fn5
OTk6enp56enmlpaWNjY6Ojo4SEhP/++f/++f/++f/++f/++f/++f/++f/++f/+
+f/++f/++f/++f/++f/++SH+Dk1hZGUgd2l0aCBHSU1QACwAAAAADAAMAAAFLC
AgjoEwnuNAFOhpEMTRiggcz4BNJHrv/zCFcLiwMWYNG84BwwEeECcgggoBADs=
some_bits: "foo"
# some_str_nicknames: []
# some_str_akas: {}
some_str_removed_in: "foo"
some_dict_options:
some_second_level: "not-a-bool"
multi_level_option:
second_level:
third_level: "should_be_int"
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: replace py version specific types with generic names so tests work on py2 and py3
set_fact:
# We want to compare if the actual failure messages and the expected failure messages
# are the same. But to compare and do set differences, we have to handle some
# differences between py2/py3.
# The validation failure messages include python type and class reprs, which are
# different between py2 and py3. For ex, "<type 'str'>" vs "<class 'str'>". Plus
# the usual py2/py3 unicode/str/bytes type shenanigans. The 'THE_FLOAT_REPR' is
# because py3 quotes the value in the error while py2 does not, so we just ignore
# the rest of the line.
actual_generic: "{{ ansible_failed_result.argument_errors|
map('replace', unicode_type_match, 'STR')|
map('replace', string_type_match, 'STR')|
map('replace', float_type_match, 'FLOAT')|
map('replace', unicode_class_match, 'STR')|
map('replace', string_class_match, 'STR')|
map('replace', bytes_class_match, 'STR')|
map('replace', float_class_match, 'FLOAT')|
map('regex_replace', '''float:.*$''', 'THE_FLOAT_REPR')|
map('regex_replace', 'Valid booleans include.*$', '')|
list }}"
expected_generic: "{{ expected.test1_1.argument_errors|
map('replace', unicode_type_match, 'STR')|
map('replace', string_type_match, 'STR')|
map('replace', float_type_match, 'FLOAT')|
map('replace', unicode_class_match, 'STR')|
map('replace', string_class_match, 'STR')|
map('replace', bytes_class_match, 'STR')|
map('replace', float_class_match, 'FLOAT')|
map('regex_replace', '''float:.*$''', 'THE_FLOAT_REPR')|
map('regex_replace', 'Valid booleans include.*$', '')|
list }}"
- name: figure out the difference between expected and actual validate_argument_spec failures
set_fact:
actual_not_in_expected: "{{ actual_generic| difference(expected_generic) | sort() }}"
expected_not_in_actual: "{{ expected_generic | difference(actual_generic) | sort() }}"
- name: assert that all actual validate_argument_spec failures were in expected
assert:
that:
- actual_not_in_expected | length == 0
msg: "Actual validate_argument_spec failures that were not expected: {{ actual_not_in_expected }}"
- name: assert that all expected validate_argument_spec failures were in actual
assert:
that:
- expected_not_in_actual | length == 0
msg: "Expected validate_argument_spec failures that were not in actual results: {{ expected_not_in_actual }}"
- name: assert that `validate_args_context` return value has what we expect
assert:
that:
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "test1"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/test1')"
# skip this task if jinja isn't >= 2.7, aka CentOS 6
when:
- jinja2_version.stdout is version('2.7', '>=')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,194 |
Upgrade Sphinx and sphinx_rtd_theme
|
### Summary
Upgrade to at least Sphinx 2.2.2 (to avoid issues with the docs build on macOS) and the latest sphinx_rtd_theme.
### Issue Type
Documentation Report
### Component Name
docs/docsite/requirements.txt and docs/docsite/_themes/sphinx_rtd_theme
### Ansible Version
```console
2.12
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
Once https://github.com/readthedocs/sphinx_rtd_theme/issues/1115 is fixed, we can unpin the version of docutils.
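For illustration only (a hypothetical future state, not a change that has landed), the pin in `docs/docsite/requirements.txt` would then simply lose its version qualifier:
```
docutils  # unpinned once sphinx_rtd_theme is compatible with docutils >= 0.17
```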
### Code of Conduct
I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74194
|
https://github.com/ansible/ansible/pull/74956
|
50b6d28ee168fd8a7cdc80e11aadef745fe6711b
|
58f26388be7fc20aee8a3f43863c7832eea21fb6
| 2021-04-08T18:32:39Z |
python
| 2021-06-22T18:58:54Z |
docs/docsite/known_good_reqs.txt
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,194 |
Upgrade Sphinx and sphinx_rtd_theme
|
### Summary
Upgrade to at least Sphinx 2.2.2 (to avoid issues with the docs build on macOS) and the latest sphinx_rtd_theme.
### Issue Type
Documentation Report
### Component Name
docs/docsite/requirements.txt and docs/docsite/_themes/sphinx_rtd_theme
### Ansible Version
```console
2.12
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
Once https://github.com/readthedocs/sphinx_rtd_theme/issues/1115 is fixed, we can unpin the version of docutils.
### Code of Conduct
I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74194
|
https://github.com/ansible/ansible/pull/74956
|
50b6d28ee168fd8a7cdc80e11aadef745fe6711b
|
58f26388be7fc20aee8a3f43863c7832eea21fb6
| 2021-04-08T18:32:39Z |
python
| 2021-06-22T18:58:54Z |
docs/docsite/requirements.txt
|
#pip packages required to build docsite
jinja2
PyYAML
rstcheck
sphinx==2.1.2
sphinx-notfound-page >= 0.6
sphinx-intl
sphinx_ansible_theme === 0.6.0
resolvelib
Pygments >= 2.4.0
straight.plugin # Needed for hacking/build-ansible.py which is the backend build script
antsibull >= 0.25.0
docutils==0.16 # pin for now until sphinx_rtd_theme is compatible with 0.17 or later
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,194 |
Upgrade Sphinx and sphinx_rtd_theme
|
### Summary
Upgrade to at least Sphinx 2.2.2 (to avoid issues with the docs build on macOS) and the latest sphinx_rtd_theme.
### Issue Type
Documentation Report
### Component Name
docs/docsite/requirements.txt and docs/docsite/_themes/sphinx_rtd_theme
### Ansible Version
```console
2.12
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
Once https://github.com/readthedocs/sphinx_rtd_theme/issues/1115 is fixed, we can unpin the version of docutils.
### Code of Conduct
I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74194
|
https://github.com/ansible/ansible/pull/74956
|
50b6d28ee168fd8a7cdc80e11aadef745fe6711b
|
58f26388be7fc20aee8a3f43863c7832eea21fb6
| 2021-04-08T18:32:39Z |
python
| 2021-06-22T18:58:54Z |
docs/docsite/rst/community/documentation_contributions.rst
|
.. _community_documentation_contributions:
*****************************************
Contributing to the Ansible Documentation
*****************************************
Ansible has a lot of documentation and a small team of writers. Community support helps us keep up with new features, fixes, and changes.
Improving the documentation is an easy way to make your first contribution to the Ansible project. You do not have to be a programmer, since most of our documentation is written in YAML (module documentation) or `reStructuredText <https://docutils.sourceforge.io/rst.html>`_ (rST). Some collection-level documentation is written in a subset of `Markdown <https://github.com/ansible/ansible/issues/68119#issuecomment-596723053>`_. If you are using Ansible, you already use YAML in your playbooks. rST and Markdown are mostly just text. You do not even need git experience, if you use the ``Edit on GitHub`` option.
If you find a typo, a broken example, a missing topic, or any other error or omission on this documentation website, let us know. Here are some ways to support Ansible documentation:
.. contents::
:local:
Editing docs directly on GitHub
===============================
For typos and other quick fixes, you can edit most of the documentation right from the site. Look at the top right corner of this page. That ``Edit on GitHub`` link is available on all the guide pages in the documentation. If you have a GitHub account, you can submit a quick and easy pull request this way.
.. note::
The source files for individual collection plugins exist in their respective repositories. Follow the link to the collection on Galaxy to find where the repository is located and any guidelines on how to contribute to that collection.
To submit a documentation PR from docs.ansible.com with ``Edit on GitHub``:
#. Click on ``Edit on GitHub``.
#. If you don't already have a fork of the ansible repo on your GitHub account, you'll be prompted to create one.
#. Fix the typo, update the example, or make whatever other change you have in mind.
#. Enter a commit message in the first rectangle under the heading ``Propose file change`` at the bottom of the GitHub page. The more specific, the better. For example, "fixes typo in my_module description". You can put more detail in the second rectangle if you like. Leave the ``+label: docsite_pr`` there.
#. Submit the suggested change by clicking on the green "Propose file change" button. GitHub will handle branching and committing for you, and open a page with the heading "Comparing Changes".
#. Click on ``Create pull request`` to open the PR template.
#. Fill out the PR template, including as much detail as appropriate for your change. You can change the title of your PR if you like (by default it's the same as your commit message). In the ``Issue Type`` section, delete all lines except the ``Docs Pull Request`` line.
#. Submit your change by clicking on ``Create pull request`` button.
#. Be patient while Ansibot, our automated script, adds labels, pings the docs maintainers, and kicks off a CI testing run.
#. Keep an eye on your PR - the docs team may ask you for changes.
Reviewing open PRs and issues
=============================
You can also contribute by reviewing open documentation `issues <https://github.com/ansible/ansible/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Adocs>`_ and `PRs <https://github.com/ansible/ansible/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+label%3Adocs>`_. To add a helpful review, please:
- Include a comment - "looks good to me" only helps if we know why.
- For issues, reproduce the problem.
- For PRs, test the change.
Opening a new issue and/or PR
=============================
If the problem you have noticed is too complex to fix with the ``Edit on GitHub`` option, and no open issue or PR already documents the problem, please open an issue and/or a PR on the correct underlying repo - ``ansible/ansible`` for most pages that are not plugin or module documentation. If the documentation page has no ``Edit on GitHub`` option, check if the page is for a module within a collection. If so, follow the link to the collection on Galaxy and select the ``repo`` button in the upper right corner to find the source repository for that collection and module. The Collection README file should contain information on how to contribute to that collection, or report issues.
A great documentation GitHub issue or PR includes:
- a specific title
- a detailed description of the problem (even for a PR - it's hard to evaluate a suggested change unless we know what problem it's meant to solve)
- links to other information (related issues/PRs, external documentation, pages on docs.ansible.com, and so on)
Verifying your documentation PR
================================
If you make multiple changes to the documentation on ``ansible/ansible``, or add more than a line to it, before you open a pull request, please:
#. Check that your text follows our :ref:`style_guide`.
#. Test your changes for rST errors.
#. Build the page, and preferably the entire documentation site, locally.
.. note::
The following sections apply to documentation sourced from the ``ansible/ansible`` repo and do not apply to documentation from an individual collection. See the collection README file for details on how to contribute to that collection.
Setting up your environment to build documentation locally
----------------------------------------------------------
To build documentation locally, ensure you have a working :ref:`development environment <environment_setup>`.
To work with documentation on your local machine, you need Python 3.5 or greater and the
following packages installed:
- ``gcc``
- ``jinja2``
- ``libyaml``
- ``make``
- ``Pygments``
- ``pyparsing``
- ``PyYAML``
- ``rstcheck``
- ``six``
- ``sphinx``
- ``sphinx-notfound-page``
- ``straight.plugin``
These required packages are listed in two :file:`requirements.txt` files to make installation easier:
.. code-block:: bash
pip install --user -r requirements.txt
pip install --user -r docs/docsite/requirements.txt
You can drop ``--user`` if you have set up a virtual environment (venv/virtualenv).
.. note::
On macOS with Xcode, you may need to install ``six`` and ``pyparsing`` with ``--ignore-installed`` to get versions that work with ``sphinx``.
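For example, one possible invocation in that situation (adjust to your environment):
.. code-block:: bash
pip install --user --ignore-installed six pyparsing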
.. note::
After checking out ``ansible/ansible``, make sure the ``docs/docsite/rst`` directory has strict enough permissions. It should only be writable by the owner's account. If your default ``umask`` is not 022, you can use ``chmod go-w docs/docsite/rst`` to set the permissions correctly in your new branch. Optionally, you can set your ``umask`` to 022 to make all newly created files on your system (including those created by ``git clone``) have the correct permissions.
.. _testing_documentation_locally:
Testing the documentation locally
---------------------------------
To test an individual file for rST errors:
.. code-block:: bash
rstcheck changed_file.rst
Building the documentation locally
----------------------------------
Building the documentation is the best way to check for errors and review your changes. Once `rstcheck` runs with no errors, navigate to ``ansible/docs/docsite`` and then build the page(s) you want to review.
.. note::
If building on macOS with Python 3.8 or later, you must use Sphinx >= 2.2.2. See `#6803 <https://github.com/sphinx-doc/sphinx/pull/6879>`_ for details.
Building a single rST page
^^^^^^^^^^^^^^^^^^^^^^^^^^
To build a single rST file with the make utility:
.. code-block:: bash
make htmlsingle rst=path/to/your_file.rst
For example:
.. code-block:: bash
make htmlsingle rst=community/documentation_contributions.rst
This process compiles all the links but provides minimal log output. If you're writing a new page or want more detailed log output, refer to the instructions on :ref:`build_with_sphinx-build`
.. note::
``make htmlsingle`` adds ``rst/`` to the beginning of the path you provide in ``rst=``, so you can't type the filename with autocomplete. Here are the error messages you will see if you get this wrong:
- If you run ``make htmlsingle`` from the ``docs/docsite/rst/`` directory: ``make: *** No rule to make target `htmlsingle'. Stop.``
- If you run ``make htmlsingle`` from the ``docs/docsite/`` directory with the full path to your rST document: ``sphinx-build: error: cannot find files ['rst/rst/community/documentation_contributions.rst']``.
Building all the rST pages
^^^^^^^^^^^^^^^^^^^^^^^^^^
To build all the rST files without any module documentation:
.. code-block:: bash
MODULES=none make webdocs
Building module docs and rST pages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To build documentation for a few modules included in ``ansible/ansible`` plus all the rST files, use a comma-separated list:
.. code-block:: bash
MODULES=one_module,another_module make webdocs
To build all the module documentation plus all the rST files:
.. code-block:: bash
make webdocs
.. _build_with_sphinx-build:
Building rST files with ``sphinx-build``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Advanced users can build one or more rST files with the sphinx utility directly. ``sphinx-build`` returns misleading ``undefined label`` warnings if you only build a single page, because it does not create internal links. However, ``sphinx-build`` returns more extensive syntax feedback, including warnings about indentation errors and ``x-string without end-string`` warnings. This can be useful, especially if you're creating a new page from scratch. To build a page or pages with ``sphinx-build``:
.. code-block:: bash
sphinx-build [options] sourcedir outdir [filenames...]
You can specify filenames, or ``-a`` for all files, or omit both to compile only new/changed files.
For example:
.. code-block:: bash
sphinx-build -b html -c rst/ rst/dev_guide/ _build/html/dev_guide/ rst/dev_guide/developing_modules_documenting.rst
Running the final tests
^^^^^^^^^^^^^^^^^^^^^^^
When you submit a documentation pull request, automated tests are run. Those same tests can be run locally. To do so, navigate to the repository's top directory and run:
.. code-block:: bash
make clean &&
bin/ansible-test sanity --test docs-build &&
bin/ansible-test sanity --test rstcheck
Unfortunately, leftover rST files from previous documentation builds can occasionally confuse these tests. It is therefore safest to run them on a clean copy of the repository, which is the purpose of ``make clean``. If you type these three lines one at a time and manually check the success of each, you do not need the ``&&``.
Joining the documentation working group
=======================================
The Documentation Working Group (DaWGs) meets weekly on Tuesdays on the #ansible-docs channel on the `libera.chat IRC network <https://libera.chat/>`_. For more information, including links to our agenda and a calendar invite, please visit the `working group page in the community repo <https://github.com/ansible/community/wiki/Docs>`_.
.. seealso::
:ref:`More about testing module documentation <testing_module_documentation>`
:ref:`More about documenting modules <module_documenting>`
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,395 |
doc build fails on Python 3.8 + MacOS
|
##### SUMMARY
Documentation doesn't currently build on macOS with Python 3.8.
This is due to https://github.com/sphinx-doc/sphinx/issues/6803, which is not fixed in the Sphinx version used by the Ansible documentation build system (it is pinned to 2.1.2).
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
[docs/docsite/requirements.txt](docs/docsite/requirements.txt)
##### ANSIBLE VERSION
```
ansible 2.9.11
config file = None
configured module search path = ['/Users/waldekm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.11/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
```
##### CONFIGURATION
```
# empty output
```
##### OS / ENVIRONMENT
```
$ uname -rs
Darwin 19.6.0
$ python --version
Python 3.8.5
```
##### ADDITIONAL INFORMATION
Error log
```
$ make webdocs
(cd docs/docsite/; CPUS=8 /Applications/Xcode.app/Contents/Developer/usr/bin/make docs)
../../hacking/build-ansible.py collection-meta --template-file=../templates/collections_galaxy_meta.rst.j2 --output-dir=rst/dev_guide/ ../../lib/ansible/galaxy/data/collections_galaxy_meta.yml
../../hacking/build-ansible.py document-config --template-file=../templates/config.rst.j2 --output-dir=rst/reference_appendices/ ../../lib/ansible/config/base.yml
mkdir -p rst/cli
../../hacking/build-ansible.py generate-man --template-file=../templates/cli_rst.j2 --output-dir=rst/cli/ --output-format rst ../../lib/ansible/cli/*.py
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or
trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
../../hacking/build-ansible.py document-keywords --template-dir=../templates --output-dir=rst/reference_appendices/ ./keyword_desc.yml
if expr "2.11.0.dev0" : '.*[.]dev[0-9]\{1,\}$' &> /dev/null; then \
../../hacking/build-ansible.py docs-build base -o rst ;\
else \
../../hacking/build-ansible.py docs-build full -o rst ;\
fi
ERROR:antsibull:func=write_rst:mod=antsibull.write_docs:nonfatal_errors=['1 validation error for PluginDocSchema\ndoc -> options -> cache -> deprecated -> removed_from_collection\n field required (type=value_error.missing)']:plugin_name=ansible.builtin.script:plugin_type=inventory|ansible.builtin.script did not return correct DOCUMENTATION. An error page will be generated.
ERROR:antsibull:func=write_rst:mod=antsibull.write_docs:nonfatal_errors=['18 validation errors for PluginDocSchema\ndoc -> options -> ca_path -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> ca_path -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> follow_redirects -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> follow_redirects -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> force -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> force -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> force_basic_auth -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> force_basic_auth -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> http_agent -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> http_agent -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> timeout -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> timeout -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> unix_socket -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> unix_socket -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> unredirected_headers -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> unredirected_headers -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> use_gssapi -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> use_gssapi -> ini -> 1 -> section\n field required (type=value_error.missing)']:plugin_name=ansible.builtin.url:plugin_type=lookup|ansible.builtin.url did not return correct DOCUMENTATION. An error page will be generated.
../bin/testing_formatter.sh
../bin/testing_formatter.sh: line 38: ../docsite/rst/dev_guide/testing/sanity/index.rst: No such file or directory
CPUS=8 /Applications/Xcode.app/Contents/Developer/usr/bin/make -f Makefile.sphinx html
sphinx-build -M html "rst" "_build" -j 8 -n -w rst_warnings
Running Sphinx v2.1.2
making output directory... done
loading intersphinx inventory from https://docs.python.org/2/objects.inv...
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from http://jinja.palletsprojects.com/objects.inv...
intersphinx inventory has moved: http://jinja.palletsprojects.com/objects.inv -> https://jinja.palletsprojects.com/en/2.11.x/objects.inv
loading intersphinx inventory from https://docs.ansible.com/ansible/2.10/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.9/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.8/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.7/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.6/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.5/objects.inv...
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 463 source files that are out of date
updating environment: 463 added, 0 changed, 0 removed
reading sources... [ 5%] 404 .. collections/ansible/builtin/config_lookup
Exception occurred:
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'Builder._read_parallel.<locals>.merge'
The full traceback has been saved in /var/folders/f1/svj655pj0mj11j240_yh47840000gn/T/sphinx-err-w96ydjs6.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make[2]: *** [html] Error 2
make[1]: *** [htmldocs] Error 2
make: *** [webdocs] Error 2
```
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/71395
|
https://github.com/ansible/ansible/pull/74956
|
50b6d28ee168fd8a7cdc80e11aadef745fe6711b
|
58f26388be7fc20aee8a3f43863c7832eea21fb6
| 2020-08-21T10:46:47Z |
python
| 2021-06-22T18:58:54Z |
docs/docsite/known_good_reqs.txt
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,395 |
doc build fails on Python 3.8 + MacOS
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Documentation doesn't currently build on macOS with Python 3.8.
This is due to https://github.com/sphinx-doc/sphinx/issues/6803, which is not fixed in the Sphinx version used by the Ansible documentation build system (it is pinned to 2.1.2).
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
[docs/docsite/requirements.txt](docs/docsite/requirements.txt)
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.11
config file = None
configured module search path = ['/Users/waldekm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.11/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
# empty output
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
```
$ uname -rs
Darwin 19.6.0
$ python --version
Python 3.8.5
```
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
Error log
```
$ make webdocs
(cd docs/docsite/; CPUS=8 /Applications/Xcode.app/Contents/Developer/usr/bin/make docs)
../../hacking/build-ansible.py collection-meta --template-file=../templates/collections_galaxy_meta.rst.j2 --output-dir=rst/dev_guide/ ../../lib/ansible/galaxy/data/collections_galaxy_meta.yml
../../hacking/build-ansible.py document-config --template-file=../templates/config.rst.j2 --output-dir=rst/reference_appendices/ ../../lib/ansible/config/base.yml
mkdir -p rst/cli
../../hacking/build-ansible.py generate-man --template-file=../templates/cli_rst.j2 --output-dir=rst/cli/ --output-format rst ../../lib/ansible/cli/*.py
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or
trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
../../hacking/build-ansible.py document-keywords --template-dir=../templates --output-dir=rst/reference_appendices/ ./keyword_desc.yml
if expr "2.11.0.dev0" : '.*[.]dev[0-9]\{1,\}$' &> /dev/null; then \
../../hacking/build-ansible.py docs-build base -o rst ;\
else \
../../hacking/build-ansible.py docs-build full -o rst ;\
fi
ERROR:antsibull:func=write_rst:mod=antsibull.write_docs:nonfatal_errors=['1 validation error for PluginDocSchema\ndoc -> options -> cache -> deprecated -> removed_from_collection\n field required (type=value_error.missing)']:plugin_name=ansible.builtin.script:plugin_type=inventory|ansible.builtin.script did not return correct DOCUMENTATION. An error page will be generated.
ERROR:antsibull:func=write_rst:mod=antsibull.write_docs:nonfatal_errors=['18 validation errors for PluginDocSchema\ndoc -> options -> ca_path -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> ca_path -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> follow_redirects -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> follow_redirects -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> force -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> force -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> force_basic_auth -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> force_basic_auth -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> http_agent -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> http_agent -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> timeout -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> timeout -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> unix_socket -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> unix_socket -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> unredirected_headers -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> unredirected_headers -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> use_gssapi -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> use_gssapi -> ini -> 1 -> section\n field required (type=value_error.missing)']:plugin_name=ansible.builtin.url:plugin_type=lookup|ansible.builtin.url did not return correct DOCUMENTATION. An error page will be generated.
../bin/testing_formatter.sh
../bin/testing_formatter.sh: line 38: ../docsite/rst/dev_guide/testing/sanity/index.rst: No such file or directory
CPUS=8 /Applications/Xcode.app/Contents/Developer/usr/bin/make -f Makefile.sphinx html
sphinx-build -M html "rst" "_build" -j 8 -n -w rst_warnings
Running Sphinx v2.1.2
making output directory... done
loading intersphinx inventory from https://docs.python.org/2/objects.inv...
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from http://jinja.palletsprojects.com/objects.inv...
intersphinx inventory has moved: http://jinja.palletsprojects.com/objects.inv -> https://jinja.palletsprojects.com/en/2.11.x/objects.inv
loading intersphinx inventory from https://docs.ansible.com/ansible/2.10/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.9/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.8/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.7/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.6/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.5/objects.inv...
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 463 source files that are out of date
updating environment: 463 added, 0 changed, 0 removed
reading sources... [ 5%] 404 .. collections/ansible/builtin/config_lookup
Exception occurred:
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'Builder._read_parallel.<locals>.merge'
The full traceback has been saved in /var/folders/f1/svj655pj0mj11j240_yh47840000gn/T/sphinx-err-w96ydjs6.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make[2]: *** [html] Error 2
make[1]: *** [htmldocs] Error 2
make: *** [webdocs] Error 2
```
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/71395
|
https://github.com/ansible/ansible/pull/74956
|
50b6d28ee168fd8a7cdc80e11aadef745fe6711b
|
58f26388be7fc20aee8a3f43863c7832eea21fb6
| 2020-08-21T10:46:47Z |
python
| 2021-06-22T18:58:54Z |
docs/docsite/requirements.txt
|
#pip packages required to build docsite
jinja2
PyYAML
rstcheck
sphinx==2.1.2
sphinx-notfound-page >= 0.6
sphinx-intl
sphinx_ansible_theme === 0.6.0
resolvelib
Pygments >= 2.4.0
straight.plugin # Needed for hacking/build-ansible.py which is the backend build script
antsibull >= 0.25.0
docutils==0.16 # pin for now until sphinx_rtd_theme is compatible with 0.17 or later
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,395 |
doc build fails on Python 3.8 + MacOS
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Documentation doesn't currently build on macOS with Python 3.8.
This is due to https://github.com/sphinx-doc/sphinx/issues/6803, which is not fixed in the Sphinx version used by the Ansible documentation build system (it is pinned to 2.1.2).
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
[docs/docsite/requirements.txt](docs/docsite/requirements.txt)
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.11
config file = None
configured module search path = ['/Users/waldekm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.11/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
# empty output
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
```
$ uname -rs
Darwin 19.6.0
$ python --version
Python 3.8.5
```
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
Error log
```
$ make webdocs
(cd docs/docsite/; CPUS=8 /Applications/Xcode.app/Contents/Developer/usr/bin/make docs)
../../hacking/build-ansible.py collection-meta --template-file=../templates/collections_galaxy_meta.rst.j2 --output-dir=rst/dev_guide/ ../../lib/ansible/galaxy/data/collections_galaxy_meta.yml
../../hacking/build-ansible.py document-config --template-file=../templates/config.rst.j2 --output-dir=rst/reference_appendices/ ../../lib/ansible/config/base.yml
mkdir -p rst/cli
../../hacking/build-ansible.py generate-man --template-file=../templates/cli_rst.j2 --output-dir=rst/cli/ --output-format rst ../../lib/ansible/cli/*.py
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or
trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
../../hacking/build-ansible.py document-keywords --template-dir=../templates --output-dir=rst/reference_appendices/ ./keyword_desc.yml
if expr "2.11.0.dev0" : '.*[.]dev[0-9]\{1,\}$' &> /dev/null; then \
../../hacking/build-ansible.py docs-build base -o rst ;\
else \
../../hacking/build-ansible.py docs-build full -o rst ;\
fi
ERROR:antsibull:func=write_rst:mod=antsibull.write_docs:nonfatal_errors=['1 validation error for PluginDocSchema\ndoc -> options -> cache -> deprecated -> removed_from_collection\n field required (type=value_error.missing)']:plugin_name=ansible.builtin.script:plugin_type=inventory|ansible.builtin.script did not return correct DOCUMENTATION. An error page will be generated.
ERROR:antsibull:func=write_rst:mod=antsibull.write_docs:nonfatal_errors=['18 validation errors for PluginDocSchema\ndoc -> options -> ca_path -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> ca_path -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> follow_redirects -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> follow_redirects -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> force -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> force -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> force_basic_auth -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> force_basic_auth -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> http_agent -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> http_agent -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> timeout -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> timeout -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> unix_socket -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> unix_socket -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> unredirected_headers -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> unredirected_headers -> ini -> 1 -> section\n field required (type=value_error.missing)\ndoc -> options -> use_gssapi -> ini -> 0 -> key\n field required (type=value_error.missing)\ndoc -> options -> use_gssapi -> ini -> 1 -> section\n field required (type=value_error.missing)']:plugin_name=ansible.builtin.url:plugin_type=lookup|ansible.builtin.url did not return correct DOCUMENTATION. An error page will be generated.
../bin/testing_formatter.sh
../bin/testing_formatter.sh: line 38: ../docsite/rst/dev_guide/testing/sanity/index.rst: No such file or directory
CPUS=8 /Applications/Xcode.app/Contents/Developer/usr/bin/make -f Makefile.sphinx html
sphinx-build -M html "rst" "_build" -j 8 -n -w rst_warnings
Running Sphinx v2.1.2
making output directory... done
loading intersphinx inventory from https://docs.python.org/2/objects.inv...
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from http://jinja.palletsprojects.com/objects.inv...
intersphinx inventory has moved: http://jinja.palletsprojects.com/objects.inv -> https://jinja.palletsprojects.com/en/2.11.x/objects.inv
loading intersphinx inventory from https://docs.ansible.com/ansible/2.10/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.9/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.8/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.7/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.6/objects.inv...
loading intersphinx inventory from https://docs.ansible.com/ansible/2.5/objects.inv...
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 463 source files that are out of date
updating environment: 463 added, 0 changed, 0 removed
reading sources... [ 5%] 404 .. collections/ansible/builtin/config_lookup
Exception occurred:
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'Builder._read_parallel.<locals>.merge'
The full traceback has been saved in /var/folders/f1/svj655pj0mj11j240_yh47840000gn/T/sphinx-err-w96ydjs6.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make[2]: *** [html] Error 2
make[1]: *** [htmldocs] Error 2
make: *** [webdocs] Error 2
```
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/71395
|
https://github.com/ansible/ansible/pull/74956
|
50b6d28ee168fd8a7cdc80e11aadef745fe6711b
|
58f26388be7fc20aee8a3f43863c7832eea21fb6
| 2020-08-21T10:46:47Z |
python
| 2021-06-22T18:58:54Z |
docs/docsite/rst/community/documentation_contributions.rst
|
.. _community_documentation_contributions:
*****************************************
Contributing to the Ansible Documentation
*****************************************
Ansible has a lot of documentation and a small team of writers. Community support helps us keep up with new features, fixes, and changes.
Improving the documentation is an easy way to make your first contribution to the Ansible project. You do not have to be a programmer, since most of our documentation is written in YAML (module documentation) or `reStructuredText <https://docutils.sourceforge.io/rst.html>`_ (rST). Some collection-level documentation is written in a subset of `Markdown <https://github.com/ansible/ansible/issues/68119#issuecomment-596723053>`_. If you are using Ansible, you already use YAML in your playbooks. rST and Markdown are mostly just text. You do not even need git experience, if you use the ``Edit on GitHub`` option.
If you find a typo, a broken example, a missing topic, or any other error or omission on this documentation website, let us know. Here are some ways to support Ansible documentation:
.. contents::
:local:
Editing docs directly on GitHub
===============================
For typos and other quick fixes, you can edit most of the documentation right from the site. Look at the top right corner of this page. That ``Edit on GitHub`` link is available on all the guide pages in the documentation. If you have a GitHub account, you can submit a quick and easy pull request this way.
.. note::
The source files for individual collection plugins exist in their respective repositories. Follow the link to the collection on Galaxy to find where the repository is located and any guidelines on how to contribute to that collection.
To submit a documentation PR from docs.ansible.com with ``Edit on GitHub``:
#. Click on ``Edit on GitHub``.
#. If you don't already have a fork of the ansible repo on your GitHub account, you'll be prompted to create one.
#. Fix the typo, update the example, or make whatever other change you have in mind.
#. Enter a commit message in the first rectangle under the heading ``Propose file change`` at the bottom of the GitHub page. The more specific, the better. For example, "fixes typo in my_module description". You can put more detail in the second rectangle if you like. Leave the ``+label: docsite_pr`` there.
#. Submit the suggested change by clicking on the green "Propose file change" button. GitHub will handle branching and committing for you, and open a page with the heading "Comparing Changes".
#. Click on ``Create pull request`` to open the PR template.
#. Fill out the PR template, including as much detail as appropriate for your change. You can change the title of your PR if you like (by default it's the same as your commit message). In the ``Issue Type`` section, delete all lines except the ``Docs Pull Request`` line.
#. Submit your change by clicking on ``Create pull request`` button.
#. Be patient while Ansibot, our automated script, adds labels, pings the docs maintainers, and kicks off a CI testing run.
#. Keep an eye on your PR - the docs team may ask you for changes.
Reviewing open PRs and issues
=============================
You can also contribute by reviewing open documentation `issues <https://github.com/ansible/ansible/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Adocs>`_ and `PRs <https://github.com/ansible/ansible/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+label%3Adocs>`_. To add a helpful review, please:
- Include a comment - "looks good to me" only helps if we know why.
- For issues, reproduce the problem.
- For PRs, test the change.
Opening a new issue and/or PR
=============================
If the problem you have noticed is too complex to fix with the ``Edit on GitHub`` option, and no open issue or PR already documents the problem, please open an issue and/or a PR on the correct underlying repo - ``ansible/ansible`` for most pages that are not plugin or module documentation. If the documentation page has no ``Edit on GitHub`` option, check if the page is for a module within a collection. If so, follow the link to the collection on Galaxy and select the ``repo`` button in the upper right corner to find the source repository for that collection and module. The Collection README file should contain information on how to contribute to that collection, or report issues.
A great documentation GitHub issue or PR includes:
- a specific title
- a detailed description of the problem (even for a PR - it's hard to evaluate a suggested change unless we know what problem it's meant to solve)
- links to other information (related issues/PRs, external documentation, pages on docs.ansible.com, and so on)
Verifying your documentation PR
================================
If you make multiple changes to the documentation on ``ansible/ansible``, or add more than a line to it, before you open a pull request, please:
#. Check that your text follows our :ref:`style_guide`.
#. Test your changes for rST errors.
#. Build the page, and preferably the entire documentation site, locally.
.. note::
The following sections apply to documentation sourced from the ``ansible/ansible`` repo and do not apply to documentation from an individual collection. See the collection README file for details on how to contribute to that collection.
Setting up your environment to build documentation locally
----------------------------------------------------------
To build documentation locally, ensure you have a working :ref:`development environment <environment_setup>`.
To work with documentation on your local machine, you need Python 3.5 or greater and the
following packages installed:
- ``gcc``
- ``jinja2``
- ``libyaml``
- ``make``
- ``Pygments``
- ``pyparsing``
- ``PyYAML``
- ``rstcheck``
- ``six``
- ``sphinx``
- ``sphinx-notfound-page``
- ``straight.plugin``
These required packages are listed in two :file:`requirements.txt` files to make installation easier:
.. code-block:: bash
pip install --user -r requirements.txt
pip install --user -r docs/docsite/requirements.txt
You can drop ``--user`` if you have set up a virtual environment (venv/virtualenv).
.. note::
On macOS with Xcode, you may need to install ``six`` and ``pyparsing`` with ``--ignore-installed`` to get versions that work with ``sphinx``.
.. note::
After checking out ``ansible/ansible``, make sure the ``docs/docsite/rst`` directory has strict enough permissions. It should only be writable by the owner's account. If your default ``umask`` is not 022, you can use ``chmod go-w docs/docsite/rst`` to set the permissions correctly in your new branch. Optionally, you can set your ``umask`` to 022 to make all newly created files on your system (including those created by ``git clone``) have the correct permissions.
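For example, a minimal sequence to get the permissions right (illustrative; run from the top of your checkout):

.. code-block:: bash

   # make newly created files non-writable by group/other from now on
   umask 022

   # tighten the existing checkout
   chmod go-w docs/docsite/rst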
.. _testing_documentation_locally:
Testing the documentation locally
---------------------------------
To test an individual file for rST errors:
.. code-block:: bash
rstcheck changed_file.rst
Building the documentation locally
----------------------------------
Building the documentation is the best way to check for errors and review your changes. Once ``rstcheck`` runs with no errors, navigate to ``ansible/docs/docsite`` and then build the page(s) you want to review.
.. note::
If building on macOS with Python 3.8 or later, you must use Sphinx >= 2.2.2. See `#6803 <https://github.com/sphinx-doc/sphinx/pull/6879>`_ for details.
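For example, one way to upgrade (an illustrative command; pin the exact version as appropriate for your environment):

.. code-block:: bash

   pip install --user 'sphinx>=2.2.2'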
Building a single rST page
^^^^^^^^^^^^^^^^^^^^^^^^^^
To build a single rST file with the make utility:
.. code-block:: bash
make htmlsingle rst=path/to/your_file.rst
For example:
.. code-block:: bash
make htmlsingle rst=community/documentation_contributions.rst
This process compiles all the links but provides minimal log output. If you're writing a new page or want more detailed log output, refer to the instructions on :ref:`build_with_sphinx-build`.
.. note::
``make htmlsingle`` adds ``rst/`` to the beginning of the path you provide in ``rst=``, so you can't type the filename with autocomplete. Here are the error messages you will see if you get this wrong:
- If you run ``make htmlsingle`` from the ``docs/docsite/rst/`` directory: ``make: *** No rule to make target `htmlsingle'. Stop.``
- If you run ``make htmlsingle`` from the ``docs/docsite/`` directory with the full path to your rST document: ``sphinx-build: error: cannot find files ['rst/rst/community/documentation_contributions.rst']``.
Building all the rST pages
^^^^^^^^^^^^^^^^^^^^^^^^^^
To build all the rST files without any module documentation:
.. code-block:: bash
MODULES=none make webdocs
Building module docs and rST pages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To build documentation for a few modules included in ``ansible/ansible`` plus all the rST files, use a comma-separated list:
.. code-block:: bash
MODULES=one_module,another_module make webdocs
To build all the module documentation plus all the rST files:
.. code-block:: bash
make webdocs
.. _build_with_sphinx-build:
Building rST files with ``sphinx-build``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Advanced users can build one or more rST files with the sphinx utility directly. ``sphinx-build`` returns misleading ``undefined label`` warnings if you only build a single page, because it does not create internal links. However, ``sphinx-build`` returns more extensive syntax feedback, including warnings about indentation errors and ``x-string without end-string`` warnings. This can be useful, especially if you're creating a new page from scratch. To build a page or pages with ``sphinx-build``:
.. code-block:: bash
sphinx-build [options] sourcedir outdir [filenames...]
You can specify filenames, or ``-a`` for all files, or omit both to compile only new/changed files.
For example:
.. code-block:: bash
sphinx-build -b html -c rst/ rst/dev_guide/ _build/html/dev_guide/ rst/dev_guide/developing_modules_documenting.rst
Running the final tests
^^^^^^^^^^^^^^^^^^^^^^^
When you submit a documentation pull request, automated tests are run. Those same tests can be run locally. To do so, navigate to the repository's top directory and run:
.. code-block:: bash
make clean &&
bin/ansible-test sanity --test docs-build &&
bin/ansible-test sanity --test rstcheck
Unfortunately, leftover rST files from previous documentation builds can occasionally confuse these tests. It is therefore safest to run them on a clean copy of the repository, which is the purpose of ``make clean``. If you type these three lines one at a time and manually check the success of each, you do not need the ``&&``.
Joining the documentation working group
=======================================
The Documentation Working Group (DaWGs) meets weekly on Tuesdays on the #ansible-docs channel on the `libera.chat IRC network <https://libera.chat/>`_. For more information, including links to our agenda and a calendar invite, please visit the `working group page in the community repo <https://github.com/ansible/community/wiki/Docs>`_.
.. seealso::
:ref:`More about testing module documentation <testing_module_documentation>`
:ref:`More about documenting modules <module_documenting>`
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,142 |
SSH module takes retry option from task keywords, defaulting to 3
|
### Summary
Starting with ansible-core-2.11.0, the `ssh` module operates with a `retry` setting of `3`.
This has an undesired impact when using a `sudo` password and mistyping it: the module makes 4 connection attempts, attempting sudo in each one.
On hosts that report failed sudo attempts via email, this generates 4 emails per Ansible run per host - a 4x increase over the previous version's behaviour.
I have tracked this to commit 935528e22e5, which changed how the ssh module gets its settings - using `get_option` instead of the ConfigurationManager.
I have worked around this by explicitly setting `retries` on the first task in the playbook - but this is just a workaround.
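For reference, a minimal sketch of that workaround (the host pattern, task, and exact `retries` value are illustrative - the report does not state which value was used):
```yaml
- hosts: target-host
  tasks:
    # pinning retries on the first task keeps the ssh plugin from
    # picking up the implicit default of 3
    - name: first task with retries pinned
      command: id
      retries: 0   # assumed value; adjust as needed
```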
I hope there would still be a way to set retries for the SSH module (or make it not retry on failed privilege escalation).
Cheers,
Vlad
### Issue Type
Bug Report
### Component Name
ssh
### Ansible Version
```console
ansible [core 2.11.2]
config file = /Users/vlad/reannz/code/ansible-playbooks/ansible.cfg
configured module search path = ['/Users/vlad/reannz/code/ansible-playbooks/library']
ansible python module location = /usr/local/Cellar/ansible/4.1.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/vlad/.ansible/collections:/opt/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:29:30) [Clang 11.0.0 (clang-1100.0.33.17)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
DEFAULT_BECOME(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = root
DEFAULT_HOST_LIST(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = ['/Users/vlad/reannz/code/ansible-playbooks/environment/production']
```
### OS / Environment
Controller: OSX Mojave 10.14.6
Target: Ubuntu 20.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```ini
# ansible.cfg
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
```
Run:
```
ansible -u $USER -m command -a 'id' 'target-host' -K
```
### Expected Results
I expect sudo attempted only once.
### Actual Results
```console
Sudo is attempted 4 times (as per `/var/log/auth.log` on target host)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75142
|
https://github.com/ansible/ansible/pull/75155
|
23a84902cb9599fe958a86e7a95520837964726a
|
a8de35e1318cf03b26c7a2a08900d8bec0611a01
| 2021-06-30T04:23:57Z |
python
| 2021-07-06T14:43:25Z |
changelogs/fragments/75142-ssh-retries-collision.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,142 |
SSH module takes retry option from task keywords, defaulting to 3
|
### Summary
Starting with ansible-core-2.11.0, the `ssh` module operates with a `retry` setting of `3`.
This has an undesired impact when using a `sudo` password and mistyping it: the module makes 4 connection attempts, attempting sudo in each one.
On hosts that report failed sudo attempts via email, this generates 4 emails per Ansible run per host - a 4x increase over the previous version's behaviour.
I have tracked this to commit 935528e22e5, which changed how the ssh module gets its settings - using `get_option` instead of the ConfigurationManager.
I have worked around this by explicitly setting `retries` on the first task in the playbook - but this is just a workaround.
I hope there would still be a way to set retries for the SSH module (or make it not retry on failed privilege escalation).
Cheers,
Vlad
### Issue Type
Bug Report
### Component Name
ssh
### Ansible Version
```console
ansible [core 2.11.2]
config file = /Users/vlad/reannz/code/ansible-playbooks/ansible.cfg
configured module search path = ['/Users/vlad/reannz/code/ansible-playbooks/library']
ansible python module location = /usr/local/Cellar/ansible/4.1.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/vlad/.ansible/collections:/opt/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:29:30) [Clang 11.0.0 (clang-1100.0.33.17)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
DEFAULT_BECOME(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = root
DEFAULT_HOST_LIST(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = ['/Users/vlad/reannz/code/ansible-playbooks/environment/production']
```
### OS / Environment
Controller: OSX Mojave 10.14.6
Target: Ubuntu 20.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```ini
# ansible.cfg
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
```
Run:
```
ansible -u $USER -m command -a 'id' 'target-host' -K
```
### Expected Results
I expect sudo attempted only once.
### Actual Results
```console
Sudo is attempted 4 times (as per `/var/log/auth.log` on target host)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75142
|
https://github.com/ansible/ansible/pull/75155
|
23a84902cb9599fe958a86e7a95520837964726a
|
a8de35e1318cf03b26c7a2a08900d8bec0611a01
| 2021-06-30T04:23:57Z |
python
| 2021-07-06T14:43:25Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import iteritems, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
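# magic variables eligible to be reflected back in task results; entries whose
# key is exactly 'become' or '_pass' are filtered out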
RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]
__all__ = ['TaskExecutor']
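# Derives from BaseException rather than Exception so that blanket
# 'except Exception' handlers cannot swallow the timeout raised by task_timeout()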
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
def remove_omit(task_args, omit_token):
'''
Remove args with a value equal to the ``omit_token`` recursively
to align with now having suboptions in the argument_spec
'''
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in iteritems(task_args):
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
res['skipped'] = True
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])):
res['skipped'] = False
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
if res['skipped']:
res['msg'] = 'All items skipped'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
# This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % loop_var)
ran_once = False
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
tr = TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
)
if tr.is_failed() or tr.is_unreachable():
self._final_q.send_callback('v2_runner_item_on_failed', tr)
elif tr.is_skipped():
self._final_q.send_callback('v2_runner_item_on_skipped', tr)
else:
if getattr(self._task, 'diff', False):
self._final_q.send_callback('v2_on_file_diff', tr)
self._final_q.send_callback('v2_runner_item_on_ok', tr)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in iteritems(clear_plugins):
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, variables=variables)
context_validation_error = None
try:
# TODO: remove play_context as this does not take delegation into account, task itself should hold values
# for connection/shell/become/terminal plugin options to finalize.
# Kept for now for backwards compatibility and a few functions that are still exclusive to it.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError as e:
# loop error takes precedence
if self._loop_eval_error is not None:
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
display.v(to_text(e))
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now, unless it is a known issue with delegation
if context_validation_error is not None and not (self._task.delegate_to and isinstance(context_validation_error, AnsibleUndefinedVariable)):
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is an IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action in C._ACTION_INCLUDE_ROLE:
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
try:
self._task.post_validate(templar=templar)
except AnsibleError:
raise
except Exception:
return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc()))
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
orig_vars = templar.available_variables
else:
# just use normal host vars
cvars = orig_vars = variables
templar.available_variables = cvars
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(cvars, templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
plugin_vars = self._set_connection_options(cvars, templar)
templar.available_variables = orig_vars
# TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules
        # special handling for python interpreter for network_os, default to ansible python unless overridden
if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars:
# this also avoids 'python discovery'
cvars['ansible_python_interpreter'] = sys.executable
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(
self._task.action, self._task.args, self._task.module_defaults, templar, self._task._ansible_internal_redirect_list
)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
delay = self._task.delay
if delay < 0:
delay = 1
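        # A quick sketch of how the keywords above map to attempts (illustrative
        # values, assuming an 'until' condition is set):
        #   retries omitted (None) -> 3 attempts
        #   retries: 0 or negative -> 1 attempt
        #   retries: 5             -> 6 attempts (the initial try plus 5 retries)
        # Without 'until' the attempt loop below runs exactly once, and a negative
        # 'delay' has just been normalized to 1 second.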
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in xrange(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=variables)
except (AnsibleActionFail, AnsibleActionSkip) as e:
return e.result
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
if result.get('failed'):
self._final_q.send_callback(
'v2_runner_on_async_failed',
TaskResult(self._host.name,
self._task, # We send the full task here, because the controller knows nothing about it, the TE created it
result,
task_fields=self._task.dump_attrs()))
else:
self._final_q.send_callback(
'v2_runner_on_async_ok',
TaskResult(self._host.name,
self._task, # We send the full task here, because the controller knows nothing about it, the TE created it
result,
task_fields=self._task.dump_attrs()))
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
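        # Hypothetical task-side usage that these helpers evaluate against
        # vars_copy (all names below are illustrative only):
        #   - command: /usr/local/bin/check
        #     register: out
        #     changed_when: "'updated' in out.stdout"
        #     failed_when: out.rc not in [0, 2]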
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
try:
condname = 'changed'
_evaluate_changed_when_result(result)
condname = 'failed'
_evaluate_failed_when_result(result)
except AnsibleError as e:
result['failed'] = True
result['%s_when_result' % condname] = to_text(e)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.send_callback(
'v2_runner_retry',
TaskResult(
self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()
)
)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
        # also now add connection vars results when delegating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = cvars.get(k)
# note: here for callbacks that rely on this info to display delegation
            for needed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'):
                if needed not in result["_ansible_delegated_vars"] and needed in cvars:
                    result["_ansible_delegated_vars"][needed] = cvars.get(needed)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
self._final_q.send_callback(
'v2_runner_on_async_poll',
TaskResult(
self._host.name,
async_task, # We send the full task here, because the controller knows nothing about it, the TE created it
async_result,
task_fields=self._task.dump_attrs(),
),
)
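        # Rough timing sketch for the loop above (illustrative): with 'async: 60'
        # and 'poll: 5', each iteration sleeps 5s and subtracts 5 from time_left,
        # so at most 12 status checks run before the 'did not complete within the
        # requested time' failure below.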
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
# If the async task finished, automatically cleanup the temporary
# status file left behind.
cleanup_task = Task().load(
{
'async_status': {
'jid': async_jid,
'mode': 'cleanup',
},
'environment': self._task.environment,
}
)
cleanup_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=cleanup_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
cleanup_handler.run(task_vars=task_vars)
cleanup_handler.cleanup(force=True)
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, cvars, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
        # use magic var if it exists, if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None:
self._play_context.connection = templar.template(cvars['ansible_connection'])
else:
self._play_context.connection = self._task.connection
# TODO: play context has logic to update the connection for 'smart'
        # (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
connection_name = self._play_context.connection
# load connection
conn_type = connection_name
connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become') is not None:
become = boolean(templar.template(cvars['ansible_become']))
else:
become = self._task.become
if become:
if cvars.get('ansible_become_method'):
become_plugin = self._get_become(templar.template(cvars['ansible_become_method']))
else:
become_plugin = self._get_become(self._task.become_method)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, cvars, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, final_vars, templar):
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
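    # Illustrative effect of the helper above: it collects the documented vars
    # for the connection plugin (plus its sub-plugin's vars, if any), templates
    # the ones present in the host's final vars, and returns them so
    # start_connection() can hand them to the ansible-connection process.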
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
task_keys['timeout'] = self._play_context.timeout
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
return varnames
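    # Rough flow of the method above: connection options are applied first, then
    # the shell plugin's, then - only when a become plugin is active - the become
    # plugin's; the accumulated varnames tell the caller which host vars were
    # (possibly) consumed, e.g. for the delegated-vars bookkeeping in _execute().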
def _get_action_handler(self, connection, templar):
'''
        Returns the correct action plugin to handle the requested task action
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
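    # Illustrative resolution (hypothetical tasks): 'cisco.ios.ios_command', with
    # 'ios' in NETWORK_GROUP_MODULES and no dedicated action plugin, maps to the
    # group action 'cisco.ios.ios'; a plain 'command' task, which ships no action
    # plugin of its own, falls through to 'ansible.legacy.normal'.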
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,142 |
SSH module takes retry option from task keywords, defaulting to 3
|
### Summary
Starting with ansible-core-2.11.0, the `ssh` module operates with a `retry` setting of `3`.
This has an undesired impact when using a `sudo` password and making a typo in the password: the module makes 4 connection attempts, attempting sudo on each one.
On hosts that report failed sudo attempts via email, this generates 4 emails per Ansible run per host - a 4x increase over the previous version's behaviour.
I have tracked this to commit 935528e22e5, which changed how the ssh module gets its settings - using `get_option` instead of the ConfigurationManager.
I have worked around this by explicitly setting `retries` on the first task in the playbook - but this is just a workaround.
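For reference, a configuration-level equivalent of that workaround (a sketch based on the option sources the plugin documents below: `[ssh_connection] retries`, `ANSIBLE_SSH_RETRIES`, or the `ansible_ssh_retries` variable) would be:
```ini
# ansible.cfg - hypothetical workaround: disable ssh connection retries
[ssh_connection]
retries = 0
```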
I hope there is still a way to set retries for the SSH module (or to make it not retry on failed privilege escalation).
Cheers,
Vlad
### Issue Type
Bug Report
### Component Name
ssh
### Ansible Version
```console
ansible [core 2.11.2]
config file = /Users/vlad/reannz/code/ansible-playbooks/ansible.cfg
configured module search path = ['/Users/vlad/reannz/code/ansible-playbooks/library']
ansible python module location = /usr/local/Cellar/ansible/4.1.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/vlad/.ansible/collections:/opt/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:29:30) [Clang 11.0.0 (clang-1100.0.33.17)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
DEFAULT_BECOME(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = root
DEFAULT_HOST_LIST(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = ['/Users/vlad/reannz/code/ansible-playbooks/environment/production']
```
### OS / Environment
Controller: OSX Mojave 10.14.6
Target: Ubuntu 20.04
### Steps to Reproduce
```ini
# ansible.cfg
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
```
Run:
```
ansible -u $USER -m command -a 'id' 'target-host' -K
```
### Expected Results
I expect sudo attempted only once.
### Actual Results
```console
Sudo is attempted 4 times (as per `/var/log/auth.log` on target host)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75142
|
https://github.com/ansible/ansible/pull/75155
|
23a84902cb9599fe958a86e7a95520837964726a
|
a8de35e1318cf03b26c7a2a08900d8bec0611a01
| 2021-06-30T04:23:57Z |
python
| 2021-07-06T14:43:25Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via ssh client binary
description:
- This connection plugin allows ansible to communicate to the target machines via normal ssh command line.
- Ansible does not expose a channel to allow communication between the user and the ssh process to accept
a password manually to decrypt an ssh key when using this connection plugin (which is the default). The
use of ``ssh-agent`` is highly recommended.
author: ansible (@core)
extends_documentation_fragment:
- connection_pipelining
version_added: historical
notes:
- Many options default to 'None' here but that only means we don't override the ssh tool's defaults and/or configuration.
For example, if you specify the port in this plugin it will override any C(Port) entry in your C(.ssh/config).
options:
host:
description: Hostname/ip to connect to.
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_ssh_host
- name: delegated_vars['ansible_host']
- name: delegated_vars['ansible_ssh_host']
host_key_checking:
description: Determines if ssh should check host keys
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description:
- Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
- Defaults to ``Enter PIN for`` when pkcs11_provider is set.
default: ''
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all ssh cli tools
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all ssh CLI tools
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
cli:
- name: ssh_common_args
ssh_executable:
default: ssh
description:
- This defines the location of the ssh binary. It defaults to ``ssh`` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to ``sftp`` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to `scp` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
          description: Extra args exclusive to the ``scp`` CLI
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: scp_extra_args
sftp_extra_args:
          description: Extra args exclusive to the ``sftp`` CLI
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: sftp_extra_args
ssh_extra_args:
          description: Extra args exclusive to the 'ssh' CLI
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: ssh_extra_args
retries:
description: Number of attempts to connect.
default: 3
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
              - If no user is supplied, Ansible will let the ssh client binary choose the user as it normally would.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
cli:
- name: user
pipelining:
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
cli:
- name: private_key_file
option: '--private-key'
control_path:
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null (default), ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: 'yes'
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
ssh_transfer_method:
description:
- "Preferred method to use when transferring files over ssh"
- Setting to 'smart' (default) will try them in order, until one succeeds or they all fail
- Using 'piped' creates an ssh pipe with ``dd`` on either side to copy the data
choices: ['sftp', 'scp', 'piped', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
vars:
- name: ansible_ssh_transfer_method
version_added: '2.12'
scp_if_ssh:
default: smart
description:
- "Preferred method to use when transfering files over ssh"
- When set to smart, Ansible will try them until one succeeds or they all fail
- If set to True, it will force 'scp', if False it will use 'sftp'
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: 'yes'
description: add -tt to ssh commands to force tty allocation
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
timeout:
default: 10
description:
              - This is the default amount of time we will wait while establishing an ssh connection
              - It also controls how long we can wait before reading from the connection once established (select on the socket)
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_SSH_TIMEOUT
version_added: '2.11'
ini:
- key: timeout
section: defaults
- key: timeout
section: ssh_connection
version_added: '2.11'
vars:
- name: ansible_ssh_timeout
version_added: '2.11'
cli:
- name: timeout
type: integer
pkcs11_provider:
version_added: '2.12'
default: ""
description:
- "PKCS11 SmartCard provider such as opensc, example: /usr/local/lib/opensc-pkcs11.so"
- Requires sshpass version 1.06+, sshpass must support the -P option
env: [{name: ANSIBLE_PKCS11_PROVIDER}]
ini:
- {key: pkcs11_provider, section: ssh_connection}
vars:
- name: ansible_ssh_pkcs11_provider
'''
import errno
import fcntl
import hashlib
import os
import pty
import re
import subprocess
import time
from functools import wraps
from ansible import constants as C
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # Php always returns error 255
)
SSHPASS_AVAILABLE = None
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display):
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
        # sshpass return codes are 1-6. We handled 5 above, so this catches the other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
if signature in return_tuple[1]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
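# Summary of the dispatch above (assuming sshpass's documented exit codes):
# 5 -> bad password, raised immediately so retries cannot lock the account;
# 1-4, 6 -> other sshpass errors, retried unless -P is unsupported;
# ssh rc 255 -> connection failure, raised (and retried by _ssh_retry);
# rc 1-254 -> treated as the remote command's return code and only logged.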
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
    Will not retry if:
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(self.get_option('retries')) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = args[0]
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
# TODO: this should come from task
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = args[0]
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
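                    # Backoff sketch from the formula above: attempt 0 -> 0s,
                    # 1 -> 1s, 2 -> 3s, 3 -> 7s, 4 -> 15s, then capped at 30s.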
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
# TODO: all should come from get_option(), but not might be set at this point yet
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = None
self.control_path_dir = None
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
@staticmethod
def _create_control_path(host, port, user, connection=None, pid=None):
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
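    # Illustrative output (hypothetical host): ('web01', 22, 'deploy') hashes to
    # something like '%(directory)s/1f9a8b2c3d' - a 10-character sha1 prefix -
    # and the '%(directory)s' placeholder is filled in later from the
    # control_path_dir setting in _build_command().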
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
        # distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
        but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
            more than once so it must be persistent (i.e. a list is okay but a
            StringIO would not be)
        :arg explanation: A text string explaining why the arguments were
            added. It will be displayed at a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self.host)
b_command += b_args
def _build_command(self, binary, subsystem, *other_args):
'''
        Takes an executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command
wrapped in local ssh shell commands and ready for execution.
:arg binary: actual executable to use to execute command.
:arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names.
        :arg other_args: additional arguments to pass to the ssh binary
'''
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
pkcs11_provider = self.get_option("pkcs11_provider")
if conn_password or pkcs11_provider:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program")
if not conn_password and pkcs11_provider:
raise AnsibleError("to use pkcs11_provider you must specify a password/pin")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if not password_prompt and pkcs11_provider:
                # Set default password prompt for pkcs11_provider to make it clear it's a PIN
password_prompt = 'Enter PIN for '
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# pkcs11 mode allows the use of Smartcards or Yubikey devices
if conn_password and pkcs11_provider:
self._add_args(b_command,
(b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=publickey",
b"-o", b"PasswordAuthentication=no",
b'-o', to_bytes(u'PKCS11Provider=%s' % pkcs11_provider)),
u'Enable pkcs11')
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if subsystem == 'sftp' and self.get_option('sftp_batch_mode'):
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if self._play_context.verbosity > 3:
b_command.append(b'-vvv')
# Next, we add ssh_args
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments that have their own specific settings defined in docs above.
if not self.get_option('host_key_checking'):
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
self.port = self.get_option('port')
if self.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self.get_option('private_key_file')
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
self.user = self.get_option('remote_user')
if self.user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self.user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
timeout = self.get_option('timeout')
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = self.get_option(opt)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"Set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
self.control_path_dir = self.get_option('control_path_dir')
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
self.control_path = self.get_option('control_path')
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b"ControlPath=" + to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
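    # Hypothetical assembled command for a password-based login, in the order
    # built above (the fd number and host are illustrative):
    #   sshpass -d12 ssh -C -o ControlMaster=auto -o ControlPersist=60s
    #       -o Port=22 -o User="deploy" -o ConnectTimeout=10
    #       -o ControlPath=/home/user/.ansible/cp/1f9a8b2c3d web01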
def _send_initial_data(self, fh, in_data, ssh_process):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
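    # Example pass (hypothetical sudo run): a chunk of b'[sudo] password for
    # deploy: ' matches the become prompt, so that line is suppressed from the
    # returned output and self._flags['become_prompt'] is set, telling
    # _bare_run() to write the escalation password next.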
def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
# We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen
display_cmd = u' '.join(shlex_quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self.get_option('timeout')
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
        # TODO: bcoca would like to use SelectSelector() when the number of
        # open filehandles is low; select is faster in that case and we only
        # ever handle 1.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
                            # that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
# We should not see further writes to the stdout/stderr file
# descriptors after the process has closed, set the select
# timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if self.get_option('host_key_checking'):
if cmd[0] == b"sshpass" and p.returncode == 6:
                raise AnsibleError('Using an SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self.get_option('ssh_transfer_method')
scp_if_ssh = self.get_option('scp_if_ssh')
if ssh_transfer_method is None and scp_if_ssh == 'smart':
ssh_transfer_method = 'smart'
if ssh_transfer_method is not None:
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex_quote(in_path), shlex_quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path):
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w{1}:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self.user), host=self.host)
if getattr(self._shell, "_IS_WINDOWS", False):
# Become method 'runas' is done in the wrapper that is executed,
# need to disable sudoable so the bare_run is not waiting for a
# prompt that will not occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable')
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
run_reset = False
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
# only run the reset if the ControlPath already exists or if it isn't configured and ControlPersist is set
# 'check' will determine this.
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'check', self.host)
display.vvv(u'sending connection check: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.vvv(u"No connection to reset: %s" % to_text(stderr))
else:
run_reset = True
if run_reset:
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'stop', self.host)
display.vvv(u'sending connection stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
def close(self):
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,142 |
SSH module takes retry option from task keywords, defaulting to 3
|
### Summary
Starting with ansible-core-2.11.0, the `ssh` module operates with a `retry` setting of `3`.
This has an undesired impact when using a `sudo` password and making a typo in the password: the module makes 4 connection attempts, attempting a sudo in each one.
On hosts that report failed sudo attempts via email, this generates 4 emails per Ansible run per host - a 4x increase over the behaviour of previous versions.
I have tracked this to commit 935528e22e5, which changed how the ssh module gets its settings - using `get_option` instead of the ConfigurationManager.
I have worked around this by explicitly setting `retries` on the first task in the playbook - but this is just a workaround.
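For illustration, a minimal sketch of that per-task workaround (assuming `ansible_ssh_retries` is the variable behind the ssh plugin's `retries` option - verify against your ansible-core version):
```yaml
- name: first task, pinned to a single connection attempt
  command: id
  vars:
    ansible_ssh_retries: 0  # assumed variable name for the ssh plugin's "retries" option
```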
I hope there is still a way to set retries for the SSH module (or to make it not retry on failed privilege escalation).
Cheers,
Vlad
### Issue Type
Bug Report
### Component Name
ssh
### Ansible Version
```console
ansible [core 2.11.2]
config file = /Users/vlad/reannz/code/ansible-playbooks/ansible.cfg
configured module search path = ['/Users/vlad/reannz/code/ansible-playbooks/library']
ansible python module location = /usr/local/Cellar/ansible/4.1.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/vlad/.ansible/collections:/opt/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:29:30) [Clang 11.0.0 (clang-1100.0.33.17)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
DEFAULT_BECOME(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = root
DEFAULT_HOST_LIST(/Users/vlad/reannz/code/ansible-playbooks/ansible.cfg) = ['/Users/vlad/reannz/code/ansible-playbooks/environment/production']
```
### OS / Environment
Controller: OSX Mojave 10.14.6
Target: Ubuntu 20.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```ini
# ansible.cfg
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
```
Run:
```
ansible -u $USER -m command -a 'id' 'target-host' -K
```
### Expected Results
I expect sudo attempted only once.
### Actual Results
```console
Sudo is attempted 4 times (as per `/var/log/auth.log` on target host)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75142
|
https://github.com/ansible/ansible/pull/75155
|
23a84902cb9599fe958a86e7a95520837964726a
|
a8de35e1318cf03b26c7a2a08900d8bec0611a01
| 2021-06-30T04:23:57Z |
python
| 2021-07-06T14:43:25Z |
test/units/plugins/connection/test_ssh.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from io import StringIO
import pytest
from ansible import constants as C
from ansible.errors import AnsibleAuthenticationFailure
from units.compat import unittest
from units.compat.mock import patch, MagicMock, PropertyMock
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.module_utils.compat.selectors import SelectorKey, EVENT_READ
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.connection import ssh
from ansible.plugins.loader import connection_loader, become_loader
class TestConnectionBaseClass(unittest.TestCase):
def test_plugins_connection_ssh_module(self):
play_context = PlayContext()
play_context.prompt = (
'[sudo via ansible, key=ouzmdnewuhucvuaabtjmweasarviygqq] password: '
)
in_stream = StringIO()
self.assertIsInstance(ssh.Connection(play_context, in_stream), ssh.Connection)
def test_plugins_connection_ssh_basic(self):
pc = PlayContext()
new_stdin = StringIO()
conn = ssh.Connection(pc, new_stdin)
# connect just returns self, so assert that
res = conn._connect()
self.assertEqual(conn, res)
ssh.SSHPASS_AVAILABLE = False
self.assertFalse(conn._sshpass_available())
ssh.SSHPASS_AVAILABLE = True
self.assertTrue(conn._sshpass_available())
with patch('subprocess.Popen') as p:
ssh.SSHPASS_AVAILABLE = None
p.return_value = MagicMock()
self.assertTrue(conn._sshpass_available())
ssh.SSHPASS_AVAILABLE = None
p.return_value = None
p.side_effect = OSError()
self.assertFalse(conn._sshpass_available())
conn.close()
self.assertFalse(conn._connected)
def test_plugins_connection_ssh__build_command(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.get_option = MagicMock()
conn.get_option.return_value = ""
conn._build_command('ssh', 'ssh')
def test_plugins_connection_ssh_exec_command(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._build_command.return_value = 'ssh something something'
conn._run = MagicMock()
conn._run.return_value = (0, 'stdout', 'stderr')
conn.get_option = MagicMock()
conn.get_option.return_value = True
res, stdout, stderr = conn.exec_command('ssh')
res, stdout, stderr = conn.exec_command('ssh', 'this is some data')
def test_plugins_connection_ssh__examine_output(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.set_become_plugin(become_loader.get('sudo'))
conn.become.check_password_prompt = MagicMock()
conn.become.check_success = MagicMock()
conn.become.check_incorrect_password = MagicMock()
conn.become.check_missing_password = MagicMock()
def _check_password_prompt(line):
if b'foo' in line:
return True
return False
def _check_become_success(line):
if b'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz' in line:
return True
return False
def _check_incorrect_password(line):
if b'incorrect password' in line:
return True
return False
def _check_missing_password(line):
if b'bad password' in line:
return True
return False
# test examining output for prompt
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = True
# override become plugin
conn.become.prompt = True
conn.become.check_password_prompt = MagicMock(side_effect=_check_password_prompt)
conn.become.check_success = MagicMock(side_effect=_check_become_success)
conn.become.check_incorrect_password = MagicMock(side_effect=_check_incorrect_password)
conn.become.check_missing_password = MagicMock(side_effect=_check_missing_password)
def get_option(option):
if option == 'become_pass':
return 'password'
return None
conn.become.get_option = get_option
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nfoo\nline 3\nthis should be the remainder', False)
self.assertEqual(output, b'line 1\nline 2\nline 3\n')
self.assertEqual(unprocessed, b'this should be the remainder')
self.assertTrue(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for become prompt
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = u'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz'
conn.become.success = u'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz'
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nBECOME-SUCCESS-abcdefghijklmnopqrstuvxyz\nline 3\n', False)
self.assertEqual(output, b'line 1\nline 2\nline 3\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertTrue(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for become failure
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = None
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nincorrect password\n', True)
self.assertEqual(output, b'line 1\nline 2\nincorrect password\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertTrue(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for missing password
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = None
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nbad password\n', True)
self.assertEqual(output, b'line 1\nbad password\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertTrue(conn._flags['become_nopasswd_error'])
@patch('time.sleep')
@patch('os.path.exists')
def test_plugins_connection_ssh_put_file(self, mock_ospe, mock_sleep):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._bare_run = MagicMock()
mock_ospe.return_value = True
conn._build_command.return_value = 'some command to run'
conn._bare_run.return_value = (0, '', '')
conn.host = "some_host"
conn.set_option('retries', 9)
conn.set_option('ssh_transfer_method', None) # unless set to None scp_if_ssh is ignored
# Test with SCP_IF_SSH set to smart
# Test when SFTP works
conn.set_option('scp_if_ssh', 'smart')
expected_in_data = b' '.join((b'put', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# Test when SFTP doesn't work but SCP does
conn._bare_run.side_effect = [(1, 'stdout', 'some errors'), (0, '', '')]
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn._bare_run.side_effect = None
# test with SCP_IF_SSH enabled
conn.set_option('scp_if_ssh', True)
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn.put_file(u'/path/to/in/file/with/unicode-fö〩', u'/path/to/dest/file/with/unicode-fö〩')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
        # test with SCP_IF_SSH disabled
conn.set_option('scp_if_ssh', False)
expected_in_data = b' '.join((b'put', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
expected_in_data = b' '.join((b'put',
to_bytes(shlex_quote('/path/to/in/file/with/unicode-fö〩')),
to_bytes(shlex_quote('/path/to/dest/file/with/unicode-fö〩')))) + b'\n'
conn.put_file(u'/path/to/in/file/with/unicode-fö〩', u'/path/to/dest/file/with/unicode-fö〩')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# test that a non-zero rc raises an error
conn._bare_run.return_value = (1, 'stdout', 'some errors')
self.assertRaises(AnsibleError, conn.put_file, '/path/to/bad/file', '/remote/path/to/file')
# test that a not-found path raises an error
mock_ospe.return_value = False
conn._bare_run.return_value = (0, 'stdout', '')
self.assertRaises(AnsibleFileNotFound, conn.put_file, '/path/to/bad/file', '/remote/path/to/file')
@patch('time.sleep')
def test_plugins_connection_ssh_fetch_file(self, mock_sleep):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._bare_run = MagicMock()
conn._load_name = 'ssh'
conn._build_command.return_value = 'some command to run'
conn._bare_run.return_value = (0, '', '')
conn.host = "some_host"
conn.set_option('retries', 9)
conn.set_option('ssh_transfer_method', None) # unless set to None scp_if_ssh is ignored
# Test with SCP_IF_SSH set to smart
# Test when SFTP works
conn.set_option('scp_if_ssh', 'smart')
expected_in_data = b' '.join((b'get', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.set_options({})
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# Test when SFTP doesn't work but SCP does
conn._bare_run.side_effect = [(1, 'stdout', 'some errors'), (0, '', '')]
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
# test with SCP_IF_SSH enabled
conn._bare_run.side_effect = None
conn.set_option('ssh_transfer_method', None) # unless set to None scp_if_ssh is ignored
conn.set_option('scp_if_ssh', 'True')
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn.fetch_file(u'/path/to/in/file/with/unicode-fö〩', u'/path/to/dest/file/with/unicode-fö〩')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
# test with SCP_IF_SSH disabled
conn.set_option('scp_if_ssh', False)
expected_in_data = b' '.join((b'get', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
expected_in_data = b' '.join((b'get',
to_bytes(shlex_quote('/path/to/in/file/with/unicode-fö〩')),
to_bytes(shlex_quote('/path/to/dest/file/with/unicode-fö〩')))) + b'\n'
conn.fetch_file(u'/path/to/in/file/with/unicode-fö〩', u'/path/to/dest/file/with/unicode-fö〩')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# test that a non-zero rc raises an error
conn._bare_run.return_value = (1, 'stdout', 'some errors')
self.assertRaises(AnsibleError, conn.fetch_file, '/path/to/bad/file', '/remote/path/to/file')
class MockSelector(object):
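    """Minimal stand-in for a selectors.BaseSelector: register/unregister keep a
    count of watched file objects so get_map() is truthy while anything is registered."""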
def __init__(self):
self.files_watched = 0
self.register = MagicMock(side_effect=self._register)
self.unregister = MagicMock(side_effect=self._unregister)
self.close = MagicMock()
self.get_map = MagicMock(side_effect=self._get_map)
self.select = MagicMock()
def _register(self, *args, **kwargs):
self.files_watched += 1
def _unregister(self, *args, **kwargs):
self.files_watched -= 1
def _get_map(self, *args, **kwargs):
return self.files_watched
@pytest.fixture
def mock_run_env(request, mocker):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.set_become_plugin(become_loader.get('sudo'))
conn._send_initial_data = MagicMock()
conn._examine_output = MagicMock()
conn._terminate_process = MagicMock()
conn._load_name = 'ssh'
conn.sshpass_pipe = [MagicMock(), MagicMock()]
request.cls.pc = pc
request.cls.conn = conn
mock_popen_res = MagicMock()
mock_popen_res.poll = MagicMock()
mock_popen_res.wait = MagicMock()
mock_popen_res.stdin = MagicMock()
mock_popen_res.stdin.fileno.return_value = 1000
mock_popen_res.stdout = MagicMock()
mock_popen_res.stdout.fileno.return_value = 1001
mock_popen_res.stderr = MagicMock()
mock_popen_res.stderr.fileno.return_value = 1002
mock_popen_res.returncode = 0
request.cls.mock_popen_res = mock_popen_res
mock_popen = mocker.patch('subprocess.Popen', return_value=mock_popen_res)
request.cls.mock_popen = mock_popen
request.cls.mock_selector = MockSelector()
mocker.patch('ansible.module_utils.compat.selectors.DefaultSelector', lambda: request.cls.mock_selector)
request.cls.mock_openpty = mocker.patch('pty.openpty')
mocker.patch('fcntl.fcntl')
mocker.patch('os.write')
mocker.patch('os.close')
@pytest.mark.usefixtures('mock_run_env')
class TestSSHConnectionRun(object):
# FIXME:
# These tests are little more than a smoketest. Need to enhance them
# a bit to check that they're calling the relevant functions and making
# complete coverage of the code paths
def test_no_escalation(self):
self.mock_popen_res.stdout.read.side_effect = [b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"my_stderr"]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
assert return_code == 0
assert b_stdout == b'my_stdout\nsecond_line'
assert b_stderr == b'my_stderr'
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_with_password(self):
# test with a password set to trigger the sshpass write
self.pc.password = '12345'
self.mock_popen_res.stdout.read.side_effect = [b"some data", b"", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run(["ssh", "is", "a", "cmd"], "this is more data")
assert return_code == 0
assert b_stdout == b'some data'
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is more data'
    def _password_with_prompt_examine_output(self, source, state, b_chunk, sudoable):
if state == 'awaiting_prompt':
self.conn._flags['become_prompt'] = True
elif state == 'awaiting_escalation':
self.conn._flags['become_success'] = True
return (b'', b'')
def test_password_with_prompt(self):
# test with password prompting enabled
self.pc.password = None
self.conn.become.prompt = b'Password:'
self.conn._examine_output.side_effect = self._password_with_prompt_examine_output
self.mock_popen_res.stdout.read.side_effect = [b"Password:", b"Success", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ),
(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
assert return_code == 0
assert b_stdout == b''
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_password_with_become(self):
# test with some become settings
self.pc.prompt = b'Password:'
self.conn.become.prompt = b'Password:'
self.pc.become = True
self.pc.success_key = 'BECOME-SUCCESS-abcdefg'
self.conn.become._id = 'abcdefg'
self.conn._examine_output.side_effect = self._password_with_prompt_examine_output
self.mock_popen_res.stdout.read.side_effect = [b"Password:", b"BECOME-SUCCESS-abcdefg", b"abc"]
self.mock_popen_res.stderr.read.side_effect = [b"123"]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
self.mock_popen_res.stdin.flush.assert_called_once_with()
assert return_code == 0
assert b_stdout == b'abc'
assert b_stderr == b'123'
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
    def test_password_without_data(self):
# simulate no data input but Popen using new pty's fails
self.mock_popen.return_value = None
self.mock_popen.side_effect = [OSError(), self.mock_popen_res]
# simulate no data input
self.mock_openpty.return_value = (98, 99)
self.mock_popen_res.stdout.read.side_effect = [b"some data", b"", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "")
assert return_code == 0
assert b_stdout == b'some data'
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is False
@pytest.mark.usefixtures('mock_run_env')
class TestSSHConnectionRetries(object):
def test_incorrect_password(self, monkeypatch):
self.conn.set_option('host_key_checking', False)
self.conn.set_option('retries', 5)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b'']
self.mock_popen_res.stderr.read.side_effect = [b'Permission denied, please try again.\r\n']
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[5] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = [b'sshpass', b'-d41', b'ssh', b'-C']
exception_info = pytest.raises(AnsibleAuthenticationFailure, self.conn.exec_command, 'sshpass', 'some data')
assert exception_info.value.message == ('Invalid/incorrect username/password. Skipping remaining 5 retries to prevent account lockout: '
'Permission denied, please try again.')
assert self.mock_popen.call_count == 1
def test_retry_then_success(self, monkeypatch):
self.conn.set_option('host_key_checking', False)
self.conn.set_option('retries', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 3 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
return_code, b_stdout, b_stderr = self.conn.exec_command('ssh', 'some data')
assert return_code == 0
assert b_stdout == b'my_stdout\nsecond_line'
assert b_stderr == b'my_stderr'
def test_multiple_failures(self, monkeypatch):
self.conn.set_option('host_key_checking', False)
self.conn.set_option('retries', 9)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b""] * 10
self.mock_popen_res.stderr.read.side_effect = [b""] * 10
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 30)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
] * 10
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
pytest.raises(AnsibleConnectionFailure, self.conn.exec_command, 'ssh', 'some data')
assert self.mock_popen.call_count == 10
    def test_arbitrary_exceptions(self, monkeypatch):
self.conn.set_option('host_key_checking', False)
self.conn.set_option('retries', 9)
monkeypatch.setattr('time.sleep', lambda x: None)
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.mock_popen.side_effect = [Exception('bad')] * 10
pytest.raises(Exception, self.conn.exec_command, 'ssh', 'some data')
assert self.mock_popen.call_count == 10
def test_put_file_retries(self, monkeypatch):
self.conn.set_option('host_key_checking', False)
self.conn.set_option('retries', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
monkeypatch.setattr('ansible.plugins.connection.ssh.os.path.exists', lambda x: True)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 4 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'sftp'
return_code, b_stdout, b_stderr = self.conn.put_file('/path/to/in/file', '/path/to/dest/file')
assert return_code == 0
assert b_stdout == b"my_stdout\nsecond_line"
assert b_stderr == b"my_stderr"
assert self.mock_popen.call_count == 2
def test_fetch_file_retries(self, monkeypatch):
self.conn.set_option('host_key_checking', False)
self.conn.set_option('retries', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
monkeypatch.setattr('ansible.plugins.connection.ssh.os.path.exists', lambda x: True)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 4 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'sftp'
return_code, b_stdout, b_stderr = self.conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
assert return_code == 0
assert b_stdout == b"my_stdout\nsecond_line"
assert b_stderr == b"my_stderr"
assert self.mock_popen.call_count == 2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,185 |
Nightly CI with OSX 10.11 fails
|
### Summary
This affects ansible-core's CI:
* https://dev.azure.com/ansible/ansible/_build/results?buildId=19596&view=results (stable-2.9)
* https://dev.azure.com/ansible/ansible/_build/results?buildId=19594&view=results (stable-2.10)
* https://dev.azure.com/ansible/ansible/_build/results?buildId=19593&view=results (stable-2.11)
...as well as collection CI such as community.crypto and community.general.
The problem is that the latest [packaging](https://pypi.org/project/packaging/) release drops support for Python < 3.6, and the pip version installed in the VM ignores that. Since packaging is installed as part of ansible-test, this needs to be fixed in ansible-test itself.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
stable-2.9, stable-2.10, stable-2.11
```
### Configuration
```console
*
```
### OS / Environment
OSX 10.11
### Steps to Reproduce
*
### Expected Results
*
### Actual Results
```console
*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75185
|
https://github.com/ansible/ansible/pull/75186
|
bd03fa811b61528b242d5ef4e0f0f9dc580d2d6f
|
67bc49e001e6cbd6736e1dbc3d8b07f9bccda2bb
| 2021-07-05T19:10:43Z |
python
| 2021-07-07T13:19:05Z |
changelogs/fragments/75186-ansible-test-packaging-constraint.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,185 |
Nightly CI with OSX 10.11 fails
|
### Summary
This affects ansible-core's CI:
* https://dev.azure.com/ansible/ansible/_build/results?buildId=19596&view=results (stable-2.9)
* https://dev.azure.com/ansible/ansible/_build/results?buildId=19594&view=results (stable-2.10)
* https://dev.azure.com/ansible/ansible/_build/results?buildId=19593&view=results (stable-2.11)
...as well as collection CI such as community.crypto and community.general.
The problem is that the latest [packaging](https://pypi.org/project/packaging/) release drops support for Python < 3.6, and the pip version installed in the VM ignores that. Since packaging is installed as part of ansible-test, this needs to be fixed in ansible-test itself.
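For context, a hedged sketch of what such a pin could look like in ansible-test's constraints file (the exact bound is assumed from packaging's changelog, where 21.0 dropped Python < 3.6 support):
```
packaging < 21.0 ; python_version < '3.6'  # packaging 21.0 and later require Python 3.6 or later
```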
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
stable-2.9, stable-2.10, stable-2.11
```
### Configuration
```console
*
```
### OS / Environment
OSX 10.11
### Steps to Reproduce
*
### Expected Results
*
### Actual Results
```console
*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75185
|
https://github.com/ansible/ansible/pull/75186
|
bd03fa811b61528b242d5ef4e0f0f9dc580d2d6f
|
67bc49e001e6cbd6736e1dbc3d8b07f9bccda2bb
| 2021-07-05T19:10:43Z |
python
| 2021-07-07T13:19:05Z |
test/lib/ansible_test/_data/requirements/constraints.txt
|
resolvelib >= 0.5.3, < 0.6.0 # keep in sync with `requirements.txt`
coverage >= 4.5.1, < 5.0.0 ; python_version < '3.7' # coverage 4.4 required for "disable_warnings" support but 4.5.1 needed for bug fixes, coverage 5.0+ incompatible
coverage >= 4.5.2, < 5.0.0 ; python_version == '3.7' # coverage 4.5.2 fixes bugs in support for python 3.7, coverage 5.0+ incompatible
coverage >= 4.5.4, < 5.0.0 ; python_version > '3.7' # coverage had a bug in < 4.5.4 that would cause unit tests to hang in Python 3.8, coverage 5.0+ incompatible
decorator < 5.0.0 ; python_version < '3.5' # decorator 5.0.5 and later require python 3.5 or later
six < 1.14.0 ; python_version < '2.7' # six 1.14.0 drops support for python 2.6
cryptography < 2.2 ; python_version < '2.7' # cryptography 2.2 drops support for python 2.6
# do not add a cryptography constraint here unless it is for python version incompatibility, see the get_cryptography_requirement function in executor.py for details
deepdiff < 4.0.0 ; python_version < '3' # deepdiff 4.0.0 and later require python 3
jinja2 < 2.11 ; python_version < '2.7' # jinja2 2.11 and later require python 2.7 or later
urllib3 < 1.24 ; python_version < '2.7' # urllib3 1.24 and later require python 2.7 or later
pywinrm >= 0.3.0 # message encryption support
sphinx < 1.6 ; python_version < '2.7' # sphinx 1.6 and later require python 2.7 or later
sphinx <= 2.1.2 ; python_version >= '2.7' # docs team hasn't tested beyond 2.1.2 yet
rstcheck >=3.3.1 # required for sphinx version >= 1.8
pygments >= 2.4.0 # Pygments 2.4.0 includes bugfixes for YAML and YAML+Jinja lexers
wheel < 0.30.0 ; python_version < '2.7' # wheel 0.30.0 and later require python 2.7 or later
ncclient >= 0.5.2 # Need features added in 0.5.2 and greater
idna < 2.6, >= 2.5 # linode requires idna < 2.9, >= 2.5, requests requires idna < 2.6, but cryptography will cause the latest version to be installed instead
paramiko < 2.4.0 ; python_version < '2.7' # paramiko 2.4.0 drops support for python 2.6
pytest < 3.3.0 ; python_version < '2.7' # pytest 3.3.0 drops support for python 2.6
pytest < 5.0.0 ; python_version == '2.7' # pytest 5.0.0 and later will no longer support python 2.7
pytest-forked < 1.0.2 ; python_version < '2.7' # pytest-forked 1.0.2 and later require python 2.7 or later
pytest-forked >= 1.0.2 ; python_version >= '2.7' # pytest-forked before 1.0.2 does not work with pytest 4.2.0+ (which requires python 2.7+)
ntlm-auth >= 1.3.0 # message encryption support using cryptography
requests < 2.20.0 ; python_version < '2.7' # requests 2.20.0 drops support for python 2.6
requests-ntlm >= 1.1.0 # message encryption support
requests-credssp >= 0.1.0 # message encryption support
openshift >= 0.6.2, < 0.9.0 # merge_type support
virtualenv < 16.0.0 ; python_version < '2.7' # virtualenv 16.0.0 and later require python 2.7 or later
pathspec < 0.6.0 ; python_version < '2.7' # pathspec 0.6.0 and later require python 2.7 or later
pyopenssl < 18.0.0 ; python_version < '2.7' # pyOpenSSL 18.0.0 and later require python 2.7 or later
pyparsing < 3.0.0 ; python_version < '3.5' # pyparsing 3 and later require python 3.5 or later
pyyaml < 5.1 ; python_version < '2.7' # pyyaml 5.1 and later require python 2.7 or later
pycparser < 2.19 ; python_version < '2.7' # pycparser 2.19 and later require python 2.7 or later
mock >= 2.0.0 # needed for features backported from Python 3.6 unittest.mock (assert_called, assert_called_once...)
pytest-mock >= 1.4.0 # needed for mock_use_standalone_module pytest option
xmltodict < 0.12.0 ; python_version < '2.7' # xmltodict 0.12.0 and later require python 2.7 or later
lxml < 4.3.0 ; python_version < '2.7' # lxml 4.3.0 and later require python 2.7 or later
pyvmomi < 6.0.0 ; python_version < '2.7' # pyvmomi 6.0.0 and later require python 2.7 or later
pyone == 1.1.9 # newer versions do not pass current integration tests
boto3 < 1.11 ; python_version < '2.7' # boto3 1.11 drops Python 2.6 support
botocore >= 1.10.0, < 1.14 ; python_version < '2.7' # adds support for the following AWS services: secretsmanager, fms, and acm-pca; botocore 1.14 drops Python 2.6 support
botocore >= 1.10.0 ; python_version >= '2.7' # adds support for the following AWS services: secretsmanager, fms, and acm-pca
setuptools < 37 ; python_version == '2.6' # setuptools 37 and later require python 2.7 or later
setuptools < 45 ; python_version == '2.7' # setuptools 45 and later require python 3.5 or later
gssapi < 1.6.0 ; python_version <= '2.7' # gssapi 1.6.0 and later require python 3 or later
pyspnego >= 0.1.6 ; python_version >= '3.10' # bug in older releases breaks on Python 3.10
MarkupSafe < 2.0.0 ; python_version < '3.6' # MarkupSafe >= 2.0.0. requires Python >= 3.6
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,076 |
Localized strings to match for privilege escalation on z/OS not working
|
### Summary
When using localized_prompts with privilege escalation in z/OS the exact prompt is not matched with the current regular expression.
A z/OS prompt does not follow any standard you might expect; it looks like `FSUM5019 Enter the password for IBMUSER: `, therefore needing an `ansible.cfg` entry like:
```
[su_become_plugin]
localized_prompts = FSUM5019 Enter the password for IBMUSER:
```
This issue directly relates to [git issue 73837](https://github.com/ansible/ansible/issues/73837), and implementing the `list` there was only a partial fix. It appears to me the regular expression used in this [line: b_password_string = b_password_string + to_bytes(u' ?(:|:) ?')](https://github.com/ansible/ansible/blob/c404a9003fbfc56785d32e3f6e6ab005d0467927/lib/ansible/plugins/become/su.py#L140) is incorrect in that the space before the final `?` in `' ?(:|:) ?'` should be removed, making it `' ?(:|:)?'` so the colon group itself becomes optional.
With this suggested change `' ?(:|:)?'`, I can run the localized prompt as noted above and other shorter variations.
Link to test the regex:
https://regex101.com/r/LgpEXY/1
https://regex101.com/r/wAHvlr/2
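A minimal, self-contained sketch of the mismatch (the prompt string is the one from the `ansible.cfg` above, and the two suffixes are the current and proposed ones; su.py builds its pattern from several localized prompts, but the suffix behaviour is the same):
```python
import re

prompt = "FSUM5019 Enter the password for IBMUSER:"
b_line = b"FSUM5019 Enter the password for IBMUSER: "

b_current = re.compile((prompt + " ?(:|:) ?").encode())   # current suffix demands a second colon
b_proposed = re.compile((prompt + " ?(:|:)?").encode())   # proposed suffix makes the colon optional

print(bool(b_current.match(b_line)))   # False - the prompt already ends with ':'
print(bool(b_proposed.match(b_line)))  # True
```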
I have confirmed my change works in Ansible `v2.10` and `v2.11`; given you don't have the hardware, I can also tag @liamchent from the previous [git issue 73837](https://github.com/ansible/ansible/issues/73837) to certify this.
### Issue Type
Bug Report
### Component Name
ansible.builtin.su
### Ansible Version
```console
ansible [core 2.11.1]
config file = /Users/ddimatos/git/github/playbooks/ansible.cfg
configured module search path = ['/Users/ddimatos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ddimatos/.venv/3.8.3/venv/lib/python3.8/site-packages/ansible
ansible collection location = /Users/ddimatos/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/ddimatos/.venv/3.8.3/venv/bin/ansible
python version = 3.8.3 (tags/v3.8.3:6f8c8320e9, Jun 21 2021, 12:25:15) [Clang 11.0.0 (clang-1100.0.33.12)]
jinja version = 2.11.2
libyaml = False
```
### Configuration
```console
DEFAULT_BECOME_ASK_PASS(/Users/ddimatos/git/github/playbooks/ansible.cfg) = True
DEFAULT_FORKS(/Users/ddimatos/git/github/playbooks/ansible.cfg) = 25
```
### OS / Environment
z/OS 2.3
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: zvm
collections:
- ibm.ibm_zos_core
gather_facts: no
environment: "{{ environment_vars }}"
tasks:
- name: Who are we locally before going to the z/OS target
command: "echo $USER"
delegate_to: localhost
register: result
- name: Who are we on the controller?
debug:
var: result
- name: Who are we on the z/OS target before privilege escalation
command: whoami
register: result
- name: Who are we on z/OS?
debug:
var: result
- name: Who are we on the z/OS target with privilege escalation
command: whoami
become: yes
become_method: su
become_user: omvsadm
register: result
- name: Who did we run as
debug:
var: result
```
### Expected Results
I expect the second task to run with privilege escalated to user `omvsadm`.
I expect this result:
```console
(venv) ansible-playbook -i inventory zos-priv-escal-who-am-i.yaml
BECOME password:
PLAY [zvm] **************************************************************************************************************************************************************************
TASK [Who are we locally before going to the z/OS target] ***************************************************************************************************************************
changed: [zvm -> localhost]
TASK [Who are we on the controller?] ************************************************************************************************************************************************
ok: [zvm] => {
"result": {
"changed": true,
"cmd": [
"echo",
"$USER"
],
"delta": "0:00:00.005305",
"end": "2021-06-21 16:05:09.311709",
"failed": false,
"rc": 0,
"start": "2021-06-21 16:05:09.306404",
"stderr": "",
"stderr_lines": [],
"stdout": "ddimatos",
"stdout_lines": [
"ddimatos"
]
}
}
TASK [Who are we on the z/OS target before privilege escalation] ********************************************************************************************************************
changed: [zvm]
TASK [Who are we on z/OS?] **********************************************************************************************************************************************************
ok: [zvm] => {
"result": {
"changed": true,
"cmd": [
"whoami"
],
"delta": "0:00:00.172269",
"end": "2021-06-21 23:05:16.019954",
"failed": false,
"rc": 0,
"start": "2021-06-21 23:05:15.847685",
"stderr": "",
"stderr_lines": [],
"stdout": "USRT001",
"stdout_lines": [
"USRT001"
]
}
}
TASK [Who are we on the z/OS target with privilege escalation] **********************************************************************************************************************
changed: [zvm]
TASK [Who did we run as] ************************************************************************************************************************************************************
ok: [zvm] => {
"result": {
"changed": true,
"cmd": [
"whoami"
],
"delta": "0:00:02.928654",
"end": "2021-06-21 23:05:23.135997",
"failed": false,
"rc": 0,
"start": "2021-06-21 23:05:20.207343",
"stderr": "",
"stderr_lines": [],
"stdout": "BPXROOT",
"stdout_lines": [
"BPXROOT"
]
}
}
PLAY RECAP **************************************************************************************************************************************************************************
zvm : ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
I received a timeout error (even after I increased `timeout = 60` in `ansible.cfg`). The error received was `fatal: [zvm]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}`.
BECOME password:
PLAY [zvm] **************************************************************************************************************************************************************************
TASK [Who are we locally before going to the z/OS target] ***************************************************************************************************************************
changed: [zvm]
TASK [Who are we on the controller?] ************************************************************************************************************************************************
ok: [zvm] => {
"result": {
"changed": true,
"cmd": [
"echo",
"$USER"
],
"delta": "0:00:00.005427",
"end": "2021-06-21 16:17:10.206437",
"failed": false,
"rc": 0,
"start": "2021-06-21 16:17:10.201010",
"stderr": "",
"stderr_lines": [],
"stdout": "ddimatos",
"stdout_lines": [
"ddimatos"
]
}
}
TASK [Who are we on the z/OS target before privilege escalation] ********************************************************************************************************************
changed: [zvm]
TASK [Who are we on z/OS?] **********************************************************************************************************************************************************
ok: [zvm] => {
"result": {
"changed": true,
"cmd": [
"whoami"
],
"delta": "0:00:00.121340",
"end": "2021-06-21 23:17:16.366350",
"failed": false,
"rc": 0,
"start": "2021-06-21 23:17:16.245010",
"stderr": "",
"stderr_lines": [],
"stdout": "USRT001",
"stdout_lines": [
"USRT001"
]
}
}
TASK [Who are we on the z/OS target with privilege escalation] **********************************************************************************************************************
fatal: [zvm]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}
PLAY RECAP **************************************************************************************************************************************************************************
zvm : ok=4 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
ddimatos:[ ~/git/github/playbooks ]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75076
|
https://github.com/ansible/ansible/pull/75084
|
8ab418f41b4251665845420c79ae94ab7909a7e7
|
ac151e5ad0e4ca75cee58f178afee911f7654f43
| 2021-06-21T23:18:51Z |
python
| 2021-07-08T19:55:46Z |
lib/ansible/plugins/become/su.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: su
short_description: Substitute User
description:
- This become plugin allows your remote/login user to execute commands as another user via the su utility.
author: ansible (@core)
version_added: "2.8"
options:
become_user:
description: User you 'become' to execute the task
default: root
ini:
- section: privilege_escalation
key: become_user
- section: su_become_plugin
key: user
vars:
- name: ansible_become_user
- name: ansible_su_user
env:
- name: ANSIBLE_BECOME_USER
- name: ANSIBLE_SU_USER
become_exe:
description: Su executable
default: su
ini:
- section: privilege_escalation
key: become_exe
- section: su_become_plugin
key: executable
vars:
- name: ansible_become_exe
- name: ansible_su_exe
env:
- name: ANSIBLE_BECOME_EXE
- name: ANSIBLE_SU_EXE
become_flags:
description: Options to pass to su
default: ''
ini:
- section: privilege_escalation
key: become_flags
- section: su_become_plugin
key: flags
vars:
- name: ansible_become_flags
- name: ansible_su_flags
env:
- name: ANSIBLE_BECOME_FLAGS
- name: ANSIBLE_SU_FLAGS
become_pass:
description: Password to pass to su
required: False
vars:
- name: ansible_become_password
- name: ansible_become_pass
- name: ansible_su_pass
env:
- name: ANSIBLE_BECOME_PASS
- name: ANSIBLE_SU_PASS
ini:
- section: su_become_plugin
key: password
prompt_l10n:
description:
- List of localized strings to match for prompt detection
- If empty we'll use the built-in one
default: []
type: list
ini:
- section: su_become_plugin
key: localized_prompts
vars:
- name: ansible_su_prompt_l10n
env:
- name: ANSIBLE_SU_PROMPT_L10N
"""
import re
from ansible.module_utils._text import to_bytes
from ansible.module_utils.six.moves import shlex_quote
from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'su'
# messages for detecting prompted password issues
fail = ('Authentication failure',)
SU_PROMPT_LOCALIZATIONS = [
'Password',
'암호',
'パスワード',
'Adgangskode',
'Contraseña',
'Contrasenya',
'Hasło',
'Heslo',
'Jelszó',
'Lösenord',
'Mật khẩu',
'Mot de passe',
'Parola',
'Parool',
'Pasahitza',
'Passord',
'Passwort',
'Salasana',
'Sandi',
'Senha',
'Wachtwoord',
'ססמה',
'Лозинка',
'Парола',
'Пароль',
'गुप्तशब्द',
'शब्दकूट',
'సంకేతపదము',
'හස්පදය',
'密码',
'密碼',
'口令',
]
def check_password_prompt(self, b_output):
''' checks if the expected password prompt exists in b_output '''
prompts = self.get_option('prompt_l10n') or self.SU_PROMPT_LOCALIZATIONS
b_password_string = b"|".join((br'(\w+\'s )?' + to_bytes(p)) for p in prompts)
# Colon or unicode fullwidth colon
b_password_string = b_password_string + to_bytes(u' ?(:|:) ?')
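# (context comment, added) the appended pattern is: optional space, required
# ASCII or fullwidth colon, optional space - so a configured prompt that
# already ends in ':' would need a *second* colon to match (see issue above)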
b_su_prompt_localizations_re = re.compile(b_password_string, flags=re.IGNORECASE)
return bool(b_su_prompt_localizations_re.match(b_output))
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
# Prompt handling for ``su`` is more complicated, this
# is used to satisfy the connection plugin
self.prompt = True
if not cmd:
return cmd
exe = self.get_option('become_exe') or self.name
flags = self.get_option('become_flags') or ''
user = self.get_option('become_user') or ''
success_cmd = self._build_success_command(cmd, shell)
return "%s %s %s -c %s" % (exe, flags, user, shlex_quote(success_cmd))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,336 |
Improve performance of `ansible.template.is_template` with quick check short circuit
|
### Summary
`ansible.template.is_template` tries to loop until it determines whether the value is indeed a template, which means we could loop over extremely large strings that don't even remotely look like templates before we bail.
However, we could benefit from moving `Templar.is_possibly_template` to the global scope, and testing whether the value after `jinja_env.preprocess` is even possibly a template, before doing more computationally complex validation.
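As a rough sketch of the proposal (a module-level function mirroring the existing `Templar.is_possibly_template` method; not the final implementation):
```python
def is_possibly_template(data, jinja_env):
    """Cheap containment check: could `data` even hold a jinja2 construct?"""
    if isinstance(data, str):
        for marker in (jinja_env.block_start_string,
                       jinja_env.variable_start_string,
                       jinja_env.comment_start_string):
            if marker in data:
                return True
    return False


def is_template(data, jinja_env):
    # short circuit before the comparatively expensive lex()-based validation
    if not is_possibly_template(data, jinja_env):
        return False
    ...  # existing token-matching logic unchanged
```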
### Issue Type
Feature Idea
### Component Name
lib/ansible/template/__init__.py
### Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74336
|
https://github.com/ansible/ansible/pull/75092
|
1fc1ab89ae2ab0fb63201942a803ee8b62fe2ece
|
5dfa9bdd9f99d08967b0da8eb29b515b385bef59
| 2021-04-19T20:35:39Z |
python
| 2021-07-19T14:22:38Z |
changelogs/fragments/74336-is_template-quick-check.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,336 |
Improve performance of `ansible.template.is_template` with quick check short circuit
|
### Summary
`ansible.template.is_template` tries to loop until it determines whether the value is indeed a template, which means we could loop over extremely large strings that don't even remotely look like templates before we bail.
However, we could benefit from moving `Templar.is_possibly_template` to the global scope, and testing whether the value after `jinja_env.preprocess` is even possibly a template, before doing more computationally complex validation.
### Issue Type
Feature Idea
### Component Name
lib/ansible/template/__init__.py
### Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74336
|
https://github.com/ansible/ansible/pull/75092
|
1fc1ab89ae2ab0fb63201942a803ee8b62fe2ece
|
5dfa9bdd9f99d08967b0da8eb29b515b385bef59
| 2021-04-19T20:35:39Z |
python
| 2021-07-19T14:22:38Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from contextlib import contextmanager
from ansible.module_utils.compat.version import LooseVersion
from numbers import Number
from traceback import format_exc
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsiblePluginRemovedError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import iteritems, string_types, text_type
from ansible.module_utils.six.moves import range
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common._collections_compat import Iterator, Sequence, Mapping, MappingView, MutableMapping
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.compat.importlib import import_module
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.safe_eval import safe_eval
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# A regex for checking to see if a variable we're trying to
# expand is just a single variable name.
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
from jinja2 import __version__ as j2_version
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
USE_JINJA2_NATIVE = False
if C.DEFAULT_JINJA2_NATIVE:
try:
from jinja2.nativetypes import NativeEnvironment
from ansible.template.native_helpers import ansible_native_concat
from ansible.utils.native_jinja import NativeJinjaText
USE_JINJA2_NATIVE = True
except ImportError:
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
if C.JINJA2_NATIVE_WARNING:
display.warning(
'jinja2_native requires Jinja 2.10 and above. '
'Version detected: %s. Falling back to default.' % j2_version
)
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ gets interpreted multiple times. First by yaml.
Then by python. And finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
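# Illustrative example (comment added for clarity, not in the original source):
#   the text  {{ 'a\1b' }}       becomes  {{ 'a\\1b' }}   (backslash doubled
#   inside the jinja2 string token), while  a\1b {{ x }}  is returned unchanged
#   because the backslash sits outside the '{{ }}' expression.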
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
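# Illustrative behaviour (comment added for clarity):
#   is_template("{{ foo }}", env)  -> True   (matching variable begin/end tokens)
#   is_template("{# note #}", env) -> True   (comments count as templates here)
#   is_template("plain text", env) -> False  (no jinja2 begin token found)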
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
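# Illustrative (comment added): _count_newlines_from_end('abc\n\n') == 2,
# while _count_newlines_from_end('') == 0 via the IndexError branch.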
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a filter
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is a text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined'
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
variables. For optimization reasons this might not return an
actual copy so be careful with using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if LooseVersion(j2_version) >= LooseVersion('2.9'):
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, jinja2_native, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
self._jinja2_native = jinja2_native
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
self._ansible_plugins_loaded = False
def _load_ansible_plugins(self):
if self._ansible_plugins_loaded:
return
for plugin in self._pluginloader.all():
try:
method_map = getattr(plugin, self._method_map_name)
self._delegatee.update(method_map())
except Exception as e:
display.warning("Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin._original_path), e))
continue
if self._pluginloader.class_name == 'FilterModule':
for plugin_name, plugin in self._delegatee.items():
if self._jinja2_native and plugin_name in C.STRING_TYPE_FILTERS:
self._delegatee[plugin_name] = _wrap_native_text(plugin)
else:
self._delegatee[plugin_name] = _unroll_iterator(plugin)
self._ansible_plugins_loaded = True
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's
# aren't supposed to change during a run
def __getitem__(self, key):
self._load_ansible_plugins()
try:
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect
func = self._delegatee.get(key)
if func:
return func
# didn't find it in the pre-built Jinja env, assume it's a former builtin and follow the normal routing path
leaf_key = key
key = 'ansible.builtin.' + key
else:
leaf_key = key.split('.')[-1]
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
ts = _get_collection_metadata(acr.collection)
# TODO: implement support for collection-backed redirect (currently only builtin)
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {})
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self._dirname, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
redirect_fqcr = routing_entry.get('redirect', None)
if redirect_fqcr:
acr = AnsibleCollectionRef.from_fqcr(ref=redirect_fqcr, ref_type=self._dirname)
display.vvv('redirecting {0} {1} to {2}.{3}'.format(self._dirname, key, acr.collection, acr.resource))
key = redirect_fqcr
# TODO: handle recursive forwarding (not necessary for builtin, but definitely for further collection redirs)
func = self._collection_jinja_func_cache.get(key)
if func:
return func
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
raise KeyError()
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
# TODO: implement collection-level redirect
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
plugin_impl = self._pluginloader.get(module_name)
except Exception as e:
raise TemplateSyntaxError(to_native(e), 0)
method_map = getattr(plugin_impl, self._method_map_name)
try:
func_items = iteritems(method_map())
except Exception as e:
display.warning(
"Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin_impl._original_path), e),
)
continue
for func_name, func in func_items:
fq_name = '.'.join((parent_prefix, func_name))
# FIXME: detect/warn on intra-collection function name collisions
if self._pluginloader.class_name == 'FilterModule':
if self._jinja2_native and fq_name.startswith(('ansible.builtin.', 'ansible.legacy.')) and \
func_name in C.STRING_TYPE_FILTERS:
self._collection_jinja_func_cache[fq_name] = _wrap_native_text(func)
else:
self._collection_jinja_func_cache[fq_name] = _unroll_iterator(func)
else:
self._collection_jinja_func_cache[fq_name] = func
function_impl = self._collection_jinja_func_cache[key]
return function_impl
except AnsiblePluginRemovedError as apre:
raise TemplateSyntaxError(to_native(apre), 0)
except KeyError:
raise
except Exception as ex:
display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex)))
display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc()))
raise TemplateSyntaxError(to_native(ex), 0)
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
class AnsibleEnvironment(Environment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
NOTE: Any changes to this class must be reflected in
:class:`AnsibleNativeEnvironment` as well.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
def __init__(self, *args, **kwargs):
super(AnsibleEnvironment, self).__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader, jinja2_native=False)
self.tests = JinjaPluginIntercept(self.tests, test_loader, jinja2_native=False)
if USE_JINJA2_NATIVE:
class AnsibleNativeEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
NOTE: Any changes to this class must be reflected in
:class:`AnsibleEnvironment` as well.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
def __init__(self, *args, **kwargs):
super(AnsibleNativeEnvironment, self).__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader, jinja2_native=True)
self.tests = JinjaPluginIntercept(self.tests, test_loader, jinja2_native=True)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
# NOTE shared_loader_obj is deprecated, ansible.plugins.loader is used
# directly. Keeping the arg for now in case 3rd party code "uses" it.
self._loader = loader
self._filters = None
self._tests = None
self._available_variables = {} if variables is None else variables
self._cached_result = {}
self._basedir = loader.get_basedir() if loader else './'
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
environment_class = AnsibleNativeEnvironment if USE_JINJA2_NATIVE else AnsibleEnvironment
self.environment = environment_class(
trim_blocks=True,
undefined=AnsibleUndefined,
extensions=self._get_extensions(),
finalize=self._finalize,
loader=FileSystemLoader(self._basedir),
)
# jinja2 globals are inconsistent across versions; this normalizes them
self.environment.globals['dict'] = dict
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['finalize'] = self._finalize
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME these regular expressions should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self._no_type_regex = re.compile(r'.*?\|\s*(?:%s)(?:\([^\|]*\))?\s*\)?\s*(?:%s)' %
('|'.join(C.STRING_TYPE_FILTERS), self.environment.variable_end_string))
@property
def jinja2_native(self):
return not isinstance(self.environment, AnsibleEnvironment)
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment. The new environment is based on
given environment class and kwargs.
:kwarg environment_class: Environment class used for creating a new environment.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
# We need to use __new__ to skip __init__, mainly not to create a new
# environment there only to override it below
new_env = object.__new__(environment_class)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
mapping = {
'available_variables': new_templar,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs, lstrip_blocks was added in jinja2==2.7
pass
return new_templar
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the list of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
def set_available_variables(self, variables):
display.deprecated(
'set_available_variables is being deprecated. Use "@available_variables.setter" instead.',
version='2.13', collection_name='ansible.builtin'
)
self.available_variables = variables
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs, lstrip_blocks was added in jinja2==2.7
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if sha1_hash in self._cached_result:
return self._cached_result[sha1_hash]
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
if not self.jinja2_native:
unsafe = hasattr(result, '__UNSAFE__')
if convert_data and not self._no_type_regex.match(variable):
# if this looks like a dictionary or list, convert it to such using the safe_eval method
if (result.startswith("{") and not result.startswith(self.environment.variable_start_string)) or \
result.startswith("[") or result in ("True", "False"):
eval_results = safe_eval(result, include_exceptions=True)
if eval_results[1] is None:
result = eval_results[0]
if unsafe:
result = wrap_var(result)
# FIXME: if the safe_eval raised an error, should we do something with it?
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache and only_one:
self._cached_result[sha1_hash] = result
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
'''Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
'''
env = self.environment
if isinstance(data, string_types):
for marker in (env.block_start_string, env.variable_start_string, env.comment_start_string):
if marker in data:
return True
return False
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _finalize(self, thing):
'''
A custom finalize method for jinja2, which prevents None from being returned. This
avoids a string of ``"None"`` as ``None`` has no importance in YAML.
If using ANSIBLE_JINJA2_NATIVE we bypass this and return the actual value always
'''
if _is_rolled(thing):
# Auto unroll a generator, so that users are not required to
# explicitly use ``|list`` to unroll
# This only affects the scenario where the final result of templating
# is a generator, and not where a filter creates a generator in the middle
# of a template. See ``_unroll_iterator`` for the other case. This is probably
unnecessary
return list(thing)
if self.jinja2_native:
return thing
return thing if thing is not None else ''
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if self.jinja2_native and isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
return ran
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
try:
# allows template header overrides to change jinja2 options.
if overrides is None:
myenv = self.environment
else:
myenv = self.environment.overlay(overrides)
# Get jinja env overrides from template
if hasattr(data, 'startswith') and data.startswith(JINJA2_OVERRIDE):
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
(key, val) = pair.split(':')
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
self.cur_context = new_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(new_context)
try:
if self.jinja2_native:
res = ansible_native_concat(rf)
else:
res = j2_concat(rf)
unsafe = getattr(new_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
if self.jinja2_native and not isinstance(res, string_types):
return res
if preserve_trailing_newlines:
# The low level calls above do not preserve the newline
characters at the end of the input data, so we calculate
the difference in newlines and append them
# to the resulting output for parity
#
# jinja2 added a keep_trailing_newline option in 2.7 when
# creating an Environment. That would let us make this code
# better (remove a single newline if
# preserve_trailing_newlines is False). Once we can depend on
# that version being present, modify our code to set that when
# initializing self.environment and remove a single trailing
newline here if preserve_trailing_newlines is False.
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,279 |
module_defaults can no longer contain `ansible.legacy` modules
|
### Summary
This appears to be related to this change around action_groups: https://github.com/ansible/ansible/pull/74039
Before that, a module in a role could be set in `module_defaults`, but it doesn't work anymore. The action fails to resolve, as it's always trying to associate it with `ansible.builtin` rather than `ansible.legacy`, even when fully qualified.
Consider a role with `library/my_module.py` and a simple `tasks/main.yml`:
```yaml
- name: Try defaults
module_defaults:
my_module:
data: hello
block:
- my_module:
```
```yaml
- name: Try defaults
module_defaults:
my_module:
data: hello
block:
- ansible.legacy.my_module:
```
```yaml
- name: Try defaults
module_defaults:
ansible.legacy.my_module:
data: hello
block:
- ansible.legacy.my_module:
```
```yaml
- name: Try defaults
module_defaults:
ansible.legacy.my_module:
data: hello
block:
- my_module:
```
All permutations fail with the same error:
> ERROR! Could not resolve action ansible.builtin.my_module in module_defaults
EDIT to add:
I checked out commit 9af0d916768986f50647b11f896a7cdac1550230 (which as far as I can tell is the last commit before the linked PR was merged), and all of the above forms work except the last one, which fails with:
> fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: data"}
---
The changelog entry in that PR mentions:
> Fully qualified 'ansible.legacy' plugin names are not included implicitly in action_groups.
But I'm not sure if that's relevant, as no action groups are in play here. I couldn't find any way to make module defaults work for the module in the role.
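Purely to illustrate the behaviour I would expect (a hypothetical sketch, not Ansible's actual resolver; `find_plugin` stands in for whatever lookup the resolver performs):
```python
def resolve_defaults_entry(name, find_plugin):
    # hypothetical: short names and explicit 'ansible.legacy.' names should
    # consult the legacy search path (role library/, adjacent dirs) before
    # falling back to ansible.builtin
    if '.' not in name or name.startswith('ansible.legacy.'):
        short = name.rsplit('.', 1)[-1]
        resolved = find_plugin('ansible.legacy.' + short)
        if resolved is not None:
            return resolved
    return find_plugin(name)
```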
### Issue Type
Bug Report
### Component Name
lib/ansible/playbook/base.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0.dev0] (devel 1fc1ab89ae) last updated 2021/07/17 12:29:56 (GMT -400)
config file = None
configured module search path = ['/home/briantist/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/briantist/code/ansible/ansible.core/lib/ansible
ansible collection location = /home/briantist/.ansible/collections:/usr/share/ansible/collections
executable location = /home/briantist/code/ansible/ansible.core/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature will be removed from
ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
[DEPRECATION WARNING]: ANSIBLE_CALLBACK_WHITELIST option, normalizing names to new standard, use ANSIBLE_CALLBACKS_ENABLED instead. This feature will be removed from ansible-core in version 2.15. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
CALLBACKS_ENABLED(env: ANSIBLE_CALLBACK_WHITELIST) = ['profile_tasks']
DEFAULT_FORKS(env: ANSIBLE_FORKS) = 30
INVENTORY_ENABLED(env: ANSIBLE_INVENTORY_ENABLED) = ['host_list', 'auto', 'ini', 'yaml', 'toml']
```
### OS / Environment
Ubuntu 18.04 in WSL2
### Steps to Reproduce
Described in issue summary
### Expected Results
`module_defaults` work for modules in a role
### Actual Results
```console
ERROR! Could not resolve action ansible.builtin.my_module in module_defaults
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75279
|
https://github.com/ansible/ansible/pull/75284
|
13c28664ae0817068386b893858f4f6daa702052
|
4d78b58540dafd818a5e75ec390f9f03f5367ed9
| 2021-07-17T17:24:48Z |
python
| 2021-07-21T17:37:52Z |
changelogs/fragments/75284-fix-legacy-module_defaults.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,279 |
module_defaults can no longer contain `ansible.legacy` modules
|
### Summary
This appears to be related to this change around action_groups: https://github.com/ansible/ansible/pull/74039
Before that, a module in a role could be set in `module_defaults`, but it doesn't work anymore. The action fails to resolve, as it's always trying to associate it with `ansible.builtin` rather than `ansible.legacy`, even when fully qualified.
Consider a role with `library/my_module.py` and a simple `tasks/main.yml`:
```yaml
- name: Try defaults
module_defaults:
my_module:
data: hello
block:
- my_module:
```
```yaml
- name: Try defaults
module_defaults:
my_module:
data: hello
block:
- ansible.legacy.my_module:
```
```yaml
- name: Try defaults
module_defaults:
ansible.legacy.my_module:
data: hello
block:
- ansible.legacy.my_module:
```
```yaml
- name: Try defaults
module_defaults:
ansible.legacy.my_module:
data: hello
block:
- my_module:
```
All permutations fail with the same error:
> ERROR! Could not resolve action ansible.builtin.my_module in module_defaults
EDIT to add:
I checked out commit 9af0d916768986f50647b11f896a7cdac1550230 (which as far as I can tell is the last commit before the linked PR was merged), and all of the above forms work except the last one, which fails with:
> fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: data"}
---
The changelog entry in that PR mentions:
> Fully qualified 'ansible.legacy' plugin names are not included implicitly in action_groups.
But I'm not sure if that's relevant, as no action groups are in play here. I couldn't find any way to make module defaults work for the module in the role.
### Issue Type
Bug Report
### Component Name
lib/ansible/playbook/base.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0.dev0] (devel 1fc1ab89ae) last updated 2021/07/17 12:29:56 (GMT -400)
config file = None
configured module search path = ['/home/briantist/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/briantist/code/ansible/ansible.core/lib/ansible
ansible collection location = /home/briantist/.ansible/collections:/usr/share/ansible/collections
executable location = /home/briantist/code/ansible/ansible.core/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature will be removed from
ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
[DEPRECATION WARNING]: ANSIBLE_CALLBACK_WHITELIST option, normalizing names to new standard, use ANSIBLE_CALLBACKS_ENABLED instead. This feature will be removed from ansible-core in version 2.15. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
CALLBACKS_ENABLED(env: ANSIBLE_CALLBACK_WHITELIST) = ['profile_tasks']
DEFAULT_FORKS(env: ANSIBLE_FORKS) = 30
INVENTORY_ENABLED(env: ANSIBLE_INVENTORY_ENABLED) = ['host_list', 'auto', 'ini', 'yaml', 'toml']
```
### OS / Environment
Ubuntu 18.04 in WSL2
### Steps to Reproduce
Described in issue summary
### Expected Results
`module_defaults` work for modules in a role
### Actual Results
```console
ERROR! Could not resolve action ansible.builtin.my_module in module_defaults
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75279
|
https://github.com/ansible/ansible/pull/75284
|
13c28664ae0817068386b893858f4f6daa702052
|
4d78b58540dafd818a5e75ec390f9f03f5367ed9
| 2021-07-17T17:24:48Z |
python
| 2021-07-21T17:37:52Z |
lib/ansible/playbook/base.py
|
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import itertools
import operator
from copy import copy as shallowcopy
from functools import partial
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.module_utils.six import iteritems, string_types, with_metaclass
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.errors import AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils._text import to_text, to_native
from ansible.parsing.dataloader import DataLoader
from ansible.playbook.attribute import Attribute, FieldAttribute
from ansible.plugins.loader import module_loader, action_loader
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata, AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
from ansible.utils.vars import combine_vars, isidentifier, get_unique_id
display = Display()
def _generic_g(prop_name, self):
try:
value = self._attributes[prop_name]
except KeyError:
raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, prop_name))
if value is Sentinel:
value = self._attr_defaults[prop_name]
return value
def _generic_g_method(prop_name, self):
try:
if self._squashed:
return self._attributes[prop_name]
method = "_get_attr_%s" % prop_name
return getattr(self, method)()
except KeyError:
raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, prop_name))
def _generic_g_parent(prop_name, self):
try:
if self._squashed or self._finalized:
value = self._attributes[prop_name]
else:
try:
value = self._get_parent_attribute(prop_name)
except AttributeError:
value = self._attributes[prop_name]
except KeyError:
raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, prop_name))
if value is Sentinel:
value = self._attr_defaults[prop_name]
return value
def _generic_s(prop_name, self, value):
self._attributes[prop_name] = value
def _generic_d(prop_name, self):
del self._attributes[prop_name]
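# Illustrative only (not part of the original source): BaseMeta below wires these
# helpers up with functools.partial so each FieldAttribute becomes a property, e.g.
#   getter = partial(_generic_g, 'name')
#   dst_dict['name'] = property(getter, setter, deleter)
# which keeps all attribute storage in self._attributes / self._attr_defaults.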
def _validate_action_group_metadata(action, found_group_metadata, fq_group_name):
valid_metadata = {
'extend_group': {
'types': (list, string_types,),
'errortype': 'list',
},
}
metadata_warnings = []
validate = C.VALIDATE_ACTION_GROUP_METADATA
metadata_only = isinstance(action, dict) and 'metadata' in action and len(action) == 1
if validate and not metadata_only:
found_keys = ', '.join(sorted(list(action)))
metadata_warnings.append("The only expected key is metadata, but got keys: {keys}".format(keys=found_keys))
elif validate:
if found_group_metadata:
metadata_warnings.append("The group contains multiple metadata entries.")
if not isinstance(action['metadata'], dict):
metadata_warnings.append("The metadata is not a dictionary. Got {metadata}".format(metadata=action['metadata']))
else:
unexpected_keys = set(action['metadata'].keys()) - set(valid_metadata.keys())
if unexpected_keys:
metadata_warnings.append("The metadata contains unexpected keys: {0}".format(', '.join(unexpected_keys)))
unexpected_types = []
for field, requirement in valid_metadata.items():
if field not in action['metadata']:
continue
value = action['metadata'][field]
if not isinstance(value, requirement['types']):
unexpected_types.append("%s is %s (expected type %s)" % (field, value, requirement['errortype']))
if unexpected_types:
metadata_warnings.append("The metadata contains unexpected key types: {0}".format(', '.join(unexpected_types)))
if metadata_warnings:
metadata_warnings.insert(0, "Invalid metadata was found for action_group {0} while loading module_defaults.".format(fq_group_name))
display.warning(" ".join(metadata_warnings))
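# Illustrative only (not part of the original source): the action_group metadata
# validated above comes from a collection's meta/runtime.yml, shaped roughly like:
#   action_groups:
#     testgroup:
#       - metadata:
#           extend_group:
#             - other.collection.othergroup
#       - some_module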
class BaseMeta(type):
"""
Metaclass for the Base object, which is used to construct the class
attributes based on the FieldAttributes available.
"""
def __new__(cls, name, parents, dct):
def _create_attrs(src_dict, dst_dict):
'''
Helper method which creates the attributes based on those in the
source dictionary of attributes. This also populates the other
attributes used to keep track of these attributes and via the
getter/setter/deleter methods.
'''
keys = list(src_dict.keys())
for attr_name in keys:
value = src_dict[attr_name]
if isinstance(value, Attribute):
if attr_name.startswith('_'):
attr_name = attr_name[1:]
# here we selectively assign the getter based on a few
# things, such as whether we have a _get_attr_<name>
# method, or if the attribute is marked as not inheriting
# its value from a parent object
method = "_get_attr_%s" % attr_name
if method in src_dict or method in dst_dict:
getter = partial(_generic_g_method, attr_name)
elif ('_get_parent_attribute' in dst_dict or '_get_parent_attribute' in src_dict) and value.inherit:
getter = partial(_generic_g_parent, attr_name)
else:
getter = partial(_generic_g, attr_name)
setter = partial(_generic_s, attr_name)
deleter = partial(_generic_d, attr_name)
dst_dict[attr_name] = property(getter, setter, deleter)
dst_dict['_valid_attrs'][attr_name] = value
dst_dict['_attributes'][attr_name] = Sentinel
dst_dict['_attr_defaults'][attr_name] = value.default
if value.alias is not None:
dst_dict[value.alias] = property(getter, setter, deleter)
dst_dict['_valid_attrs'][value.alias] = value
dst_dict['_alias_attrs'][value.alias] = attr_name
def _process_parents(parents, dst_dict):
'''
Helper method which creates attributes from all parent objects
recursively on through grandparent objects
'''
for parent in parents:
if hasattr(parent, '__dict__'):
_create_attrs(parent.__dict__, dst_dict)
new_dst_dict = parent.__dict__.copy()
new_dst_dict.update(dst_dict)
_process_parents(parent.__bases__, new_dst_dict)
# create some additional class attributes
dct['_attributes'] = {}
dct['_attr_defaults'] = {}
dct['_valid_attrs'] = {}
dct['_alias_attrs'] = {}
# now create the attributes based on the FieldAttributes
# available, including from parent (and grandparent) objects
_create_attrs(dct, dct)
_process_parents(parents, dct)
return super(BaseMeta, cls).__new__(cls, name, parents, dct)
class FieldAttributeBase(with_metaclass(BaseMeta, object)):
def __init__(self):
# initialize the data loader and variable manager, which will be provided
# later when the object is actually loaded
self._loader = None
self._variable_manager = None
# other internal params
self._validated = False
self._squashed = False
self._finalized = False
# every object gets a random uuid:
self._uuid = get_unique_id()
# we create a copy of the attributes here due to the fact that
# it was initialized as a class param in the meta class, so we
# need a unique object here (all members contained within are
# unique already).
self._attributes = self.__class__._attributes.copy()
self._attr_defaults = self.__class__._attr_defaults.copy()
for key, value in self._attr_defaults.items():
if callable(value):
self._attr_defaults[key] = value()
# and init vars, avoid using defaults in field declaration as it lives across plays
self.vars = dict()
@property
def finalized(self):
return self._finalized
def dump_me(self, depth=0):
''' this is never called from production code, it is here to be used when debugging as a 'complex print' '''
if depth == 0:
display.debug("DUMPING OBJECT ------------------------------------------------------")
display.debug("%s- %s (%s, id=%s)" % (" " * depth, self.__class__.__name__, self, id(self)))
if hasattr(self, '_parent') and self._parent:
self._parent.dump_me(depth + 2)
dep_chain = self._parent.get_dep_chain()
if dep_chain:
for dep in dep_chain:
dep.dump_me(depth + 2)
if hasattr(self, '_play') and self._play:
self._play.dump_me(depth + 2)
def preprocess_data(self, ds):
''' infrequently used method to do some pre-processing of legacy terms '''
return ds
def load_data(self, ds, variable_manager=None, loader=None):
''' walk the input datastructure and assign any values '''
if ds is None:
raise AnsibleAssertionError('ds (%s) should not be None but it is.' % ds)
# cache the datastructure internally
setattr(self, '_ds', ds)
# the variable manager class is used to manage and merge variables
# down to a single dictionary for reference in templating, etc.
self._variable_manager = variable_manager
# the data loader class is used to parse data from strings and files
if loader is not None:
self._loader = loader
else:
self._loader = DataLoader()
# call the preprocess_data() function to massage the data into
# something we can more easily parse, and then call the validation
# function on it to ensure there are no incorrect key values
ds = self.preprocess_data(ds)
self._validate_attributes(ds)
# Walk all attributes in the class. We sort them based on their priority
# so that certain fields can be loaded before others, if they are dependent.
for name, attr in sorted(iteritems(self._valid_attrs), key=operator.itemgetter(1)):
# copy the value over unless a _load_field method is defined
target_name = name
if name in self._alias_attrs:
target_name = self._alias_attrs[name]
if name in ds:
method = getattr(self, '_load_%s' % name, None)
if method:
self._attributes[target_name] = method(name, ds[name])
else:
self._attributes[target_name] = ds[name]
# run early, non-critical validation
self.validate()
# return the constructed object
return self
def get_ds(self):
try:
return getattr(self, '_ds')
except AttributeError:
return None
def get_loader(self):
return self._loader
def get_variable_manager(self):
return self._variable_manager
def _post_validate_debugger(self, attr, value, templar):
value = templar.template(value)
valid_values = frozenset(('always', 'on_failed', 'on_unreachable', 'on_skipped', 'never'))
if value and isinstance(value, string_types) and value not in valid_values:
raise AnsibleParserError("'%s' is not a valid value for debugger. Must be one of %s" % (value, ', '.join(valid_values)), obj=self.get_ds())
return value
def _validate_attributes(self, ds):
'''
Ensures that there are no keys in the datastructure which do
not map to attributes for this object.
'''
valid_attrs = frozenset(self._valid_attrs.keys())
for key in ds:
if key not in valid_attrs:
raise AnsibleParserError("'%s' is not a valid attribute for a %s" % (key, self.__class__.__name__), obj=ds)
def validate(self, all_vars=None):
''' validation that is done at parse time, not load time '''
all_vars = {} if all_vars is None else all_vars
if not self._validated:
# walk all fields in the object
for (name, attribute) in iteritems(self._valid_attrs):
if name in self._alias_attrs:
name = self._alias_attrs[name]
# run validator only if present
method = getattr(self, '_validate_%s' % name, None)
if method:
method(attribute, name, getattr(self, name))
else:
# and make sure the attribute is of the type it should be
value = self._attributes[name]
if value is not None:
if attribute.isa == 'string' and isinstance(value, (list, dict)):
raise AnsibleParserError(
"The field '%s' is supposed to be a string type,"
" however the incoming data structure is a %s" % (name, type(value)), obj=self.get_ds()
)
self._validated = True
def _load_module_defaults(self, name, value):
if value is None:
return
if not isinstance(value, list):
value = [value]
validated_module_defaults = []
for defaults_dict in value:
if not isinstance(defaults_dict, dict):
raise AnsibleParserError(
"The field 'module_defaults' is supposed to be a dictionary or list of dictionaries, "
"the keys of which must be static action, module, or group names. Only the values may contain "
"templates. For example: {'ping': \"{{ ping_defaults }}\"}"
)
validated_defaults_dict = {}
for defaults_entry, defaults in defaults_dict.items():
# module_defaults do not use the 'collections' keyword, so actions and
# action_groups that are not fully qualified are part of the 'ansible.legacy'
# collection. Update those entries here, so module_defaults contains
# fully qualified entries.
if defaults_entry.startswith('group/'):
group_name = defaults_entry.split('group/')[-1]
                    # The resolved action_groups cache is saved on the current Play
if self.play is not None:
group_name, dummy = self._resolve_group(group_name)
defaults_entry = 'group/' + group_name
validated_defaults_dict[defaults_entry] = defaults
else:
action_names = []
if len(defaults_entry.split('.')) < 3:
defaults_entry = 'ansible.legacy.' + defaults_entry
action_names.append(defaults_entry)
if defaults_entry.startswith('ansible.legacy.'):
action_names.append(defaults_entry.replace('ansible.legacy.', 'ansible.builtin.'))
# Replace the module_defaults action entry with the canonical name,
# so regardless of how the action is called, the defaults will apply
for action_name in action_names:
resolved_action = self._resolve_action(action_name)
if resolved_action:
validated_defaults_dict[resolved_action] = defaults
validated_module_defaults.append(validated_defaults_dict)
return validated_module_defaults
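    # Illustrative only (not part of the original source): for an unqualified entry
    # such as {'file': {'mode': '0644'}}, both 'ansible.legacy.file' and
    # 'ansible.builtin.file' are tried above, and whichever resolves is stored
    # under its canonical FQCN so the defaults apply however the task names it.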
@property
def play(self):
if hasattr(self, '_play'):
play = self._play
elif hasattr(self, '_parent') and hasattr(self._parent, '_play'):
play = self._parent._play
else:
play = self
if play.__class__.__name__ != 'Play':
# Should never happen, but handle gracefully by returning None, just in case
return None
return play
def _resolve_group(self, fq_group_name, mandatory=True):
if not AnsibleCollectionRef.is_valid_fqcr(fq_group_name):
collection_name = 'ansible.builtin'
fq_group_name = collection_name + '.' + fq_group_name
else:
collection_name = '.'.join(fq_group_name.split('.')[0:2])
# Check if the group has already been resolved and cached
if fq_group_name in self.play._group_actions:
return fq_group_name, self.play._group_actions[fq_group_name]
try:
action_groups = _get_collection_metadata(collection_name).get('action_groups', {})
except ValueError:
if not mandatory:
display.vvvvv("Error loading module_defaults: could not resolve the module_defaults group %s" % fq_group_name)
return fq_group_name, []
raise AnsibleParserError("Error loading module_defaults: could not resolve the module_defaults group %s" % fq_group_name)
# The collection may or may not use the fully qualified name
# Don't fail if the group doesn't exist in the collection
resource_name = fq_group_name.split(collection_name + '.')[-1]
action_group = action_groups.get(
fq_group_name,
action_groups.get(resource_name)
)
if action_group is None:
if not mandatory:
display.vvvvv("Error loading module_defaults: could not resolve the module_defaults group %s" % fq_group_name)
return fq_group_name, []
raise AnsibleParserError("Error loading module_defaults: could not resolve the module_defaults group %s" % fq_group_name)
resolved_actions = []
include_groups = []
found_group_metadata = False
for action in action_group:
# Everything should be a string except the metadata entry
if not isinstance(action, string_types):
_validate_action_group_metadata(action, found_group_metadata, fq_group_name)
if isinstance(action['metadata'], dict):
found_group_metadata = True
include_groups = action['metadata'].get('extend_group', [])
if isinstance(include_groups, string_types):
include_groups = [include_groups]
if not isinstance(include_groups, list):
                        # Bad entries may have triggered a warning above; set this back to the acceptable type to prevent tracebacks.
include_groups = []
continue
# The collection may or may not use the fully qualified name.
# If not, it's part of the current collection.
if not AnsibleCollectionRef.is_valid_fqcr(action):
action = collection_name + '.' + action
resolved_action = self._resolve_action(action, mandatory=False)
if resolved_action:
resolved_actions.append(resolved_action)
for action in resolved_actions:
if action not in self.play._action_groups:
self.play._action_groups[action] = []
self.play._action_groups[action].append(fq_group_name)
self.play._group_actions[fq_group_name] = resolved_actions
# Resolve extended groups last, after caching the group in case they recursively refer to each other
for include_group in include_groups:
if not AnsibleCollectionRef.is_valid_fqcr(include_group):
include_group_collection = collection_name
include_group = collection_name + '.' + include_group
else:
include_group_collection = '.'.join(include_group.split('.')[0:2])
dummy, group_actions = self._resolve_group(include_group, mandatory=False)
for action in group_actions:
if action not in self.play._action_groups:
self.play._action_groups[action] = []
self.play._action_groups[action].append(fq_group_name)
self.play._group_actions[fq_group_name].extend(group_actions)
resolved_actions.extend(group_actions)
return fq_group_name, resolved_actions
def _resolve_action(self, action_name, mandatory=True):
context = action_loader.find_plugin_with_context(action_name)
if not context.resolved:
context = module_loader.find_plugin_with_context(action_name)
if context.resolved:
return context.resolved_fqcn
if mandatory:
raise AnsibleParserError("Could not resolve action %s in module_defaults" % action_name)
display.vvvvv("Could not resolve action %s in module_defaults" % action_name)
def squash(self):
'''
Evaluates all attributes and sets them to the evaluated version,
so that all future accesses of attributes do not need to evaluate
parent attributes.
'''
if not self._squashed:
for name in self._valid_attrs.keys():
self._attributes[name] = getattr(self, name)
self._squashed = True
def copy(self):
'''
Create a copy of this object and return it.
'''
try:
new_me = self.__class__()
except RuntimeError as e:
raise AnsibleError("Exceeded maximum object depth. This may have been caused by excessive role recursion", orig_exc=e)
for name in self._valid_attrs.keys():
if name in self._alias_attrs:
continue
new_me._attributes[name] = shallowcopy(self._attributes[name])
new_me._attr_defaults[name] = shallowcopy(self._attr_defaults[name])
new_me._loader = self._loader
new_me._variable_manager = self._variable_manager
new_me._validated = self._validated
new_me._finalized = self._finalized
new_me._uuid = self._uuid
# if the ds value was set on the object, copy it to the new copy too
if hasattr(self, '_ds'):
new_me._ds = self._ds
return new_me
def get_validated_value(self, name, attribute, value, templar):
if attribute.isa == 'string':
value = to_text(value)
elif attribute.isa == 'int':
value = int(value)
elif attribute.isa == 'float':
value = float(value)
elif attribute.isa == 'bool':
value = boolean(value, strict=True)
elif attribute.isa == 'percent':
# special value, which may be an integer or float
# with an optional '%' at the end
if isinstance(value, string_types) and '%' in value:
value = value.replace('%', '')
value = float(value)
elif attribute.isa == 'list':
if value is None:
value = []
elif not isinstance(value, list):
value = [value]
if attribute.listof is not None:
for item in value:
if not isinstance(item, attribute.listof):
raise AnsibleParserError("the field '%s' should be a list of %s, "
"but the item '%s' is a %s" % (name, attribute.listof, item, type(item)), obj=self.get_ds())
elif attribute.required and attribute.listof == string_types:
if item is None or item.strip() == "":
raise AnsibleParserError("the field '%s' is required, and cannot have empty values" % (name,), obj=self.get_ds())
elif attribute.isa == 'set':
if value is None:
value = set()
elif not isinstance(value, (list, set)):
if isinstance(value, string_types):
value = value.split(',')
else:
# Making a list like this handles strings of
# text and bytes properly
value = [value]
if not isinstance(value, set):
value = set(value)
elif attribute.isa == 'dict':
if value is None:
value = dict()
elif not isinstance(value, dict):
raise TypeError("%s is not a dictionary" % value)
elif attribute.isa == 'class':
if not isinstance(value, attribute.class_type):
raise TypeError("%s is not a valid %s (got a %s instead)" % (name, attribute.class_type, type(value)))
value.post_validate(templar=templar)
return value
def post_validate(self, templar):
'''
we can't tell that everything is of the right type until we have
all the variables. Run basic types (from isa) as well as
any _post_validate_<foo> functions.
'''
# save the omit value for later checking
omit_value = templar.available_variables.get('omit')
for (name, attribute) in iteritems(self._valid_attrs):
if attribute.static:
value = getattr(self, name)
# we don't template 'vars' but allow template as values for later use
if name not in ('vars',) and templar.is_template(value):
display.warning('"%s" is not templatable, but we found: %s, '
'it will not be templated and will be used "as is".' % (name, value))
continue
if getattr(self, name) is None:
if not attribute.required:
continue
else:
raise AnsibleParserError("the field '%s' is required but was not set" % name)
elif not attribute.always_post_validate and self.__class__.__name__ not in ('Task', 'Handler', 'PlayContext'):
# Intermediate objects like Play() won't have their fields validated by
# default, as their values are often inherited by other objects and validated
# later, so we don't want them to fail out early
continue
try:
# Run the post-validator if present. These methods are responsible for
# using the given templar to template the values, if required.
method = getattr(self, '_post_validate_%s' % name, None)
if method:
value = method(attribute, getattr(self, name), templar)
elif attribute.isa == 'class':
value = getattr(self, name)
else:
# if the attribute contains a variable, template it now
value = templar.template(getattr(self, name))
# if this evaluated to the omit value, set the value back to
# the default specified in the FieldAttribute and move on
if omit_value is not None and value == omit_value:
if callable(attribute.default):
setattr(self, name, attribute.default())
else:
setattr(self, name, attribute.default)
continue
# and make sure the attribute is of the type it should be
if value is not None:
value = self.get_validated_value(name, attribute, value, templar)
# and assign the massaged value back to the attribute field
setattr(self, name, value)
except (TypeError, ValueError) as e:
value = getattr(self, name)
raise AnsibleParserError("the field '%s' has an invalid value (%s), and could not be converted to an %s."
"The error was: %s" % (name, value, attribute.isa, e), obj=self.get_ds(), orig_exc=e)
except (AnsibleUndefinedVariable, UndefinedError) as e:
if templar._fail_on_undefined_errors and name != 'name':
if name == 'args':
msg = "The task includes an option with an undefined variable. The error was: %s" % (to_native(e))
else:
msg = "The field '%s' has an invalid value, which includes an undefined variable. The error was: %s" % (name, to_native(e))
raise AnsibleParserError(msg, obj=self.get_ds(), orig_exc=e)
self._finalized = True
def _load_vars(self, attr, ds):
'''
Vars in a play can be specified either as a dictionary directly, or
as a list of dictionaries. If the later, this method will turn the
list into a single dictionary.
'''
def _validate_variable_keys(ds):
for key in ds:
if not isidentifier(key):
raise TypeError("'%s' is not a valid variable name" % key)
try:
if isinstance(ds, dict):
_validate_variable_keys(ds)
return combine_vars(self.vars, ds)
elif isinstance(ds, list):
all_vars = self.vars
for item in ds:
if not isinstance(item, dict):
raise ValueError
_validate_variable_keys(item)
all_vars = combine_vars(all_vars, item)
return all_vars
elif ds is None:
return {}
else:
raise ValueError
except ValueError as e:
raise AnsibleParserError("Vars in a %s must be specified as a dictionary, or a list of dictionaries" % self.__class__.__name__,
obj=ds, orig_exc=e)
except TypeError as e:
raise AnsibleParserError("Invalid variable name in vars specified for %s: %s" % (self.__class__.__name__, e), obj=ds, orig_exc=e)
def _extend_value(self, value, new_value, prepend=False):
'''
Will extend the value given with new_value (and will turn both
into lists if they are not so already). The values are run through
a set to remove duplicate values.
'''
if not isinstance(value, list):
value = [value]
if not isinstance(new_value, list):
new_value = [new_value]
# Due to where _extend_value may run for some attributes
# it is possible to end up with Sentinel in the list of values
# ensure we strip them
value = [v for v in value if v is not Sentinel]
new_value = [v for v in new_value if v is not Sentinel]
if prepend:
combined = new_value + value
else:
combined = value + new_value
return [i for i, _ in itertools.groupby(combined) if i is not None]
def dump_attrs(self):
'''
Dumps all attributes to a dictionary
'''
attrs = {}
for (name, attribute) in iteritems(self._valid_attrs):
attr = getattr(self, name)
if attribute.isa == 'class' and hasattr(attr, 'serialize'):
attrs[name] = attr.serialize()
else:
attrs[name] = attr
return attrs
def from_attrs(self, attrs):
'''
Loads attributes from a dictionary
'''
for (attr, value) in iteritems(attrs):
if attr in self._valid_attrs:
attribute = self._valid_attrs[attr]
if attribute.isa == 'class' and isinstance(value, dict):
obj = attribute.class_type()
obj.deserialize(value)
setattr(self, attr, obj)
else:
setattr(self, attr, value)
# from_attrs is only used to create a finalized task
# from attrs from the Worker/TaskExecutor
# Those attrs are finalized and squashed in the TE
# and controller side use needs to reflect that
self._finalized = True
self._squashed = True
def serialize(self):
'''
Serializes the object derived from the base object into
a dictionary of values. This only serializes the field
attributes for the object, so this may need to be overridden
for any classes which wish to add additional items not stored
as field attributes.
'''
repr = self.dump_attrs()
# serialize the uuid field
repr['uuid'] = self._uuid
repr['finalized'] = self._finalized
repr['squashed'] = self._squashed
return repr
def deserialize(self, data):
'''
Given a dictionary of values, load up the field attributes for
this object. As with serialize(), if there are any non-field
attribute data members, this method will need to be overridden
and extended.
'''
if not isinstance(data, dict):
raise AnsibleAssertionError('data (%s) should be a dict but is a %s' % (data, type(data)))
for (name, attribute) in iteritems(self._valid_attrs):
if name in data:
setattr(self, name, data[name])
else:
if callable(attribute.default):
setattr(self, name, attribute.default())
else:
setattr(self, name, attribute.default)
# restore the UUID field
setattr(self, '_uuid', data.get('uuid'))
self._finalized = data.get('finalized', False)
self._squashed = data.get('squashed', False)
class Base(FieldAttributeBase):
_name = FieldAttribute(isa='string', default='', always_post_validate=True, inherit=False)
# connection/transport
_connection = FieldAttribute(isa='string', default=context.cliargs_deferred_get('connection'))
_port = FieldAttribute(isa='int')
_remote_user = FieldAttribute(isa='string', default=context.cliargs_deferred_get('remote_user'))
# variables
_vars = FieldAttribute(isa='dict', priority=100, inherit=False, static=True)
# module default params
_module_defaults = FieldAttribute(isa='list', extend=True, prepend=True)
# flags and misc. settings
_environment = FieldAttribute(isa='list', extend=True, prepend=True)
_no_log = FieldAttribute(isa='bool')
_run_once = FieldAttribute(isa='bool')
_ignore_errors = FieldAttribute(isa='bool')
_ignore_unreachable = FieldAttribute(isa='bool')
_check_mode = FieldAttribute(isa='bool', default=context.cliargs_deferred_get('check'))
_diff = FieldAttribute(isa='bool', default=context.cliargs_deferred_get('diff'))
_any_errors_fatal = FieldAttribute(isa='bool', default=C.ANY_ERRORS_FATAL)
_throttle = FieldAttribute(isa='int', default=0)
_timeout = FieldAttribute(isa='int', default=C.TASK_TIMEOUT)
# explicitly invoke a debugger on tasks
_debugger = FieldAttribute(isa='string')
# Privilege escalation
_become = FieldAttribute(isa='bool', default=context.cliargs_deferred_get('become'))
_become_method = FieldAttribute(isa='string', default=context.cliargs_deferred_get('become_method'))
_become_user = FieldAttribute(isa='string', default=context.cliargs_deferred_get('become_user'))
_become_flags = FieldAttribute(isa='string', default=context.cliargs_deferred_get('become_flags'))
_become_exe = FieldAttribute(isa='string', default=context.cliargs_deferred_get('become_exe'))
# used to hold sudo/su stuff
DEPRECATED_ATTRIBUTES = []
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,279 |
module_defaults can no longer contain `ansible.legacy` modules
|
### Summary
This appears to be related to this change around action_groups: https://github.com/ansible/ansible/pull/74039
Before that, a module in a role could be set in `module_defaults`, but it doesn't work anymore. The action fails to resolve, as it's always trying to associate it with `ansible.builtin` rather than `ansible.legacy`, even when fully qualified.
Consider a role with `library/my_module.py` and a simple `tasks/main.yml`:
```yaml
- name: Try defaults
module_defaults:
my_module:
data: hello
block:
- my_module:
```
```yaml
- name: Try defaults
module_defaults:
my_module:
data: hello
block:
- ansible.legacy.my_module:
```
```yaml
- name: Try defaults
module_defaults:
ansible.legacy.my_module:
data: hello
block:
- ansible.legacy.my_module:
```
```yaml
- name: Try defaults
module_defaults:
ansible.legacy.my_module:
data: hello
block:
- my_module:
```
All permutations fail with the same error:
> ERROR! Could not resolve action ansible.builtin.my_module in module_defaults
EDIT to add:
I checked out commit 9af0d916768986f50647b11f896a7cdac1550230 (which as far as I can tell is the last commit before the linked PR was merged), and all of the above forms work except the last one, which fails with:
> fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: data"}
---
The changelog entry in that PR mentions:
> Fully qualified 'ansible.legacy' plugin names are not included implicitly in action_groups.
But I'm not sure if that's relevant, as no action groups are in play here. I couldn't find any way to make module defaults work for the module in the role.
### Issue Type
Bug Report
### Component Name
lib/ansible/playbook/base.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0.dev0] (devel 1fc1ab89ae) last updated 2021/07/17 12:29:56 (GMT -400)
config file = None
configured module search path = ['/home/briantist/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/briantist/code/ansible/ansible.core/lib/ansible
ansible collection location = /home/briantist/.ansible/collections:/usr/share/ansible/collections
executable location = /home/briantist/code/ansible/ansible.core/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature will be removed from
ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
[DEPRECATION WARNING]: ANSIBLE_CALLBACK_WHITELIST option, normalizing names to new standard, use ANSIBLE_CALLBACKS_ENABLED instead. This feature will be removed from ansible-core in version 2.15. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
CALLBACKS_ENABLED(env: ANSIBLE_CALLBACK_WHITELIST) = ['profile_tasks']
DEFAULT_FORKS(env: ANSIBLE_FORKS) = 30
INVENTORY_ENABLED(env: ANSIBLE_INVENTORY_ENABLED) = ['host_list', 'auto', 'ini', 'yaml', 'toml']
```
### OS / Environment
Ubuntu 18.04 in WSL2
### Steps to Reproduce
Described in issue summary
### Expected Results
`module_defaults` work for modules in a role
### Actual Results
```console
ERROR! Could not resolve action ansible.builtin.my_module in module_defaults
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75279
|
https://github.com/ansible/ansible/pull/75284
|
13c28664ae0817068386b893858f4f6daa702052
|
4d78b58540dafd818a5e75ec390f9f03f5367ed9
| 2021-07-17T17:24:48Z |
python
| 2021-07-21T17:37:52Z |
test/integration/targets/module_defaults/action_plugins/debug.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,279 |
module_defaults can no longer contain `ansible.legacy` modules
|
### Summary
This appears to be related to this change around action_groups: https://github.com/ansible/ansible/pull/74039
Before that, a module in a role could be set in `module_defaults`, but it doesn't work anymore. The action fails to resolve, as it's always trying to associate it with `ansible.builtin` rather than `ansible.legacy`, even when fully qualified.
Consider a role with `library/my_module.py` and a simple `tasks/main.yml`:
```yaml
- name: Try defaults
module_defaults:
my_module:
data: hello
block:
- my_module:
```
```yaml
- name: Try defaults
module_defaults:
my_module:
data: hello
block:
- ansible.legacy.my_module:
```
```yaml
- name: Try defaults
module_defaults:
ansible.legacy.my_module:
data: hello
block:
- ansible.legacy.my_module:
```
```yaml
- name: Try defaults
module_defaults:
ansible.legacy.my_module:
data: hello
block:
- my_module:
```
All permutations fail with the same error:
> ERROR! Could not resolve action ansible.builtin.my_module in module_defaults
EDIT to add:
I checked out commit 9af0d916768986f50647b11f896a7cdac1550230 (which as far as I can tell is the last commit before the linked PR was merged), and all of the above forms work except the last one, which fails with:
> fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: data"}
---
The changelog entry in that PR mentions:
> Fully qualified 'ansible.legacy' plugin names are not included implicitly in action_groups.
But I'm not sure if that's relevant, as no action groups are in play here. I couldn't find any way to make module defaults work for the module in the role.
### Issue Type
Bug Report
### Component Name
lib/ansible/playbook/base.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0.dev0] (devel 1fc1ab89ae) last updated 2021/07/17 12:29:56 (GMT -400)
config file = None
configured module search path = ['/home/briantist/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/briantist/code/ansible/ansible.core/lib/ansible
ansible collection location = /home/briantist/.ansible/collections:/usr/share/ansible/collections
executable location = /home/briantist/code/ansible/ansible.core/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature will be removed from
ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
[DEPRECATION WARNING]: ANSIBLE_CALLBACK_WHITELIST option, normalizing names to new standard, use ANSIBLE_CALLBACKS_ENABLED instead. This feature will be removed from ansible-core in version 2.15. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
CALLBACKS_ENABLED(env: ANSIBLE_CALLBACK_WHITELIST) = ['profile_tasks']
DEFAULT_FORKS(env: ANSIBLE_FORKS) = 30
INVENTORY_ENABLED(env: ANSIBLE_INVENTORY_ENABLED) = ['host_list', 'auto', 'ini', 'yaml', 'toml']
```
### OS / Environment
Ubuntu 18.04 in WSL2
### Steps to Reproduce
Described in issue summary
### Expected Results
`module_defaults` work for modules in a role
### Actual Results
```console
ERROR! Could not resolve action ansible.builtin.my_module in module_defaults
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75279
|
https://github.com/ansible/ansible/pull/75284
|
13c28664ae0817068386b893858f4f6daa702052
|
4d78b58540dafd818a5e75ec390f9f03f5367ed9
| 2021-07-17T17:24:48Z |
python
| 2021-07-21T17:37:52Z |
test/integration/targets/module_defaults/test_defaults.yml
|
- hosts: localhost
gather_facts: no
collections:
- testns.testcoll
- testns.othercoll
module_defaults:
testns.testcoll.echoaction:
explicit_module_default: from playbook
testns.testcoll.echo1:
explicit_module_default: from playbook
group/testgroup:
group_module_default: from playbook
tasks:
- testns.testcoll.echoaction:
task_arg: from task
register: echoaction_fq
- echoaction:
task_arg: from task
register: echoaction_unq
- testns.testcoll.echo1:
task_arg: from task
register: echo1_fq
- echo1:
task_arg: from task
register: echo1_unq
- testns.testcoll.echo2:
task_arg: from task
register: echo2_fq
- echo2:
task_arg: from task
register: echo2_unq
- testns.othercoll.other_echoaction:
task_arg: from task
register: other_echoaction_fq
- other_echoaction:
task_arg: from task
register: other_echoaction_unq
- testns.othercoll.other_echo1:
task_arg: from task
register: other_echo1_fq
- other_echo1:
task_arg: from task
register: other_echo1_unq
- debug: var=echo1_fq
- assert:
that:
- "echoaction_fq.args_in == {'task_arg': 'from task', 'explicit_module_default': 'from playbook', 'group_module_default': 'from playbook' }"
- "echoaction_unq.args_in == {'task_arg': 'from task', 'explicit_module_default': 'from playbook', 'group_module_default': 'from playbook' }"
- "echo1_fq.args_in == {'task_arg': 'from task', 'explicit_module_default': 'from playbook', 'group_module_default': 'from playbook' }"
- "echo1_unq.args_in == {'task_arg': 'from task', 'explicit_module_default': 'from playbook', 'group_module_default': 'from playbook' }"
- "echo2_fq.args_in == {'task_arg': 'from task', 'group_module_default': 'from playbook' }"
- "echo2_unq.args_in == {'task_arg': 'from task', 'group_module_default': 'from playbook' }"
- "other_echoaction_fq.args_in == {'task_arg': 'from task', 'group_module_default': 'from playbook' }"
- "other_echoaction_unq.args_in == {'task_arg': 'from task', 'group_module_default': 'from playbook' }"
- "other_echo1_fq.args_in == {'task_arg': 'from task', 'group_module_default': 'from playbook' }"
- "other_echo1_unq.args_in == {'task_arg': 'from task', 'group_module_default': 'from playbook' }"
- include_tasks: tasks/main.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,275 |
variable_start_string in template affects how jinja is parsing variables outside of template
|
### Summary
I'm rendering template with special header to change the variable/block start/end string:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
```
When rendering this template, some host variables aren't rendered correctly, because the escape sequence override also applies to how these variables are parsed.
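The override header behaves like a per-template set of Jinja2 environment options. A minimal sketch of that mechanism in plain Jinja2 (not Ansible's actual code path; the header parsing here is simplified):
```python
from jinja2 import Environment

header = "#jinja2:variable_start_string:'<<' , variable_end_string:'>>'"
overrides = {}
for pair in header[len('#jinja2:'):].split(','):
    key, val = pair.split(':', 1)
    overrides[key.strip()] = val.strip().strip("'")

env = Environment(**overrides)
print(env.from_string("Here should be var_a: << var_a >>").render(var_a='value'))
# -> Here should be var_a: value
```
The problem reported below is that these overrides also end up applied when the values of the referenced variables are themselves templated.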
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/toolset/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/toolset/bin/ansible
python version = 3.8.10 (default, May 12 2021, 15:56:47) [GCC 8.3.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
```
### OS / Environment
For this I'm using the `quay.io/ansible/toolset` docker image (running in CI), but I was able to reproduce it on Ubuntu the same way.
### Steps to Reproduce
```yaml
#playbook.yml
---
- name: Reproduce issue
hosts: localhost
vars:
var_a: "value"
var_b: "{{ var_a }}"
var_c: "<< var_a >>"
gather_facts: false
tasks:
- set_fact:
var_d: "{{ var_a }}"
- template:
dest: ./rendered.txt
src: ./template.j2
```
Template:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
Here should be var_a: << var_a >>
Here should be var_b: << var_b >>
Here should be var_c: << var_c >>
Here should be var_d: << var_d >>
```
### Expected Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: value
Here should be var_c: << var_a >>
Here should be var_d: value
```
### Actual Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: {{ var_a }}
Here should be var_c: value
Here should be var_d: value
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75275
|
https://github.com/ansible/ansible/pull/75306
|
e5a2fe4c419740e9a709b07b064063f54277b983
|
767b2f07b00be12b9366655095cf24120d35092e
| 2021-07-16T14:35:07Z |
python
| 2021-07-26T14:38:41Z |
changelogs/fragments/75275-ensure-jinja2-header-overrides-used.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,275 |
variable_start_string in template affects how jinja is parsing variables outside of template
|
### Summary
I'm rendering template with special header to change the variable/block start/end string:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
```
When rendering this template, some host variables aren't rendered correctly, because the escape sequence override also applies to how these variables are parsed.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/toolset/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/toolset/bin/ansible
python version = 3.8.10 (default, May 12 2021, 15:56:47) [GCC 8.3.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
```
### OS / Environment
For this I'm using the `quay.io/ansible/toolset` docker image (running in CI), but I was able to reproduce it on Ubuntu the same way.
### Steps to Reproduce
```yaml
#playbook.yml
---
- name: Reproduce issue
hosts: localhost
vars:
var_a: "value"
var_b: "{{ var_a }}"
var_c: "<< var_a >>"
gather_facts: false
tasks:
- set_fact:
var_d: "{{ var_a }}"
- template:
dest: ./rendered.txt
src: ./template.j2
```
Template:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
Here should be var_a: << var_a >>
Here should be var_b: << var_b >>
Here should be var_c: << var_c >>
Here should be var_d: << var_d >>
```
### Expected Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: value
Here should be var_c: << var_a >>
Here should be var_d: value
```
### Actual Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: {{ var_a }}
Here should be var_c: value
Here should be var_d: value
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75275
|
https://github.com/ansible/ansible/pull/75306
|
e5a2fe4c419740e9a709b07b064063f54277b983
|
767b2f07b00be12b9366655095cf24120d35092e
| 2021-07-16T14:35:07Z |
python
| 2021-07-26T14:38:41Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from contextlib import contextmanager
from ansible.module_utils.compat.version import LooseVersion
from numbers import Number
from traceback import format_exc
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsiblePluginRemovedError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import iteritems, string_types, text_type
from ansible.module_utils.six.moves import range
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common._collections_compat import Iterator, Sequence, Mapping, MappingView, MutableMapping
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.compat.importlib import import_module
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.safe_eval import safe_eval
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# A regex for checking to see if a variable we're trying to
# expand is just a single variable name.
SINGLE_VAR = re.compile(r"^{{\s*(\w*)\s*}}$")
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
from jinja2 import __version__ as j2_version
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
USE_JINJA2_NATIVE = False
if C.DEFAULT_JINJA2_NATIVE:
try:
from jinja2.nativetypes import NativeEnvironment
from ansible.template.native_helpers import ansible_native_concat
from ansible.utils.native_jinja import NativeJinjaText
USE_JINJA2_NATIVE = True
except ImportError:
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
if C.JINJA2_NATIVE_WARNING:
display.warning(
'jinja2_native requires Jinja 2.10 and above. '
'Version detected: %s. Falling back to default.' % j2_version
)
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
    The string inside of the {{ gets interpreted multiple times. First by yaml.
    Then by python. And finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
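# Illustrative only (not part of the original source): with default delimiters,
#   _escape_backslashes(r"{{ '\1' }}", env)
# doubles the backslash inside the expression, yielding "{{ '\\1' }}", while any
# backslash outside "{{ }}" is left untouched.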
def is_possibly_template(data, jinja_env):
"""Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
"""
if isinstance(data, string_types):
for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string):
if marker in data:
return True
return False
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# Quick check to see if this is remotely like a template before doing
# more expensive investigation.
if not is_possibly_template(d2, jinja_env):
return False
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a filter
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is a text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined'
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
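# Illustrative only (not part of the original source): chained access on an
# undefined value stays an AnsibleUndefined instead of raising immediately, e.g.
#   AnsibleUndefined(name='missing').foo['bar']   # still AnsibleUndefined
# which preserves the first failure's context for the eventual error message.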
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
        variables. For optimization reasons this might not return an
actual copy so be careful with using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if LooseVersion(j2_version) >= LooseVersion('2.9'):
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, jinja2_native, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
self._jinja2_native = jinja2_native
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
self._ansible_plugins_loaded = False
def _load_ansible_plugins(self):
if self._ansible_plugins_loaded:
return
for plugin in self._pluginloader.all():
try:
method_map = getattr(plugin, self._method_map_name)
self._delegatee.update(method_map())
except Exception as e:
display.warning("Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin._original_path), e))
continue
if self._pluginloader.class_name == 'FilterModule':
for plugin_name, plugin in self._delegatee.items():
if self._jinja2_native and plugin_name in C.STRING_TYPE_FILTERS:
self._delegatee[plugin_name] = _wrap_native_text(plugin)
else:
self._delegatee[plugin_name] = _unroll_iterator(plugin)
self._ansible_plugins_loaded = True
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impls
# aren't supposed to change during a run
def __getitem__(self, key):
self._load_ansible_plugins()
try:
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect
func = self._delegatee.get(key)
if func:
return func
# didn't find it in the pre-built Jinja env, assume it's a former builtin and follow the normal routing path
leaf_key = key
key = 'ansible.builtin.' + key
else:
leaf_key = key.split('.')[-1]
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
ts = _get_collection_metadata(acr.collection)
# TODO: implement support for collection-backed redirect (currently only builtin)
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {})
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self._dirname, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
redirect_fqcr = routing_entry.get('redirect', None)
if redirect_fqcr:
acr = AnsibleCollectionRef.from_fqcr(ref=redirect_fqcr, ref_type=self._dirname)
display.vvv('redirecting {0} {1} to {2}.{3}'.format(self._dirname, key, acr.collection, acr.resource))
key = redirect_fqcr
# TODO: handle recursive forwarding (not necessary for builtin, but definitely for further collection redirs)
func = self._collection_jinja_func_cache.get(key)
if func:
return func
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
raise KeyError()
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
# TODO: implement collection-level redirect
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
plugin_impl = self._pluginloader.get(module_name)
except Exception as e:
raise TemplateSyntaxError(to_native(e), 0)
method_map = getattr(plugin_impl, self._method_map_name)
try:
func_items = iteritems(method_map())
except Exception as e:
display.warning(
"Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin_impl._original_path), e),
)
continue
for func_name, func in func_items:
fq_name = '.'.join((parent_prefix, func_name))
# FIXME: detect/warn on intra-collection function name collisions
if self._pluginloader.class_name == 'FilterModule':
if self._jinja2_native and fq_name.startswith(('ansible.builtin.', 'ansible.legacy.')) and \
func_name in C.STRING_TYPE_FILTERS:
self._collection_jinja_func_cache[fq_name] = _wrap_native_text(func)
else:
self._collection_jinja_func_cache[fq_name] = _unroll_iterator(func)
else:
self._collection_jinja_func_cache[fq_name] = func
function_impl = self._collection_jinja_func_cache[key]
return function_impl
except AnsiblePluginRemovedError as apre:
raise TemplateSyntaxError(to_native(apre), 0)
except KeyError:
raise
except Exception as ex:
display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex)))
display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc()))
raise TemplateSyntaxError(to_native(ex), 0)
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
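# Illustrative lookup flow: ``{{ 'x' | b64encode }}`` is found in the
# delegatee dict (builtins are pre-loaded above); otherwise it is retried as
# ``ansible.builtin.b64encode``. A fully-qualified name such as
# ``{{ data | ns.coll.myfilter }}`` (hypothetical collection) goes through
# AnsibleCollectionRef parsing, deprecation/tombstone/redirect routing and
# the per-collection cache before the plugin is imported.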
class AnsibleEnvironment(Environment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
NOTE: Any changes to this class must be reflected in
:class:`AnsibleNativeEnvironment` as well.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
def __init__(self, *args, **kwargs):
super(AnsibleEnvironment, self).__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader, jinja2_native=False)
self.tests = JinjaPluginIntercept(self.tests, test_loader, jinja2_native=False)
if USE_JINJA2_NATIVE:
class AnsibleNativeEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
NOTE: Any changes to this class must be reflected in
:class:`AnsibleEnvironment` as well.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
def __init__(self, *args, **kwargs):
super(AnsibleNativeEnvironment, self).__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader, jinja2_native=True)
self.tests = JinjaPluginIntercept(self.tests, test_loader, jinja2_native=True)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
# NOTE shared_loader_obj is deprecated, ansible.plugins.loader is used
# directly. Keeping the arg for now in case 3rd party code "uses" it.
self._loader = loader
self._filters = None
self._tests = None
self._available_variables = {} if variables is None else variables
self._cached_result = {}
self._basedir = loader.get_basedir() if loader else './'
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
environment_class = AnsibleNativeEnvironment if USE_JINJA2_NATIVE else AnsibleEnvironment
self.environment = environment_class(
trim_blocks=True,
undefined=AnsibleUndefined,
extensions=self._get_extensions(),
finalize=self._finalize,
loader=FileSystemLoader(self._basedir),
)
# the jinja2 'dict' global is inconsistent across versions; this normalizes it
self.environment.globals['dict'] = dict
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['finalize'] = self._finalize
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME these regular expressions should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self._no_type_regex = re.compile(r'.*?\|\s*(?:%s)(?:\([^\|]*\))?\s*\)?\s*(?:%s)' %
('|'.join(C.STRING_TYPE_FILTERS), self.environment.variable_end_string))
@property
def jinja2_native(self):
return not isinstance(self.environment, AnsibleEnvironment)
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment. The new environment is based on
given environment class and kwargs.
:kwarg environment_class: Environment class used for creating a new environment.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
# We need to use __new__ to skip __init__, mainly not to create a new
# environment there only to override it below
new_env = object.__new__(environment_class)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
mapping = {
'available_variables': new_templar,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs, lstrip_blocks was added in jinja2==2.7
pass
return new_templar
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split the extensions into an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
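# Illustrative ansible.cfg entry consumed here:
#   [defaults]
#   jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n
# spaces are stripped and the value is split on commas.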
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the mapping of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the
variables are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
def set_available_variables(self, variables):
display.deprecated(
'set_available_variables is being deprecated. Use "@available_variables.setter" instead.',
version='2.13', collection_name='ansible.builtin'
)
self.available_variables = variables
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default; to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs, lstrip_blocks was added in jinja2==2.7
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
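# Illustrative usage:
#   with templar.set_temporary_context(available_variables={'x': 1}):
#       templar.template('{{ x }}')
# the original available_variables are restored when the block exits.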
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
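# e.g. '{{ mycount }}' with mycount = 42 short-circuits here and returns
# the int 42 rather than the string '42' the renderer would produce
# (illustrative variable name).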
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if sha1_hash in self._cached_result:
return self._cached_result[sha1_hash]
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
if not self.jinja2_native:
unsafe = hasattr(result, '__UNSAFE__')
if convert_data and not self._no_type_regex.match(variable):
# if this looks like a dictionary or list, convert it to such using the safe_eval method
if (result.startswith("{") and not result.startswith(self.environment.variable_start_string)) or \
result.startswith("[") or result in ("True", "False"):
eval_results = safe_eval(result, include_exceptions=True)
if eval_results[1] is None:
result = eval_results[0]
if unsafe:
result = wrap_var(result)
# FIXME: if the safe_eval raised an error, should we do something with it?
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache and only_one:
self._cached_result[sha1_hash] = result
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes size due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
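# Illustrative usage (hypothetical loader and variables):
#   templar = Templar(loader=loader, variables={'greeting': 'hello'})
#   templar.template('{{ greeting }} world')  # -> 'hello world'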
def is_template(self, data):
'''Returns True if the data contains a template.'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
return is_possibly_template(data, self.environment)
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (i.e. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
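# e.g. a bare 'foo.bar' (with 'foo' in the available variables) becomes
# '{{ foo.bar }}' before rendering; 'foo | upper' is wrapped too because it
# contains a filter pipe (illustrative names).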
def _finalize(self, thing):
'''
A custom finalize method for jinja2, which prevents None from being returned. This
avoids rendering the string ``"None"``, as ``None`` has no meaning in YAML.
If ANSIBLE_JINJA2_NATIVE is in use we bypass this and always return the actual value.
'''
if _is_rolled(thing):
# Auto unroll a generator, so that users are not required to
# explicitly use ``|list`` to unroll
# This only affects the scenario where the final result of templating
# is a generator, and not where a filter creates a generator in the middle
# of a template. See ``_unroll_iterator`` for the other case. This is probably
# unnecessary
return list(thing)
if self.jinja2_native:
return thing
return thing if thing is not None else ''
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
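# e.g. ``{{ now(utc=True, fmt='%Y-%m-%d') }}`` renders today's UTC date;
# without fmt the datetime object itself is returned.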
def _query_lookup(self, name, *args, **kwargs):
'''Wrapper for lookup that forces wantlist=True.'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if self.jinja2_native and isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
return ran
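# Illustrative: ``lookup('file', 'a.txt', 'b.txt')`` comma-joins both file
# contents into one string, while ``query('file', 'a.txt', 'b.txt')``
# (wantlist=True) returns them as a list; errors='warn' or 'ignore'
# downgrades lookup failures as handled above.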
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
try:
# allows template header overrides to change jinja2 options.
if overrides is None:
myenv = self.environment
else:
myenv = self.environment.overlay(overrides)
# Get jinja env overrides from template
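# An override header is a first line such as (illustrative):
#   #jinja2:variable_start_string:'<<', variable_end_string:'>>'
# each key:value pair is literal_eval'd and applied to the overlay env.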
if hasattr(data, 'startswith') and data.startswith(JINJA2_OVERRIDE):
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
(key, val) = pair.split(':')
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
self.cur_context = new_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(new_context)
try:
if self.jinja2_native:
res = ansible_native_concat(rf)
else:
res = j2_concat(rf)
unsafe = getattr(new_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
if self.jinja2_native and not isinstance(res, string_types):
return res
if preserve_trailing_newlines:
# The low level calls above do not preserve the newline
# characters at the end of the input data, so we use the
# calculate the difference in newlines and append them
# to the resulting output for parity
#
# jinja2 added a keep_trailing_newline option in 2.7 when
# creating an Environment. That would let us make this code
# better (remove a single newline if
# preserve_trailing_newlines is False). Once we can depend on
# that version being present, modify our code to set that when
# initializing self.environment and remove a single trailing
# newline here if preserve_trailing_newlines is False.
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,275 |
variable_start_string in template affects how jinja is parsing variables outside of template
|
### Summary
I'm rendering a template with a special header to change the variable/block start/end strings:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
```
When rendering this template, some host variables aren't rendered correctly, because the escape sequence override also applies to how these variables are parsed.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/toolset/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/toolset/bin/ansible
python version = 3.8.10 (default, May 12 2021, 15:56:47) [GCC 8.3.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
```
### OS / Environment
For this I'm using the `quay.io/ansible/toolset` Docker image (running in CI), but I was able to reproduce it on Ubuntu the same way.
### Steps to Reproduce
```yaml
#playbook.yml
---
- name: Reproduce issue
hosts: localhost
vars:
var_a: "value"
var_b: "{{ var_a }}"
var_c: "<< var_a >>"
gather_facts: false
tasks:
- set_fact:
var_d: "{{ var_a }}"
- template:
dest: ./rendered.txt
src: ./template.j2
```
Template:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
Here should be var_a: << var_a >>
Here should be var_b: << var_b >>
Here should be var_c: << var_c >>
Here should be var_d: << var_d >>
```
### Expected Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: value
Here should be var_c: << var_a >>
Here should be var_d: value
```
### Actual Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: {{ var_a }}
Here should be var_c: value
Here should be var_d: value
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75275
|
https://github.com/ansible/ansible/pull/75306
|
e5a2fe4c419740e9a709b07b064063f54277b983
|
767b2f07b00be12b9366655095cf24120d35092e
| 2021-07-16T14:35:07Z |
python
| 2021-07-26T14:38:41Z |
test/integration/targets/template/in_template_overrides.j2
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,275 |
variable_start_string in template affects how jinja is parsing variables outside of template
|
### Summary
I'm rendering a template with a special header to change the variable/block start/end strings:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
```
When rendering this template, some host variables aren't rendered correctly, because the escape sequence override also applies to how these variables are parsed.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/toolset/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/toolset/bin/ansible
python version = 3.8.10 (default, May 12 2021, 15:56:47) [GCC 8.3.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
```
### OS / Environment
For this I'm using the `quay.io/ansible/toolset` Docker image (running in CI), but I was able to reproduce it on Ubuntu the same way.
### Steps to Reproduce
```yaml
#playbook.yml
---
- name: Reproduce issue
hosts: localhost
vars:
var_a: "value"
var_b: "{{ var_a }}"
var_c: "<< var_a >>"
gather_facts: false
tasks:
- set_fact:
var_d: "{{ var_a }}"
- template:
dest: ./rendered.txt
src: ./template.j2
```
Template:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
Here should be var_a: << var_a >>
Here should be var_b: << var_b >>
Here should be var_c: << var_c >>
Here should be var_d: << var_d >>
```
### Expected Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: value
Here should be var_c: << var_a >>
Here should be var_d: value
```
### Actual Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: {{ var_a }}
Here should be var_c: value
Here should be var_d: value
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75275
|
https://github.com/ansible/ansible/pull/75306
|
e5a2fe4c419740e9a709b07b064063f54277b983
|
767b2f07b00be12b9366655095cf24120d35092e
| 2021-07-16T14:35:07Z |
python
| 2021-07-26T14:38:41Z |
test/integration/targets/template/in_template_overrides.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,275 |
variable_start_string in template affects how jinja is parsing variables outside of template
|
### Summary
I'm rendering a template with a special header to change the variable/block start/end strings:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
```
When rendering this template, some host variables aren't rendered correctly, because the escape sequence override also applies to how these variables are parsed.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/toolset/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/toolset/bin/ansible
python version = 3.8.10 (default, May 12 2021, 15:56:47) [GCC 8.3.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
```
### OS / Environment
For this I'm using the `quay.io/ansible/toolset` Docker image (running in CI), but I was able to reproduce it on Ubuntu the same way.
### Steps to Reproduce
```yaml
#playbook.yml
---
- name: Reproduce issue
hosts: localhost
vars:
var_a: "value"
var_b: "{{ var_a }}"
var_c: "<< var_a >>"
gather_facts: false
tasks:
- set_fact:
var_d: "{{ var_a }}"
- template:
dest: ./rendered.txt
src: ./template.j2
```
Template:
```txt
#jinja2:variable_start_string:'<<' , variable_end_string:'>>', block_start_string:'<%', block_end_string:'%>'
Here should be var_a: << var_a >>
Here should be var_b: << var_b >>
Here should be var_c: << var_c >>
Here should be var_d: << var_d >>
```
### Expected Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: value
Here should be var_c: << var_a >>
Here should be var_d: value
```
### Actual Results
```console
❯ cat rendered.txt
Here should be var_a: value
Here should be var_b: {{ var_a }}
Here should be var_c: value
Here should be var_d: value
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75275
|
https://github.com/ansible/ansible/pull/75306
|
e5a2fe4c419740e9a709b07b064063f54277b983
|
767b2f07b00be12b9366655095cf24120d35092e
| 2021-07-16T14:35:07Z |
python
| 2021-07-26T14:38:41Z |
test/integration/targets/template/runme.sh
|
#!/usr/bin/env bash
set -eux
ANSIBLE_ROLES_PATH=../ ansible-playbook template.yml -i ../../inventory -v "$@"
# Test for #35571
ansible testhost -i testhost, -m debug -a 'msg={{ hostvars["localhost"] }}' -e "vars1={{ undef }}" -e "vars2={{ vars1 }}"
# Test for https://github.com/ansible/ansible/issues/27262
ansible-playbook ansible_managed.yml -c ansible_managed.cfg -i ../../inventory -v "$@"
# Test for #42585
ANSIBLE_ROLES_PATH=../ ansible-playbook custom_template.yml -i ../../inventory -v "$@"
# Test for several corner cases #57188
ansible-playbook corner_cases.yml -v "$@"
# Test for #57351
ansible-playbook filter_plugins.yml -v "$@"
# https://github.com/ansible/ansible/issues/68699
ansible-playbook unused_vars_include.yml -v "$@"
# https://github.com/ansible/ansible/issues/55152
ansible-playbook undefined_var_info.yml -v "$@"
# https://github.com/ansible/ansible/issues/72615
ansible-playbook 72615.yml -v "$@"
# https://github.com/ansible/ansible/issues/6653
ansible-playbook 6653.yml -v "$@"
# https://github.com/ansible/ansible/issues/72262
ansible-playbook 72262.yml -v "$@"
# ensure unsafe is preserved, even with extra newlines
ansible-playbook unsafe.yml -v "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,270 |
service_facts reports systemd-modules-load.service incorrectly on Centos7
|
### Summary
service_facts is not accurately reporting systemd-modules-load.service status if the service fails to load a module. The service will result in an `Active: failed` state on the machine but Ansible will report `status: static` state in service_facts.
### Issue Type
Bug Report
### Component Name
service_facts
### Ansible Version
```console
$ ansible --version
ansible 2.9.23
config file = None
configured module search path = ['/Users/person/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/person/.virtualenvs/msc_hpc/lib/python3.10/site-packages/ansible
executable location = /Users/person/.virtualenvs/msc_hpc/bin/ansible
python version = 3.10.0b1 (default, Jun 7 2021, 15:45:11) [Clang 12.0.0 (clang-1200.0.32.29)]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Centos 7
### Steps to Reproduce
- Create a phony module and reload the kernel modules:
```shell
echo 'bar' > /etc/modules-load.d/foo.conf
systemctl restart systemd-modules-load.service
```
- Create a simple playbook that gathers service_facts and outputs the results
```yaml
- name: Gather service facts.
service_facts:
- name: Verify systemd-modules-load.service is in a happy state.
debug:
msg: '{{ ansible_facts.services["systemd-modules-load.service"] }}'
```
### Expected Results
- Running the status of the service of the machine shows that it `failed` to run due to an invalid module configuration.
```shell
systemctl status systemd-modules-load.service
```
```shell
● systemd-modules-load.service - Load Kernel Modules
Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2021-07-15 22:39:24 UTC; 26min ago
Docs: man:systemd-modules-load.service(8)
man:modules-load.d(5)
Process: 150775 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=1/FAILURE)
Main PID: 150775 (code=exited, status=1/FAILURE)
```
### Actual Results
```console
- Ansible fails to detect the `failed` status of the service and outputs incorrect results.
TASK [test_msc_hpc : Verify systemd-modules-load.service is started.] **********
ok: [hpc-test] =>
msg:
name: systemd-modules-load.service
source: systemd
state: stopped
status: static
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75270
|
https://github.com/ansible/ansible/pull/75326
|
4dca539a29aab08ccf6af6b3e8870e5c69150488
|
82bab063e7c60b77596c5c87258d5c3398b5efc2
| 2021-07-15T23:41:14Z |
python
| 2021-07-29T15:22:41Z |
changelogs/fragments/service_facts_systemd_improve.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,270 |
service_facts reports systemd-modules-load.service incorrectly on Centos7
|
### Summary
service_facts is not accurately reporting systemd-modules-load.service status if the service fails to load a module. The service will result in an `Active: failed` state on the machine but Ansible will report `status: static` state in service_facts.
### Issue Type
Bug Report
### Component Name
service_facts
### Ansible Version
```console
$ ansible --version
ansible 2.9.23
config file = None
configured module search path = ['/Users/person/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/person/.virtualenvs/msc_hpc/lib/python3.10/site-packages/ansible
executable location = /Users/person/.virtualenvs/msc_hpc/bin/ansible
python version = 3.10.0b1 (default, Jun 7 2021, 15:45:11) [Clang 12.0.0 (clang-1200.0.32.29)]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Centos 7
### Steps to Reproduce
- Create a phony module and reload the kernel modules:
```shell
echo 'bar' > /etc/modules-load.d/foo.conf
systemctl restart systemd-modules-load.service
```
- Create a simple playbook that gathers service_facts and outputs the results
```yaml
- name: Gather service facts.
service_facts:
- name: Verify systemd-modules-load.service is in a happy state.
debug:
msg: '{{ ansible_facts.services["systemd-modules-load.service"] }}'
```
### Expected Results
- Running the status of the service of the machine shows that it `failed` to run due to an invalid module configuration.
```shell
systemctl status systemd-modules-load.service
```
```shell
● systemd-modules-load.service - Load Kernel Modules
Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2021-07-15 22:39:24 UTC; 26min ago
Docs: man:systemd-modules-load.service(8)
man:modules-load.d(5)
Process: 150775 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=1/FAILURE)
Main PID: 150775 (code=exited, status=1/FAILURE)
```
### Actual Results
```console
- Ansible fails to detect the `failed` status of the service and outputs incorrect results.
TASK [test_msc_hpc : Verify systemd-modules-load.service is started.] **********
ok: [hpc-test] =>
msg:
name: systemd-modules-load.service
source: systemd
state: stopped
status: static
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75270
|
https://github.com/ansible/ansible/pull/75326
|
4dca539a29aab08ccf6af6b3e8870e5c69150488
|
82bab063e7c60b77596c5c87258d5c3398b5efc2
| 2021-07-15T23:41:14Z |
python
| 2021-07-29T15:22:41Z |
lib/ansible/modules/service_facts.py
|
#!/usr/bin/python
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# originally copied from AWX's scan_services module to bring this functionality
# into Core
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: service_facts
short_description: Return service state information as fact data
description:
- Return service state information as fact data for various service management utilities.
version_added: "2.5"
requirements: ["Any of the following supported init systems: systemd, sysv, upstart, AIX SRC"]
notes:
- When accessing the C(ansible_facts.services) facts collected by this module,
it is recommended to not use "dot notation" because services can have a C(-)
character in their name which would result in invalid "dot notation", such as
C(ansible_facts.services.zuul-gateway). It is instead recommended to
use the string value of the service name as the key in order to obtain
the fact data value, like C(ansible_facts.services['zuul-gateway'])
- AIX SRC was added in version 2.11.
- Supports C(check_mode).
author:
- Adam Miller (@maxamillion)
'''
EXAMPLES = r'''
- name: Populate service facts
ansible.builtin.service_facts:
- name: Print service facts
ansible.builtin.debug:
var: ansible_facts.services
'''
RETURN = r'''
ansible_facts:
description: Facts to add to ansible_facts about the services on the system
returned: always
type: complex
contains:
services:
description: States of the services with service name as key.
returned: always
type: complex
contains:
source:
description:
- Init system of the service.
- One of C(rcctl), C(systemd), C(sysv), C(upstart), C(src).
returned: always
type: str
sample: sysv
state:
description:
- State of the service.
- 'This commonly includes (but is not limited to) the following: C(failed), C(running), C(stopped) or C(unknown).'
- Depending on the used init system additional states might be returned.
returned: always
type: str
sample: running
status:
description:
- State of the service.
- Either C(enabled), C(disabled), C(static), C(indirect) or C(unknown).
returned: systemd systems or RedHat/SUSE flavored sysvinit/upstart or OpenBSD
type: str
sample: enabled
name:
description: Name of the service.
returned: always
type: str
sample: arp-ethers.service
'''
import platform
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
class BaseService(object):
def __init__(self, module):
self.module = module
self.incomplete_warning = False
class ServiceScanService(BaseService):
def gather_services(self):
services = {}
service_path = self.module.get_bin_path("service")
if service_path is None:
return None
initctl_path = self.module.get_bin_path("initctl")
chkconfig_path = self.module.get_bin_path("chkconfig")
# sysvinit
if service_path is not None and chkconfig_path is None:
rc, stdout, stderr = self.module.run_command("%s --status-all 2>&1 | grep -E \"\\[ (\\+|\\-) \\]\"" % service_path, use_unsafe_shell=True)
for line in stdout.split("\n"):
line_data = line.split()
if len(line_data) < 4:
continue # Skipping because we expected more data
service_name = " ".join(line_data[3:])
if line_data[1] == "+":
service_state = "running"
else:
service_state = "stopped"
services[service_name] = {"name": service_name, "state": service_state, "source": "sysv"}
# Upstart
if initctl_path is not None and chkconfig_path is None:
p = re.compile(r'^\s?(?P<name>.*)\s(?P<goal>\w+)\/(?P<state>\w+)(\,\sprocess\s(?P<pid>[0-9]+))?\s*$')
rc, stdout, stderr = self.module.run_command("%s list" % initctl_path)
real_stdout = stdout.replace("\r", "")
for line in real_stdout.split("\n"):
m = p.match(line)
if not m:
continue
service_name = m.group('name')
service_goal = m.group('goal')
service_state = m.group('state')
if m.group('pid'):
pid = m.group('pid')
else:
pid = None # NOQA
payload = {"name": service_name, "state": service_state, "goal": service_goal, "source": "upstart"}
services[service_name] = payload
# RH sysvinit
elif chkconfig_path is not None:
# print '%s --status-all | grep -E "is (running|stopped)"' % service_path
p = re.compile(
r'(?P<service>.*?)\s+[0-9]:(?P<rl0>on|off)\s+[0-9]:(?P<rl1>on|off)\s+[0-9]:(?P<rl2>on|off)\s+'
r'[0-9]:(?P<rl3>on|off)\s+[0-9]:(?P<rl4>on|off)\s+[0-9]:(?P<rl5>on|off)\s+[0-9]:(?P<rl6>on|off)')
rc, stdout, stderr = self.module.run_command('%s' % chkconfig_path, use_unsafe_shell=True)
# Check for special cases where stdout does not fit pattern
match_any = False
for line in stdout.split('\n'):
if p.match(line):
match_any = True
if not match_any:
p_simple = re.compile(r'(?P<service>.*?)\s+(?P<rl0>on|off)')
match_any = False
for line in stdout.split('\n'):
if p_simple.match(line):
match_any = True
if match_any:
# Try extra flags " -l --allservices" needed for SLES11
rc, stdout, stderr = self.module.run_command('%s -l --allservices' % chkconfig_path, use_unsafe_shell=True)
elif '--list' in stderr:
# Extra flag needed for RHEL5
rc, stdout, stderr = self.module.run_command('%s --list' % chkconfig_path, use_unsafe_shell=True)
for line in stdout.split('\n'):
m = p.match(line)
if m:
service_name = m.group('service')
service_state = 'stopped'
service_status = "disabled"
if m.group('rl3') == 'on':
service_status = "enabled"
rc, stdout, stderr = self.module.run_command('%s %s status' % (service_path, service_name), use_unsafe_shell=True)
service_state = rc
if rc in (0,):
service_state = 'running'
# elif rc in (1,3):
else:
if 'root' in stderr or 'permission' in stderr.lower() or 'not in sudoers' in stderr.lower():
self.incomplete_warning = True
continue
else:
service_state = 'stopped'
service_data = {"name": service_name, "state": service_state, "status": service_status, "source": "sysv"}
services[service_name] = service_data
return services
class SystemctlScanService(BaseService):
def systemd_enabled(self):
# Check if init is the systemd command, using comm since cmdline could be a symlink
try:
f = open('/proc/1/comm', 'r')
except IOError:
# If comm doesn't exist, old kernel, no systemd
return False
for line in f:
if 'systemd' in line:
return True
return False
def gather_services(self):
services = {}
if not self.systemd_enabled():
return None
systemctl_path = self.module.get_bin_path("systemctl", opt_dirs=["/usr/bin", "/usr/local/bin"])
if systemctl_path is None:
return None
rc, stdout, stderr = self.module.run_command("%s list-units --no-pager --type service --all" % systemctl_path, use_unsafe_shell=True)
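# Illustrative `systemctl list-units` lines parsed below:
#   sshd.service      loaded active running OpenSSH server daemon
#   ● foo.service     loaded failed failed  Foo daemon
# failed units carry a leading bullet, shifting the unit name into the
# second whitespace-separated column.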
for line in [svc_line for svc_line in stdout.split('\n') if '.service' in svc_line and 'not-found' not in svc_line]:
service_name = line.split()[0]
if "running" in line:
state_val = "running"
else:
if 'failed' in line:
service_name = line.split()[1]
state_val = "stopped"
services[service_name] = {"name": service_name, "state": state_val, "status": "unknown", "source": "systemd"}
rc, stdout, stderr = self.module.run_command("%s list-unit-files --no-pager --type service --all" % systemctl_path, use_unsafe_shell=True)
for line in [svc_line for svc_line in stdout.split('\n') if '.service' in svc_line and 'not-found' not in svc_line]:
# there is one more column (VENDOR PRESET) from `systemctl list-unit-files` for systemd >= 245
try:
service_name, status_val = line.split()[:2]
except IndexError:
self.module.fail_json(msg="Malformed output discovered from systemd list-unit-files: {0}".format(line))
if service_name not in services:
rc, stdout, stderr = self.module.run_command("%s show %s --property=ActiveState" % (systemctl_path, service_name), use_unsafe_shell=True)
state = 'unknown'
if not rc and stdout != '':
state = stdout.replace('ActiveState=', '').rstrip()
services[service_name] = {"name": service_name, "state": state, "status": status_val, "source": "systemd"}
else:
services[service_name]["status"] = status_val
return services
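# Illustrative `systemctl list-unit-files` lines parsed above:
#   arp-ethers.service   static
#   sshd.service         enabled  enabled
# (the third column, VENDOR PRESET, appears on systemd >= 245).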
class AIXScanService(BaseService):
def gather_services(self):
services = {}
if platform.system() != 'AIX':
return None
lssrc_path = self.module.get_bin_path("lssrc")
if lssrc_path is None:
return None
rc, stdout, stderr = self.module.run_command("%s -a" % lssrc_path)
for line in stdout.split('\n'):
line_data = line.split()
if len(line_data) < 2:
continue # Skipping because we expected more data
if line_data[0] == "Subsystem":
continue # Skip header
service_name = line_data[0]
if line_data[-1] == "active":
service_state = "running"
elif line_data[-1] == "inoperative":
service_state = "stopped"
else:
service_state = "unknown"
services[service_name] = {"name": service_name, "state": service_state, "source": "src"}
return services
class OpenBSDScanService(BaseService):
def query_rcctl(self, cmd):
svcs = []
rc, stdout, stderr = self.module.run_command("%s ls %s" % (self.rcctl_path, cmd))
if 'needs root privileges' in stderr.lower():
self.incomplete_warning = True
return []
for svc in stdout.split('\n'):
if svc == '':
continue
else:
svcs.append(svc)
return svcs
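# `rcctl ls <arg>` prints one service name per line; the 'all', 'on',
# 'started' and 'failed' selectors used below all feed this parser.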
def gather_services(self):
services = {}
self.rcctl_path = self.module.get_bin_path("rcctl")
if self.rcctl_path is None:
return None
for svc in self.query_rcctl('all'):
services[svc] = {'name': svc, 'source': 'rcctl'}
for svc in self.query_rcctl('on'):
services[svc].update({'status': 'enabled'})
for svc in self.query_rcctl('started'):
services[svc].update({'state': 'running'})
# Based on the list of services that are enabled, determine which are disabled
[services[svc].update({'status': 'disabled'}) for svc in services if services[svc].get('status') is None]
# and do the same for those that aren't running
[services[svc].update({'state': 'stopped'}) for svc in services if services[svc].get('state') is None]
# Override the state for services which are marked as 'failed'
for svc in self.query_rcctl('failed'):
services[svc].update({'state': 'failed'})
return services
def main():
module = AnsibleModule(argument_spec=dict(), supports_check_mode=True)
locale = get_best_parsable_locale(module)
module.run_command_environ_update = dict(LANG=locale, LC_ALL=locale)
service_modules = (ServiceScanService, SystemctlScanService, AIXScanService, OpenBSDScanService)
all_services = {}
incomplete_warning = False
for svc_module in service_modules:
svcmod = svc_module(module)
svc = svcmod.gather_services()
if svc is not None:
all_services.update(svc)
if svcmod.incomplete_warning:
incomplete_warning = True
if len(all_services) == 0:
results = dict(skipped=True, msg="Failed to find any services. Sometimes this is due to insufficient privileges.")
else:
results = dict(ansible_facts=dict(services=all_services))
if incomplete_warning:
results['msg'] = "WARNING: Could not find status for all services. Sometimes this is due to insufficient privileges."
module.exit_json(**results)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,841 |
var is suddenly not defined between tasks
|
##### SUMMARY
When running a task, a var is defined in the shell task but undefined in notify
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
notify, loop, var, include_role
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible_modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
```
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=240s
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/var/lib/awx/projects/_394__ansiblerepo/roles', u'/var/lib/awx/projects/_414__t02058_ivr_billing/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVALID_TASK_ATTRIBUTE_FAILED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
```
##Playbook
- hosts: mygroup
tasks:
- debug:
msg: Starting all Apps
- name: Load apps Variables
include_vars: AppsList.yml
tags:
- start
- bounce
- include_role:
name: "{{ web.appname }}"
apply:
ignore_errors: yes
vars:
appname: "{{ web.appname }}"
loop: "{{ apps }}"
loop_control:
loop_var: web
when: apps is defined
ignore_errors: yes
tags:
- start
- bounce
## AppsList.yml
apps:
- { appname: App1 }
- { appname: App2 }
##Role included in loop
- name: get pid
shell: "ps -ef | grep -w {{ appname }} "
register: ServicePid
tags:
- start
##Task is fine
- name: Register Bounce variable
debug:
msg: This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency
verbosity: 5
register: Bounce
ignore_errors: "{{ ansible_check_mode }}"
tags:
- start
- name: Notify start - if nothing changed or we want to bounce and still not running
debug:
msg: Not Running - Starting App
changed_when: true
notify: "restart {{ appname }}"
when: Bounce is not defined or ServicePid.stdout == ""
tags:
- bounce
- start
##task fails - ERROR! 'appname' is undefined
```
##### EXPECTED RESULTS
I expect that 'appname' will be defined inside the loop for all tasks.
this worked in Ansible 2.7
##### ACTUAL RESULTS
```
ok: [host] => {
"changed": false,
"cmd": "ps -ef | grep -w App1|
"delta": "0:00:00.115077",
"end": "2019-07-08 16:34:32.173968",
"invocation": {
"module_args": {
"_raw_params": "ps -ef | grep -w App1
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-…
TASK [webservice/SoapServiceStatus : Register Bounce variable] *****************
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:10
ok: [host
] => {
"msg": "This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency"
}
TASK [webservice/SoapServiceStatus : Notify start - if nothing changed or we want to bounce and still not running] ***
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:19
ERROR! 'appname' is undefined
```
[Role.txt](https://github.com/ansible/ansible/files/3370349/Role.txt)
[AppsList.txt](https://github.com/ansible/ansible/files/3370350/AppsList.txt)
[play.txt](https://github.com/ansible/ansible/files/3370351/play.txt)
|
https://github.com/ansible/ansible/issues/58841
|
https://github.com/ansible/ansible/pull/75244
|
d8dcfe737a841c2075581c13f1f2a5f20397d3ea
|
c8d413164d2a7f76376792bb0028000909ac68b7
| 2019-07-08T21:58:27Z |
python
| 2021-08-03T19:16:19Z |
changelogs/fragments/75244-fix-templated-handler-names.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,841 |
var is suddenly not defined between tasks
|
##### SUMMARY
When running a task, a var is defined in the shell task but undefined in notify
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
notify, loop, var, include_role
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible_modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=240s
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/var/lib/awx/projects/_394__ansiblerepo/roles', u'/var/lib/awx/projects/_414__t02058_ivr_billing/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVALID_TASK_ATTRIBUTE_FAILED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
##Playbook
- hosts: mygroup
tasks:
- debug:
msg: Starting all Apps
- name: Load apps Variables
include_vars: AppsList.yml
tags:
- start
- bounce
- include_role:
name: "{{ web.appname }}"
apply:
ignore_errors: yes
vars:
appname: "{{ web.appname }}"
loop: "{{ apps }}"
loop_control:
loop_var: web
when: apps is defined
ignore_errors: yes
tags:
- start
- bounce
## AppsList.yml
apps:
- { appname: App1}
- { appname: App2 }
##Role included in loop
- name: get pid
shell: "ps -ef | grep -w {{ appname }} "
register: ServicePid
tags:
- start
##Task is fine
- name: Register Bounce variable
debug:
msg: This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency
verbosity: 5
register: Bounce
ignore_errors: "{{ ansible_check_mode }}"
tags:
- start
- name: Notify start - if nothing changed or we want to bounce and still not running
debug:
msg: Not Running - Starting App
changed_when: true
notify: "restart {{ appname }}"
when: Bounce is not defined or ServicePid.stdout == ""
tags:
- bounce
- start
##task fails - ERROR! 'appname' is undefined
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that 'appname' will be defined inside the loop for all tasks.
This worked in Ansible 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```ok: [host] => {
"changed": false,
"cmd": "ps -ef | grep -w App1|
"delta": "0:00:00.115077",
"end": "2019-07-08 16:34:32.173968",
"invocation": {
"module_args": {
"_raw_params": "ps -ef | grep -w App1
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-…
TASK [webservice/SoapServiceStatus : Register Bounce variable] *****************
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:10
ok: [host] => {
"msg": "This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency"
}
TASK [webservice/SoapServiceStatus : Notify start - if nothing changed or we want to bounce and still not running] ***
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:19
ERROR! 'appname' is undefined
```
[Role.txt](https://github.com/ansible/ansible/files/3370349/Role.txt)
[AppsList.txt](https://github.com/ansible/ansible/files/3370350/AppsList.txt)
[play.txt](https://github.com/ansible/ansible/files/3370351/play.txt)
|
https://github.com/ansible/ansible/issues/58841
|
https://github.com/ansible/ansible/pull/75244
|
d8dcfe737a841c2075581c13f1f2a5f20397d3ea
|
c8d413164d2a7f76376792bb0028000909ac68b7
| 2019-07-08T21:58:27Z |
python
| 2021-08-03T19:16:19Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar):
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
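# post_process_whens is invoked from _process_pending_results for add_host /
# add_group result items, so changed_when / failed_when still take effect for
# those strategy-side actions.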
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
    # return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
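            # e.g. with 10 forks and 'throttle: 3' on the task, rewind_point
            # drops from 10 to 3, so only workers 0-2 are ever used for it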
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
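        # Handler names containing templates are rendered at most once and then
        # cached via 'cached_name' above; if the template needs vars that are
        # only in scope for a particular notification (for example a loop or
        # include_role var such as 'appname' in the report above), the lookup
        # can miss or fail -- the behaviour addressed by PR 75244
        # ("fix-templated-handler-names").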
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
                            # find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
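        # For reference, a play task that feeds this helper looks roughly like
        # the following (hypothetical names; 'host_name', 'host_vars' and
        # 'groups' arrive in result_item['add_host']):
        #
        #   - add_host:
        #       name: staging-01
        #       groups: webservers
        #       ansible_host: 10.0.0.5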
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
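                # note: the rendered name is cached after the first notified
                # host, so subsequent hosts reuse it even if their vars differ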
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,841 |
var is suddenly not defined between tasks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When running the tasks, a var that is defined in the shell task is undefined in a later task's notify.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
notify, loop, var, include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible_modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=240s
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/var/lib/awx/projects/_394__ansiblerepo/roles', u'/var/lib/awx/projects/_414__t02058_ivr_billing/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVALID_TASK_ATTRIBUTE_FAILED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
##Playbook
- hosts: mygroup
tasks:
- debug:
msg: Starting all Apps
- name: Load apps Variables
include_vars: AppsList.yml
tags:
- start
- bounce
- include_role:
name: "{{ web.appname }}"
apply:
ignore_errors: yes
vars:
appname: "{{ web.appname }}"
loop: "{{ apps }}"
loop_control:
loop_var: web
when: apps is defined
ignore_errors: yes
tags:
- start
- bounce
## AppsList.yml
apps:
- { appname: App1}
- { appname: App2 }
##Role included in loop
- name: get pid
shell: "ps -ef | grep -w {{ appname }} "
register: ServicePid
tags:
- start
##Task is fine
- name: Register Bounce variable
debug:
msg: This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency
verbosity: 5
register: Bounce
ignore_errors: "{{ ansible_check_mode }}"
tags:
- start
- name: Notify start - if nothing changed or we want to bounce and still not running
debug:
msg: Not Running - Starting App
changed_when: true
notify: "restart {{ appname }}"
when: Bounce is not defined or ServicePid.stdout == ""
tags:
- bounce
- start
##task fails - ERROR! 'appname' is undefined
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that 'appname' will be defined inside the loop for all tasks.
This worked in Ansible 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```ok: [host] => {
"changed": false,
"cmd": "ps -ef | grep -w App1|
"delta": "0:00:00.115077",
"end": "2019-07-08 16:34:32.173968",
"invocation": {
"module_args": {
"_raw_params": "ps -ef | grep -w App1
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-…
TASK [webservice/SoapServiceStatus : Register Bounce variable] *****************
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:10
ok: [host] => {
"msg": "This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency"
}
TASK [webservice/SoapServiceStatus : Notify start - if nothing changed or we want to bounce and still not running] ***
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:19
ERROR! 'appname' is undefined
```
[Role.txt](https://github.com/ansible/ansible/files/3370349/Role.txt)
[AppsList.txt](https://github.com/ansible/ansible/files/3370350/AppsList.txt)
[play.txt](https://github.com/ansible/ansible/files/3370351/play.txt)
|
https://github.com/ansible/ansible/issues/58841
|
https://github.com/ansible/ansible/pull/75244
|
d8dcfe737a841c2075581c13f1f2a5f20397d3ea
|
c8d413164d2a7f76376792bb0028000909ac68b7
| 2019-07-08T21:58:27Z |
python
| 2021-08-03T19:16:19Z |
test/integration/targets/handlers/58841.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,841 |
var is suddenly not defined between tasks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When running a task, the var is defined in the shell task but undefined in notify.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
notify, loop, var, include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible_modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=240s
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/var/lib/awx/projects/_394__ansiblerepo/roles', u'/var/lib/awx/projects/_414__t02058_ivr_billing/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVALID_TASK_ATTRIBUTE_FAILED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
##Playbook
- hosts: mygroup
tasks:
- debug:
msg: Starting all Apps
- name: Load apps Variables
include_vars: AppsList.yml
tags:
- start
- bounce
- include_role:
name: "{{ web.appname }}"
apply:
ignore_errors: yes
vars:
appname: "{{ web.appname }}"
loop: "{{ apps }}"
loop_control:
loop_var: web
when: apps is defined
ignore_errors: yes
tags:
- start
- bounce
## AppsList.yml
apps:
- { appname: App1}
- { appname: App2 }
##Role included in loop
- name: get pid
shell: "ps -ef | grep -w {{ appname }} "
register: ServicePid
tags:
- start
##Task is fine
- name: Register Bounce variable
debug:
msg: This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency
verbosity: 5
register: Bounce
ignore_errors: "{{ ansible_check_mode }}"
tags:
- start
- name: Notify start - if nothing changed or we want to bounce and still not running
debug:
msg: Not Running - Starting App
changed_when: true
notify: "restart {{ appname }}"
when: Bounce is not defined or ServicePid.stdout == ""
tags:
- bounce
- start
##task fails - ERROR! 'appname' is undefined
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that 'appname' will be defined inside the loop for all tasks.
This worked in Ansible 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ok: [host] => {
"changed": false,
"cmd": "ps -ef | grep -w App1|
"delta": "0:00:00.115077",
"end": "2019-07-08 16:34:32.173968",
"invocation": {
"module_args": {
"_raw_params": "ps -ef | grep -w App1
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-…
TASK [webservice/SoapServiceStatus : Register Bounce variable] *****************
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:10
ok: [host] => {
"msg": "This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency"
}
TASK [webservice/SoapServiceStatus : Notify start - if nothing changed or we want to bounce and still not running] ***
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:19
ERROR! 'appname' is undefined
```
[Role.txt](https://github.com/ansible/ansible/files/3370349/Role.txt)
[AppsList.txt](https://github.com/ansible/ansible/files/3370350/AppsList.txt)
[play.txt](https://github.com/ansible/ansible/files/3370351/play.txt)
|
https://github.com/ansible/ansible/issues/58841
|
https://github.com/ansible/ansible/pull/75244
|
d8dcfe737a841c2075581c13f1f2a5f20397d3ea
|
c8d413164d2a7f76376792bb0028000909ac68b7
| 2019-07-08T21:58:27Z |
python
| 2021-08-03T19:16:19Z |
test/integration/targets/handlers/roles/import_template_handler_names/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,841 |
var is suddenly not defined between tasks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When running a task, the var is defined in the shell task but undefined in notify.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
notify, loop, var, include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible_modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=240s
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/var/lib/awx/projects/_394__ansiblerepo/roles', u'/var/lib/awx/projects/_414__t02058_ivr_billing/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVALID_TASK_ATTRIBUTE_FAILED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
##Playbook
- hosts: mygroup
tasks:
- debug:
msg: Starting all Apps
- name: Load apps Variables
include_vars: AppsList.yml
tags:
- start
- bounce
- include_role:
name: "{{ web.appname }}"
apply:
ignore_errors: yes
vars:
appname: "{{ web.appname }}"
loop: "{{ apps }}"
loop_control:
loop_var: web
when: apps is defined
ignore_errors: yes
tags:
- start
- bounce
## AppsList.yml
apps:
- { appname: App1}
- { appname: App2 }
##Role included in loop
- name: get pid
shell: "ps -ef | grep -w {{ appname }} "
register: ServicePid
tags:
- start
##Task is fine
- name: Register Bounce variable
debug:
msg: This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency
verbosity: 5
register: Bounce
ignore_errors: "{{ ansible_check_mode }}"
tags:
- start
- name: Notify start - if nothing changed or we want to bounce and still not running
debug:
msg: Not Running - Starting App
changed_when: true
notify: "restart {{ appname }}"
when: Bounce is not defined or ServicePid.stdout == ""
tags:
- bounce
- start
##task fails - ERROR! 'appname' is undefined
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that 'appname' will be defined inside the loop for all tasks.
This worked in Ansible 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ok: [host] => {
"changed": false,
"cmd": "ps -ef | grep -w App1|
"delta": "0:00:00.115077",
"end": "2019-07-08 16:34:32.173968",
"invocation": {
"module_args": {
"_raw_params": "ps -ef | grep -w App1
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-…
TASK [webservice/SoapServiceStatus : Register Bounce variable] *****************
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:10
ok: [host] => {
"msg": "This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency"
}
TASK [webservice/SoapServiceStatus : Notify start - if nothing changed or we want to bounce and still not running] ***
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:19
ERROR! 'appname' is undefined
```
[Role.txt](https://github.com/ansible/ansible/files/3370349/Role.txt)
[AppsList.txt](https://github.com/ansible/ansible/files/3370350/AppsList.txt)
[play.txt](https://github.com/ansible/ansible/files/3370351/play.txt)
|
https://github.com/ansible/ansible/issues/58841
|
https://github.com/ansible/ansible/pull/75244
|
d8dcfe737a841c2075581c13f1f2a5f20397d3ea
|
c8d413164d2a7f76376792bb0028000909ac68b7
| 2019-07-08T21:58:27Z |
python
| 2021-08-03T19:16:19Z |
test/integration/targets/handlers/roles/template_handler_names/handlers/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,841 |
var is suddenly not defined between tasks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When running a task, the var is defined in the shell task but undefined in notify.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
notify, loop, var, include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible_modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=240s
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/var/lib/awx/projects/_394__ansiblerepo/roles', u'/var/lib/awx/projects/_414__t02058_ivr_billing/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVALID_TASK_ATTRIBUTE_FAILED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
##Playbook
- hosts: mygroup
tasks:
- debug:
msg: Starting all Apps
- name: Load apps Variables
include_vars: AppsList.yml
tags:
- start
- bounce
- include_role:
name: "{{ web.appname }}"
apply:
ignore_errors: yes
vars:
appname: "{{ web.appname }}"
loop: "{{ apps }}"
loop_control:
loop_var: web
when: apps is defined
ignore_errors: yes
tags:
- start
- bounce
## AppsList.yml
apps:
- { appname: App1}
- { appname: App2 }
##Role included in loop
- name: get pid
shell: "ps -ef | grep -w {{ appname }} "
register: ServicePid
tags:
- start
##Task is fine
- name: Register Bounce variable
debug:
msg: This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency
verbosity: 5
register: Bounce
ignore_errors: "{{ ansible_check_mode }}"
tags:
- start
- name: Notify start - if nothing changed or we want to bounce and still not running
debug:
msg: Not Running - Starting App
changed_when: true
notify: "restart {{ appname }}"
when: Bounce is not defined or ServicePid.stdout == ""
tags:
- bounce
- start
##task fails - ERROR! 'appname' is undefined
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that 'appname' will be defined inside the loop for all tasks.
This worked in Ansible 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ok: [host] => {
"changed": false,
"cmd": "ps -ef | grep -w App1|
"delta": "0:00:00.115077",
"end": "2019-07-08 16:34:32.173968",
"invocation": {
"module_args": {
"_raw_params": "ps -ef | grep -w App1
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-…
TASK [webservice/SoapServiceStatus : Register Bounce variable] *****************
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:10
ok: [host] => {
"msg": "This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency"
}
TASK [webservice/SoapServiceStatus : Notify start - if nothing changed or we want to bounce and still not running] ***
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:19
ERROR! 'appname' is undefined
```
[Role.txt](https://github.com/ansible/ansible/files/3370349/Role.txt)
[AppsList.txt](https://github.com/ansible/ansible/files/3370350/AppsList.txt)
[play.txt](https://github.com/ansible/ansible/files/3370351/play.txt)
|
https://github.com/ansible/ansible/issues/58841
|
https://github.com/ansible/ansible/pull/75244
|
d8dcfe737a841c2075581c13f1f2a5f20397d3ea
|
c8d413164d2a7f76376792bb0028000909ac68b7
| 2019-07-08T21:58:27Z |
python
| 2021-08-03T19:16:19Z |
test/integration/targets/handlers/roles/template_handler_names/tasks/evaluation_time.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,841 |
var is suddenly not defined between tasks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When running a task, the var is defined in the shell task but undefined in notify.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
notify, loop, var, include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible_modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=240s
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/var/lib/awx/projects/_394__ansiblerepo/roles', u'/var/lib/awx/projects/_414__t02058_ivr_billing/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVALID_TASK_ATTRIBUTE_FAILED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
##Playbook
- hosts: mygroup
tasks:
- debug:
msg: Starting all Apps
- name: Load apps Variables
include_vars: AppsList.yml
tags:
- start
- bounce
- include_role:
name: "{{ web.appname }}"
apply:
ignore_errors: yes
vars:
appname: "{{ web.appname }}"
loop: "{{ apps }}"
loop_control:
loop_var: web
when: apps is defined
ignore_errors: yes
tags:
- start
- bounce
## AppsList.yml
apps:
- { appname: App1}
- { appname: App2 }
##Role included in loop
- name: get pid
shell: "ps -ef | grep -w {{ appname }} "
register: ServicePid
tags:
- start
##Task is fine
- name: Register Bounce variable
debug:
msg: This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency
verbosity: 5
register: Bounce
ignore_errors: "{{ ansible_check_mode }}"
tags:
- start
- name: Notify start - if nothing changed or we want to bounce and still not running
debug:
msg: Not Running - Starting App
changed_when: true
notify: "restart {{ appname }}"
when: Bounce is not defined or ServicePid.stdout == ""
tags:
- bounce
- start
##task fails - ERROR! 'appname' is undefined
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that 'appname' will be defined inside the loop for all tasks.
This worked in Ansible 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ok: [host] => {
"changed": false,
"cmd": "ps -ef | grep -w App1|
"delta": "0:00:00.115077",
"end": "2019-07-08 16:34:32.173968",
"invocation": {
"module_args": {
"_raw_params": "ps -ef | grep -w App1
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-…
TASK [webservice/SoapServiceStatus : Register Bounce variable] *****************
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:10
ok: [host] => {
"msg": "This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency"
}
TASK [webservice/SoapServiceStatus : Notify start - if nothing changed or we want to bounce and still not running] ***
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:19
ERROR! 'appname' is undefined
```
[Role.txt](https://github.com/ansible/ansible/files/3370349/Role.txt)
[AppsList.txt](https://github.com/ansible/ansible/files/3370350/AppsList.txt)
[play.txt](https://github.com/ansible/ansible/files/3370351/play.txt)
|
https://github.com/ansible/ansible/issues/58841
|
https://github.com/ansible/ansible/pull/75244
|
d8dcfe737a841c2075581c13f1f2a5f20397d3ea
|
c8d413164d2a7f76376792bb0028000909ac68b7
| 2019-07-08T21:58:27Z |
python
| 2021-08-03T19:16:19Z |
test/integration/targets/handlers/roles/template_handler_names/tasks/lazy_evaluation.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,841 |
var is suddenly not defined between tasks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When running a task, the var is defined in the shell task but undefined in notify.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
notify, loop, var, include_role
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible_modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=240s
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/var/lib/awx/projects/_394__ansiblerepo/roles', u'/var/lib/awx/projects/_414__t02058_ivr_billing/roles']
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = True
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVALID_TASK_ATTRIBUTE_FAILED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
##Playbook
- hosts: mygroup
tasks:
- debug:
msg: Starting all Apps
- name: Load apps Variables
include_vars: AppsList.yml
tags:
- start
- bounce
- include_role:
name: "{{ web.appname }}"
apply:
ignore_errors: yes
vars:
appname: "{{ web.appname }}"
loop: "{{ apps }}"
loop_control:
loop_var: web
when: apps is defined
ignore_errors: yes
tags:
- start
- bounce
## AppsList.yml
apps:
- { appname: App1}
- { appname: App2 }
##Role included in loop
- name: get pid
shell: "ps -ef | grep -w {{ appname }} "
register: ServicePid
tags:
- start
##Task is fine
- name: Register Bounce variable
debug:
msg: This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency
verbosity: 5
register: Bounce
ignore_errors: "{{ ansible_check_mode }}"
tags:
- start
- name: Notify start - if nothing changed or we want to bounce and still not running
debug:
msg: Not Running - Starting App
changed_when: true
notify: "restart {{ appname }}"
when: Bounce is not defined or ServicePid.stdout == ""
tags:
- bounce
- start
##task fails - ERROR! 'appname' is undefined
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect that 'appname' will be defined inside the loop for all tasks.
This worked in Ansible 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ok: [host] => {
"changed": false,
"cmd": "ps -ef | grep -w App1|
"delta": "0:00:00.115077",
"end": "2019-07-08 16:34:32.173968",
"invocation": {
"module_args": {
"_raw_params": "ps -ef | grep -w App1
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-…
TASK [webservice/SoapServiceStatus : Register Bounce variable] *****************
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:10
ok: [host] => {
"msg": "This task paired with the next triggers a bounce with bounce tag, but skips the bounce without it, preserving idempotency"
}
TASK [webservice/SoapServiceStatus : Notify start - if nothing changed or we want to bounce and still not running] ***
task path: /var/lib/awx/projects/_394__ansiblerepo/playbooks/roles/webservice/SoapServiceStatus/tasks/main.yml:19
ERROR! 'appname' is undefined
```
[Role.txt](https://github.com/ansible/ansible/files/3370349/Role.txt)
[AppsList.txt](https://github.com/ansible/ansible/files/3370350/AppsList.txt)
[play.txt](https://github.com/ansible/ansible/files/3370351/play.txt)
|
https://github.com/ansible/ansible/issues/58841
|
https://github.com/ansible/ansible/pull/75244
|
d8dcfe737a841c2075581c13f1f2a5f20397d3ea
|
c8d413164d2a7f76376792bb0028000909ac68b7
| 2019-07-08T21:58:27Z |
python
| 2021-08-03T19:16:19Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handlers via listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying nonexistent handlers results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying nonexistent handlers produces no error when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
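# The next test covers https://github.com/ansible/ansible/issues/58841 (templated handler names)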
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
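# Note: this target is normally driven by ansible-test from an Ansible source
# checkout, e.g. `ansible-test integration handlers --docker default -v`;
# the exact invocation may vary by Ansible version.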
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,371 |
Unable to render a template containing lookup inside an imported macro
|
### Summary
I am trying to isolate a lookup call with multiple parameters (hashi_vault, for example) into a Jinja2 macro in a separate file, to be reused later from multiple high-level templates. A template containing a call to this kind of imported macro is not rendered; instead, an error `argument of type 'NoneType' is not iterable` is thrown.
As soon as I move the macro from the separate file into the high-level template itself, the template is rendered correctly. If the macro is not called, the template is also rendered correctly. If the macro does not contain a lookup call, the template is also rendered correctly.
The issue does not seem to depend on a specific lookup.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible 2.9.16
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 13 2020, 02:51:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = [u'/home/user/aes_infra/debug/hosts']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
RHEL 7.9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
playbook.yml:
```yaml (paste below)
- hosts: localhost
tasks:
- template:
src: template_with_included_macro.j2
dest: /tmp/something
```
template_with_included_macro.j2:
```yaml (paste below)
{% from 'some_macro.j2' import some_fn as some_fn %}
{{ some_fn('some parameter value') }}
```
some_macro.j2:
```yaml (paste below)
{% macro some_fn(some_value) -%}
{{ some_value }}: {{ lookup('lines', 'date') }}
{%- endmacro %}
```
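A commonly suggested workaround, noted here as an assumption based on Jinja2's documented import semantics and not verified against this Ansible release: Jinja2 imports do not receive the calling template's context by default, so importing the macro `with context` may make `lookup` visible inside it.
```yaml
{% from 'some_macro.j2' import some_fn as some_fn with context %}
{{ some_fn('some parameter value') }}
```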
### Expected Results
/tmp/something is created containing a line similar to:
`some parameter value: Fri Jul 30 19:52:01 MSK 2021`
### Actual Results
```console
ansible-playbook playbook.yml -vvvv
ansible-playbook 2.9.16
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Aug 13 2020, 02:51:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/user/aes_infra/debug/hosts as it did not pass its verify_file() method
script declined parsing /home/user/aes_infra/debug/hosts as it did not pass its verify_file() method
auto declined parsing /home/user/aes_infra/debug/hosts as it did not pass its verify_file() method
Parsed /home/user/aes_infra/debug/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.pyc
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.
PLAYBOOK: playbook.yml ***************************************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
become_method: sudo
inventory: (u'/home/user/aes_infra/debug/hosts',)
forks: 5
tags: (u'all',)
verbosity: 4
connection: smart
timeout: 10
1 plays in playbook.yml
PLAY [localhost] *********************************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/aes_infra/debug/playbook.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1627663950.49-10712-11567199494796 `" && echo ansible-tmp-1627663950.49-10712-11567199494796="` echo /home/user/.ansible/tmp/ansible-tmp-1627663950.49-10712-11567199494796 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-10703AtxaSw/tmp6TEQtq TO /home/user/.ansible/tmp/ansible-tmp-1627663950.49-10712-11567199494796/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1627663950.49-10712-11567199494796/ /home/user/.ansible/tmp/ansible-tmp-1627663950.49-10712-11567199494796/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /home/user/.ansible/tmp/ansible-tmp-1627663950.49-10712-11567199494796/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1627663950.49-10712-11567199494796/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [template] **********************************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/aes_infra/debug/playbook.yml:3
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1627663951.84-10780-154162593862092 `" && echo ansible-tmp-1627663951.84-10780-154162593862092="` echo /home/user/.ansible/tmp/ansible-tmp-1627663951.84-10780-154162593862092 `" ) && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1627663951.84-10780-154162593862092/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"msg": "AnsibleError: Unexpected templating type error occurred on ({% from 'some_macro.j2' import some_fn as some_fn %}\r\n{{ some_fn('some parameter value') }}\r\n): argument of type 'NoneType' is not iterable"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75371
|
https://github.com/ansible/ansible/pull/75384
|
1b95b1e7a43ac7c779ba730649922af917b387db
|
5a3807656860cc52c7e6b3eebc5c9397585525ba
| 2021-07-30T17:13:02Z |
python
| 2021-08-05T07:59:54Z |
changelogs/fragments/75371-import_template_globals.yml
|