Dataset schema:

| column | dtype | values |
|---|---|---|
| status | string | 1 class |
| repo_name | string | 31 classes |
| repo_url | string | 31 classes |
| issue_id | int64 | 1 to 104k |
| title | string | lengths 4 to 369 |
| body | string | lengths 0 to 254k |
| issue_url | string | lengths 37 to 56 |
| pull_url | string | lengths 37 to 54 |
| before_fix_sha | string | length 40 |
| after_fix_sha | string | length 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | string | 5 classes |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | string | lengths 4 to 188 |
| file_content | string | lengths 0 to 5.12M |
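The rows below are easiest to read record by record. As a quick orientation, here is a hedged sketch of how one record of a dataset with this schema could be loaded and inspected with the Hugging Face `datasets` library; the dataset identifier is a placeholder assumption, since this dump does not name one:

```python
# Minimal sketch: load one record and peek at its fields. The identifier
# "org/bug-fix-corpus" is a placeholder; this dump does not give the
# dataset's real name.
from datasets import load_dataset

ds = load_dataset("org/bug-fix-corpus", split="train")
row = ds[0]
print(row["repo_name"], row["issue_id"], row["title"])
print(row["updated_file"])
print(row["file_content"][:200])  # file_content can run to ~5.12M characters
```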
---

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 73456
title: lookup("unvault") | from_yaml fails
body:

##### SUMMARY
The `unvault` lookup plugin dumps a Python representation instead of the actual raw data, leading to an unparsable and hardly usable string.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
unvault

##### ANSIBLE VERSION
```
ansible 2.10.5
  config file = ansible.cfg
  configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = ~/.local/lib/python3.8/site-packages/ansible
  executable location = ~/.local/bin/ansible
  python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```

##### CONFIGURATION
```
ANSIBLE_NOCOWS(~/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(~/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(~/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=15m -o ForwardAgent=yes
CACHE_PLUGIN(~/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(~/ansible/ansible.cfg) = ./.facts.d
CACHE_PLUGIN_TIMEOUT(~/ansible/ansible.cfg) = 3600
DEFAULT_FILTER_PLUGIN_PATH(~/ansible/ansible.cfg) = ['~/ansible/filter_plugins']
DEFAULT_HOST_LIST(~/ansible/ansible.cfg) = ['~/ansible/inventory']
DEFAULT_LOOKUP_PLUGIN_PATH(~/ansible/ansible.cfg) = ['~/ansible/lookup_plugins']
DEFAULT_ROLES_PATH(~/ansible/ansible.cfg) = ['~/ansible', '~/ansible/roles']
DEFAULT_VAULT_IDENTITY_LIST(~/ansible/ansible.cfg) = ['prod@./vault-client.sh', 'preprod@./vault-client.sh']
RETRY_FILES_ENABLED(~/ansible/ansible.cfg) = False
```

##### OS / ENVIRONMENT
Ubuntu

##### STEPS TO REPRODUCE
- Create a vault containing exactly `foo: bar`, e.g. using `ansible-vault create /tmp/vault.yml`
- Run `ansible -m debug -a "msg={{ lookup('unvault', '/tmp/vault.yml') | from_yaml }}" localhost`

##### EXPECTED RESULTS
```
localhost | SUCCESS => {
    "msg": {
        "foo": "bar"
    }
}
```
(Same as `ansible -m debug -a "msg={{ 'foo: bar' | from_yaml }}" localhost`.)

##### ACTUAL RESULTS
```
localhost | SUCCESS => {
    "msg": "b'foo: bar\\n'"
}
```
(As if `from_yaml` had no effect.)

issue_url: https://github.com/ansible/ansible/issues/73456
pull_url: https://github.com/ansible/ansible/pull/73571
before_fix_sha: bcefb6b5f1e5b502e4368f74637d18036f0a2477
after_fix_sha: d0fda3e9011ece1be85e08835550f5063823f087
report_datetime: 2021-02-02T22:58:23Z
language: python
commit_datetime: 2021-02-11T19:27:47Z
updated_file: changelogs/fragments/73456-let-vault-lookup-output-string.yml
file_content: (empty)
---

updated_file: lib/ansible/plugins/lookup/unvault.py (same issue and fix metadata as the row above)
file_content:
```python
# (c) 2020 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = """
    name: unvault
    author: Ansible Core Team
    version_added: "2.10"
    short_description: read vaulted file(s) contents
    description:
        - This lookup returns the contents from vaulted (or not) file(s) on the Ansible controller's file system.
    options:
        _terms:
            description: path(s) of files to read
            required: True
    notes:
        - This lookup does not understand 'globbing' nor shell environment variables.
"""

EXAMPLES = """
- debug: msg="the value of foo.txt is {{lookup('unvault', '/etc/foo.txt')|to_string }}"
"""

RETURN = """
    _raw:
        description:
            - content of file(s) as bytes
        type: list
        elements: raw
"""

from ansible.errors import AnsibleParserError
from ansible.plugins.lookup import LookupBase
from ansible.module_utils._text import to_text
from ansible.utils.display import Display

display = Display()


class LookupModule(LookupBase):

    def run(self, terms, variables=None, **kwargs):

        self.set_options(direct=kwargs)

        ret = []

        for term in terms:
            display.debug("Unvault lookup term: %s" % term)

            # Find the file in the expected search path
            lookupfile = self.find_file_in_search_path(variables, 'files', term)
            display.vvvv(u"Unvault lookup found %s" % lookupfile)
            if lookupfile:
                actual_file = self._loader.get_real_file(lookupfile, decrypt=True)
                with open(actual_file, 'rb') as f:
                    b_contents = f.read()
                ret.append(b_contents)
            else:
                raise AnsibleParserError('Unable to find file matching "%s" ' % term)

        return ret
```
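In the file above, the decrypted contents are read as bytes and appended to the result list unchanged, while the imported `to_text` helper goes unused; templating then stringifies the `bytes` object via its `repr()`, which is exactly the `"b'foo: bar\n'"` output in the report. Below is a minimal, self-contained sketch of the failure and of the kind of one-line decode the fix needs; whether PR 73571 made exactly this change is not shown in this dump:

```python
# Sketch of the unvault bug: returning bytes makes Jinja2 stringify the value
# as its repr(), so downstream filters like from_yaml see "b'foo: bar\n'"
# instead of the file's text. Decoding first (what to_text() from
# ansible.module_utils._text does) restores the parseable string.
import yaml

b_contents = b'foo: bar\n'           # bytes read from the decrypted vault file

broken = str(b_contents)             # "b'foo: bar\\n'" -- the reported output
fixed = b_contents.decode('utf-8')   # roughly what to_text(b_contents) returns

print(broken)                        # b'foo: bar\n'  (a repr, not the content)
print(yaml.safe_load(fixed))         # {'foo': 'bar'}
```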
---

Five further rows repeat the same issue and fix metadata, each with an empty file_content; their updated_file values are:

- test/integration/targets/unvault/aliases
- test/integration/targets/unvault/main.yml
- test/integration/targets/unvault/password
- test/integration/targets/unvault/runme.sh
- test/integration/targets/unvault/vault
---

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 73071
title: ansible-galaxy collection install sometimes does not find all versions
body:

##### SUMMARY
See https://app.shippable.com/github/ansible-collections/community.general/runs/6193/46/console for an example, I've already seen this multiple times today (also on AZP).

`ansible-galaxy -vvv collection install ansible.posix` was run three times in a row, and ~~always~~ two times out of three failed with `This collection only contains pre-releases.` (the first fail was a read timeout):

```
+ ansible-galaxy -vvv collection install ansible.posix
01:54 [DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the
01:54 controller starting with Ansible 2.12. Current version: 2.7.15+ (default, Feb
01:54 9 2019, 11:33:22) [GCC 5.4.0 20160609]. This feature will be removed from
01:54 ansible-core in version 2.12. Deprecation warnings can be disabled by setting
01:54 deprecation_warnings=False in ansible.cfg.
01:54 /root/venv/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in the next release.
01:54 from cryptography.exceptions import InvalidSignature
01:55 [WARNING]: You are running the development version of Ansible. You should only
01:55 run Ansible from "devel" if you are modifying the Ansible engine, or trying out
01:55 features under development. This is a rapidly changing source of code and can
01:55 become unstable at any point.
01:55 [DEPRECATION WARNING]: Setting verbosity before the arg sub command is
01:55 deprecated, set the verbosity after the sub command. This feature will be
01:55 removed from ansible-core in version 2.13. Deprecation warnings can be disabled
01:55 by setting deprecation_warnings=False in ansible.cfg.
01:55 ansible-galaxy 2.11.0.dev0
01:55 config file = None
01:55 configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
01:55 ansible python module location = /root/venv/lib/python2.7/site-packages/ansible
01:55 ansible collection location = /root/.ansible
01:55 executable location = /root/venv/bin/ansible-galaxy
01:55 python version = 2.7.15+ (default, Feb 9 2019, 11:33:22) [GCC 5.4.0 20160609]
01:55 jinja version = 2.11.2
01:55 libyaml = True
01:55 No config file found; using defaults
01:55 Starting galaxy collection install process
01:55 Found installed collection ansible.netcommon:1.4.1 at '/root/.ansible/ansible_collections/ansible/netcommon'
01:55 Found installed collection community.kubernetes:1.1.1 at '/root/.ansible/ansible_collections/community/kubernetes'
01:55 Found installed collection community.internal_test_tools:0.2.1 at '/root/.ansible/ansible_collections/community/internal_test_tools'
01:55 Skipping '/root/.ansible/ansible_collections/community/general/.git' for collection build
01:55 Skipping '/root/.ansible/ansible_collections/community/general/galaxy.yml' for collection build
01:55 Found installed collection community.general:2.0.0 at '/root/.ansible/ansible_collections/community/general'
01:55 Found installed collection google.cloud:1.0.1 at '/root/.ansible/ansible_collections/google/cloud'
01:55 Process install dependency map
01:55 Processing requirement collection 'ansible.posix'
01:55 Opened /root/.ansible/galaxy_token
01:56 Collection 'ansible.posix' obtained from server default https://galaxy.ansible.com/api/
01:56 ERROR! Cannot meet requirement * for dependency ansible.posix from source 'https://galaxy.ansible.com/api/'. Available versions before last requirement added:
01:56 Requirements from:
01:56 base - 'ansible.posix:*'
01:56 This collection only contains pre-releases. Utilize `--pre` to install pre-releases, or explicitly provide the pre-release version.
```

It could be that for some reason, only the first page of the results is queried (https://galaxy.ansible.com/api/v2/collections/ansible/posix/versions/), which contains only pre-releases. I guess this is probably a problem on Galaxy's side, but in case it is not, I'm creating this issue in ansible/ansible.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
ansible-galaxy collection

##### ANSIBLE VERSION
```paste below
devel
```

issue_url: https://github.com/ansible/ansible/issues/73071
pull_url: https://github.com/ansible/ansible/pull/73557
before_fix_sha: 29aef842d77b24105ce356d4b313be2269d466d6
after_fix_sha: 00bd0b893d5d21de040b53032c466707bacb3b93
report_datetime: 2020-12-28T10:01:37Z
language: python
commit_datetime: 2021-02-15T14:45:01Z
updated_file: changelogs/fragments/73557-ansible-galaxy-cache-paginated-response.yml
file_content: (empty)
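The changelog fragment name ties the fix to caching of paginated Galaxy responses. In the api.py listed in the next row, `_call_galaxy` merges every page of a paginated response into a single cache entry keyed by server and URL path, so a later cache hit can replay the complete multi-page result instead of only page one. A self-contained sketch of that accumulate-under-one-key shape follows (the dict literals stand in for real API pages; field names mirror the code below):

```python
# Sketch of the response-cache shape used by lib/ansible/galaxy/api.py:
# paginated results are appended into one entry per URL path, with a 1-day
# expiry, so a cache hit returns the union of all pages.
import datetime

cache = {'version': 1}
server_cache = cache.setdefault('galaxy.ansible.com:', {})

path = '/api/v2/collections/ansible/posix/versions/'
expires = datetime.datetime.utcnow() + datetime.timedelta(days=1)
entry = server_cache.setdefault(path, {
    'expires': expires.strftime('%Y-%m-%dT%H:%M:%SZ'),
    'paginated': True,
    'results': [],
})

pages = [{'results': [{'version': '1.0.0'}]},    # stand-ins for real pages
         {'results': [{'version': '1.1.0'}]}]
for page in pages:
    entry['results'] += page['results']          # every page lands in one entry

print([v['version'] for v in entry['results']])  # ['1.0.0', '1.1.0']
```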
---

updated_file: lib/ansible/galaxy/api.py (same issue and fix metadata as the row above)
file_content:
```python
# (C) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import collections
import datetime
import functools
import hashlib
import json
import os
import stat
import tarfile
import time
import threading

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible.module_utils.six.moves.urllib.parse import quote as urlquote, urlencode, urlparse
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.urls import open_url, prepare_multipart
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
from ansible.utils.path import makedirs_safe

try:
    from urllib.parse import urlparse
except ImportError:
    # Python 2
    from urlparse import urlparse

display = Display()
_CACHE_LOCK = threading.Lock()


def cache_lock(func):
    def wrapped(*args, **kwargs):
        with _CACHE_LOCK:
            return func(*args, **kwargs)
    return wrapped


def g_connect(versions):
    """
    Wrapper to lazily initialize connection info to Galaxy and verify the API versions required are available on the
    endpoint.

    :param versions: A list of API versions that the function supports.
    """
    def decorator(method):
        def wrapped(self, *args, **kwargs):
            if not self._available_api_versions:
                display.vvvv("Initial connection to galaxy_server: %s" % self.api_server)

                # Determine the type of Galaxy server we are talking to. First try it unauthenticated then with Bearer
                # auth for Automation Hub.
                n_url = self.api_server
                error_context_msg = 'Error when finding available api versions from %s (%s)' % (self.name, n_url)

                if self.api_server == 'https://galaxy.ansible.com' or self.api_server == 'https://galaxy.ansible.com/':
                    n_url = 'https://galaxy.ansible.com/api/'

                try:
                    data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg, cache=True)
                except (AnsibleError, GalaxyError, ValueError, KeyError) as err:
                    # Either the URL doesnt exist, or other error. Or the URL exists, but isn't a galaxy API
                    # root (not JSON, no 'available_versions') so try appending '/api/'
                    if n_url.endswith('/api') or n_url.endswith('/api/'):
                        raise

                    # Let exceptions here bubble up but raise the original if this returns a 404 (/api/ wasn't found).
                    n_url = _urljoin(n_url, '/api/')
                    try:
                        data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg, cache=True)
                    except GalaxyError as new_err:
                        if new_err.http_code == 404:
                            raise err
                        raise

                if 'available_versions' not in data:
                    raise AnsibleError("Tried to find galaxy API root at %s but no 'available_versions' are available "
                                       "on %s" % (n_url, self.api_server))

                # Update api_server to point to the "real" API root, which in this case could have been the configured
                # url + '/api/' appended.
                self.api_server = n_url

                # Default to only supporting v1, if only v1 is returned we also assume that v2 is available even though
                # it isn't returned in the available_versions dict.
                available_versions = data.get('available_versions', {u'v1': u'v1/'})
                if list(available_versions.keys()) == [u'v1']:
                    available_versions[u'v2'] = u'v2/'

                self._available_api_versions = available_versions
                display.vvvv("Found API version '%s' with Galaxy server %s (%s)"
                             % (', '.join(available_versions.keys()), self.name, self.api_server))

            # Verify that the API versions the function works with are available on the server specified.
            available_versions = set(self._available_api_versions.keys())
            common_versions = set(versions).intersection(available_versions)
            if not common_versions:
                raise AnsibleError("Galaxy action %s requires API versions '%s' but only '%s' are available on %s %s"
                                   % (method.__name__, ", ".join(versions), ", ".join(available_versions),
                                      self.name, self.api_server))

            return method(self, *args, **kwargs)
        return wrapped
    return decorator


def get_cache_id(url):
    """ Gets the cache ID for the URL specified. """
    url_info = urlparse(url)

    port = None
    try:
        port = url_info.port
    except ValueError:
        pass  # While the URL is probably invalid, let the caller figure that out when using it

    # Cannot use netloc because it could contain credentials if the server specified had them in there.
    return '%s:%s' % (url_info.hostname, port or '')


@cache_lock
def _load_cache(b_cache_path):
    """ Loads the cache file requested if possible. The file must not be world writable. """
    cache_version = 1

    if not os.path.isfile(b_cache_path):
        display.vvvv("Creating Galaxy API response cache file at '%s'" % to_text(b_cache_path))
        with open(b_cache_path, 'w'):
            os.chmod(b_cache_path, 0o600)

    cache_mode = os.stat(b_cache_path).st_mode
    if cache_mode & stat.S_IWOTH:
        display.warning("Galaxy cache has world writable access (%s), ignoring it as a cache source."
                        % to_text(b_cache_path))
        return

    with open(b_cache_path, mode='rb') as fd:
        json_val = to_text(fd.read(), errors='surrogate_or_strict')

    try:
        cache = json.loads(json_val)
    except ValueError:
        cache = None

    if not isinstance(cache, dict) or cache.get('version', None) != cache_version:
        display.vvvv("Galaxy cache file at '%s' has an invalid version, clearing" % to_text(b_cache_path))
        cache = {'version': cache_version}

        # Set the cache after we've cleared the existing entries
        with open(b_cache_path, mode='wb') as fd:
            fd.write(to_bytes(json.dumps(cache), errors='surrogate_or_strict'))

    return cache


def _urljoin(*args):
    return '/'.join(to_native(a, errors='surrogate_or_strict').strip('/') for a in args + ('',) if a)


class GalaxyError(AnsibleError):
    """ Error for bad Galaxy server responses. """

    def __init__(self, http_error, message):
        super(GalaxyError, self).__init__(message)
        self.http_code = http_error.code
        self.url = http_error.geturl()

        try:
            http_msg = to_text(http_error.read())
            err_info = json.loads(http_msg)
        except (AttributeError, ValueError):
            err_info = {}

        url_split = self.url.split('/')
        if 'v2' in url_split:
            galaxy_msg = err_info.get('message', http_error.reason)
            code = err_info.get('code', 'Unknown')
            full_error_msg = u"%s (HTTP Code: %d, Message: %s Code: %s)" % (message, self.http_code, galaxy_msg, code)
        elif 'v3' in url_split:
            errors = err_info.get('errors', [])
            if not errors:
                errors = [{}]  # Defaults are set below, we just need to make sure 1 error is present.

            message_lines = []
            for error in errors:
                error_msg = error.get('detail') or error.get('title') or http_error.reason
                error_code = error.get('code') or 'Unknown'
                message_line = u"(HTTP Code: %d, Message: %s Code: %s)" % (self.http_code, error_msg, error_code)
                message_lines.append(message_line)

            full_error_msg = "%s %s" % (message, ', '.join(message_lines))
        else:
            # v1 and unknown API endpoints
            galaxy_msg = err_info.get('default', http_error.reason)
            full_error_msg = u"%s (HTTP Code: %d, Message: %s)" % (message, self.http_code, galaxy_msg)

        self.message = to_native(full_error_msg)


# Keep the raw string results for the date. It's too complex to parse as a datetime object and the various APIs return
# them in different formats.
CollectionMetadata = collections.namedtuple('CollectionMetadata', ['namespace', 'name', 'created_str', 'modified_str'])


class CollectionVersionMetadata:

    def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies):
        """
        Contains common information about a collection on a Galaxy server to smooth through API differences for
        Collection and define a standard meta info for a collection.

        :param namespace: The namespace name.
        :param name: The collection name.
        :param version: The version that the metadata refers to.
        :param download_url: The URL to download the collection.
        :param artifact_sha256: The SHA256 of the collection artifact for later verification.
        :param dependencies: A dict of dependencies of the collection.
        """
        self.namespace = namespace
        self.name = name
        self.version = version
        self.download_url = download_url
        self.artifact_sha256 = artifact_sha256
        self.dependencies = dependencies


@functools.total_ordering
class GalaxyAPI:
    """ This class is meant to be used as a API client for an Ansible Galaxy server """

    def __init__(
            self, galaxy, name, url,
            username=None, password=None, token=None, validate_certs=True,
            available_api_versions=None,
            clear_response_cache=False, no_cache=True,
            priority=float('inf'),
    ):
        self.galaxy = galaxy
        self.name = name
        self.username = username
        self.password = password
        self.token = token
        self.api_server = url
        self.validate_certs = validate_certs
        self._available_api_versions = available_api_versions or {}
        self._priority = priority

        b_cache_dir = to_bytes(C.config.get_config_value('GALAXY_CACHE_DIR'), errors='surrogate_or_strict')
        makedirs_safe(b_cache_dir, mode=0o700)
        self._b_cache_path = os.path.join(b_cache_dir, b'api.json')

        if clear_response_cache:
            with _CACHE_LOCK:
                if os.path.exists(self._b_cache_path):
                    display.vvvv("Clearing cache file (%s)" % to_text(self._b_cache_path))
                    os.remove(self._b_cache_path)

        self._cache = None
        if not no_cache:
            self._cache = _load_cache(self._b_cache_path)

        display.debug('Validate TLS certificates for %s: %s' % (self.api_server, self.validate_certs))

    def __str__(self):
        # type: (GalaxyAPI) -> str
        """Render GalaxyAPI as a native string representation."""
        return to_native(self.name)

    def __unicode__(self):
        # type: (GalaxyAPI) -> unicode
        """Render GalaxyAPI as a unicode/text string representation."""
        return to_text(self.name)

    def __repr__(self):
        # type: (GalaxyAPI) -> str
        """Render GalaxyAPI as an inspectable string representation."""
        return (
            '<{instance!s} "{name!s}" @ {url!s} with priority {priority!s}>'.
            format(
                instance=self, name=self.name,
                priority=self._priority, url=self.api_server,
            )
        )

    def __lt__(self, other_galaxy_api):
        # type: (GalaxyAPI, GalaxyAPI) -> Union[bool, 'NotImplemented']
        """Return whether the instance priority is higher than other."""
        if not isinstance(other_galaxy_api, self.__class__):
            return NotImplemented

        return (
            self._priority > other_galaxy_api._priority or
            self.name < self.name
        )

    @property
    @g_connect(['v1', 'v2', 'v3'])
    def available_api_versions(self):
        # Calling g_connect will populate self._available_api_versions
        return self._available_api_versions

    def _call_galaxy(self, url, args=None, headers=None, method=None, auth_required=False, error_context_msg=None,
                     cache=False):
        url_info = urlparse(url)
        cache_id = get_cache_id(url)
        if cache and self._cache:
            server_cache = self._cache.setdefault(cache_id, {})
            iso_datetime_format = '%Y-%m-%dT%H:%M:%SZ'

            valid = False
            if url_info.path in server_cache:
                expires = datetime.datetime.strptime(server_cache[url_info.path]['expires'], iso_datetime_format)
                valid = datetime.datetime.utcnow() < expires

            if valid and not url_info.query:
                # Got a hit on the cache and we aren't getting a paginated response
                path_cache = server_cache[url_info.path]
                if path_cache.get('paginated'):
                    if '/v3/' in url_info.path:
                        res = {'links': {'next': None}}
                    else:
                        res = {'next': None}

                    # Technically some v3 paginated APIs return in 'data' but the caller checks the keys for this so
                    # always returning the cache under results is fine.
                    res['results'] = []
                    for result in path_cache['results']:
                        res['results'].append(result)

                else:
                    res = path_cache['results']

                return res

            elif not url_info.query:
                # The cache entry had expired or does not exist, start a new blank entry to be filled later.
                expires = datetime.datetime.utcnow()
                expires += datetime.timedelta(days=1)
                server_cache[url_info.path] = {
                    'expires': expires.strftime(iso_datetime_format),
                    'paginated': False,
                }

        headers = headers or {}
        self._add_auth_token(headers, url, required=auth_required)

        try:
            display.vvvv("Calling Galaxy at %s" % url)
            resp = open_url(to_native(url), data=args, validate_certs=self.validate_certs, headers=headers,
                            method=method, timeout=20, http_agent=user_agent(), follow_redirects='safe')
        except HTTPError as e:
            raise GalaxyError(e, error_context_msg)
        except Exception as e:
            raise AnsibleError("Unknown error when attempting to call Galaxy at '%s': %s" % (url, to_native(e)))

        resp_data = to_text(resp.read(), errors='surrogate_or_strict')
        try:
            data = json.loads(resp_data)
        except ValueError:
            raise AnsibleError("Failed to parse Galaxy response from '%s' as JSON:\n%s"
                               % (resp.url, to_native(resp_data)))

        if cache and self._cache:
            path_cache = self._cache[cache_id][url_info.path]

            # v3 can return data or results for paginated results. Scan the result so we can determine what to cache.
            paginated_key = None
            for key in ['data', 'results']:
                if key in data:
                    paginated_key = key
                    break

            if paginated_key:
                path_cache['paginated'] = True
                results = path_cache.setdefault('results', [])
                for result in data[paginated_key]:
                    results.append(result)

            else:
                path_cache['results'] = data

            self._set_cache()

        return data

    def _add_auth_token(self, headers, url, token_type=None, required=False):
        # Don't add the auth token if one is already present
        if 'Authorization' in headers:
            return

        if not self.token and required:
            raise AnsibleError("No access token or username set. A token can be set with --api-key "
                               "or at {0}.".format(to_native(C.GALAXY_TOKEN_PATH)))

        if self.token:
            headers.update(self.token.headers())

    @cache_lock
    def _set_cache(self):
        with open(self._b_cache_path, mode='wb') as fd:
            fd.write(to_bytes(json.dumps(self._cache), errors='surrogate_or_strict'))

    @g_connect(['v1'])
    def authenticate(self, github_token):
        """
        Retrieve an authentication token
        """
        url = _urljoin(self.api_server, self.available_api_versions['v1'], "tokens") + '/'
        args = urlencode({"github_token": github_token})
        resp = open_url(url, data=args, validate_certs=self.validate_certs, method="POST", http_agent=user_agent())
        data = json.loads(to_text(resp.read(), errors='surrogate_or_strict'))
        return data

    @g_connect(['v1'])
    def create_import_task(self, github_user, github_repo, reference=None, role_name=None):
        """
        Post an import request
        """
        url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") + '/'
        args = {
            "github_user": github_user,
            "github_repo": github_repo,
            "github_reference": reference if reference else ""
        }
        if role_name:
            args['alternate_role_name'] = role_name
        elif github_repo.startswith('ansible-role'):
            args['alternate_role_name'] = github_repo[len('ansible-role') + 1:]
        data = self._call_galaxy(url, args=urlencode(args), method="POST")
        if data.get('results', None):
            return data['results']
        return data

    @g_connect(['v1'])
    def get_import_task(self, task_id=None, github_user=None, github_repo=None):
        """
        Check the status of an import task.
        """
        url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports")
        if task_id is not None:
            url = "%s?id=%d" % (url, task_id)
        elif github_user is not None and github_repo is not None:
            url = "%s?github_user=%s&github_repo=%s" % (url, github_user, github_repo)
        else:
            raise AnsibleError("Expected task_id or github_user and github_repo")

        data = self._call_galaxy(url)
        return data['results']

    @g_connect(['v1'])
    def lookup_role_by_name(self, role_name, notify=True):
        """
        Find a role by name.
        """
        role_name = to_text(urlquote(to_bytes(role_name)))

        try:
            parts = role_name.split(".")
            user_name = ".".join(parts[0:-1])
            role_name = parts[-1]
            if notify:
                display.display("- downloading role '%s', owned by %s" % (role_name, user_name))
        except Exception:
            raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name)

        url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles",
                       "?owner__username=%s&name=%s" % (user_name, role_name))
        data = self._call_galaxy(url)
        if len(data["results"]) != 0:
            return data["results"][0]
        return None

    @g_connect(['v1'])
    def fetch_role_related(self, related, role_id):
        """
        Fetch the list of related items for the given role.
        The url comes from the 'related' field of the role.
        """
        results = []
        try:
            url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", role_id, related,
                           "?page_size=50")
            data = self._call_galaxy(url)
            results = data['results']
            done = (data.get('next_link', None) is None)

            # https://github.com/ansible/ansible/issues/64355
            # api_server contains part of the API path but next_link includes the /api part so strip it out.
            url_info = urlparse(self.api_server)
            base_url = "%s://%s/" % (url_info.scheme, url_info.netloc)

            while not done:
                url = _urljoin(base_url, data['next_link'])
                data = self._call_galaxy(url)
                results += data['results']
                done = (data.get('next_link', None) is None)
        except Exception as e:
            display.warning("Unable to retrieve role (id=%s) data (%s), but this is not fatal so we continue: %s"
                            % (role_id, related, to_text(e)))
        return results

    @g_connect(['v1'])
    def get_list(self, what):
        """
        Fetch the list of items specified.
        """
        try:
            url = _urljoin(self.api_server, self.available_api_versions['v1'], what, "?page_size")
            data = self._call_galaxy(url)
            if "results" in data:
                results = data['results']
            else:
                results = data
            done = True
            if "next" in data:
                done = (data.get('next_link', None) is None)
            while not done:
                url = _urljoin(self.api_server, data['next_link'])
                data = self._call_galaxy(url)
                results += data['results']
                done = (data.get('next_link', None) is None)
            return results
        except Exception as error:
            raise AnsibleError("Failed to download the %s list: %s" % (what, to_native(error)))

    @g_connect(['v1'])
    def search_roles(self, search, **kwargs):

        search_url = _urljoin(self.api_server, self.available_api_versions['v1'], "search", "roles", "?")

        if search:
            search_url += '&autocomplete=' + to_text(urlquote(to_bytes(search)))

        tags = kwargs.get('tags', None)
        platforms = kwargs.get('platforms', None)
        page_size = kwargs.get('page_size', None)
        author = kwargs.get('author', None)

        if tags and isinstance(tags, string_types):
            tags = tags.split(',')
            search_url += '&tags_autocomplete=' + '+'.join(tags)

        if platforms and isinstance(platforms, string_types):
            platforms = platforms.split(',')
            search_url += '&platforms_autocomplete=' + '+'.join(platforms)

        if page_size:
            search_url += '&page_size=%s' % page_size

        if author:
            search_url += '&username_autocomplete=%s' % author

        data = self._call_galaxy(search_url)
        return data

    @g_connect(['v1'])
    def add_secret(self, source, github_user, github_repo, secret):
        url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") + '/'
        args = urlencode({
            "source": source,
            "github_user": github_user,
            "github_repo": github_repo,
            "secret": secret
        })
        data = self._call_galaxy(url, args=args, method="POST")
        return data

    @g_connect(['v1'])
    def list_secrets(self):
        url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets")
        data = self._call_galaxy(url, auth_required=True)
        return data

    @g_connect(['v1'])
    def remove_secret(self, secret_id):
        url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets", secret_id) + '/'
        data = self._call_galaxy(url, auth_required=True, method='DELETE')
        return data

    @g_connect(['v1'])
    def delete_role(self, github_user, github_repo):
        url = _urljoin(self.api_server, self.available_api_versions['v1'], "removerole",
                       "?github_user=%s&github_repo=%s" % (github_user, github_repo))
        data = self._call_galaxy(url, auth_required=True, method='DELETE')
        return data

    # Collection APIs #

    @g_connect(['v2', 'v3'])
    def publish_collection(self, collection_path):
        """
        Publishes a collection to a Galaxy server and returns the import task URI.

        :param collection_path: The path to the collection tarball to publish.
        :return: The import task URI that contains the import results.
        """
        display.display("Publishing collection artifact '%s' to %s %s" % (collection_path, self.name, self.api_server))

        b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
        if not os.path.exists(b_collection_path):
            raise AnsibleError("The collection path specified '%s' does not exist." % to_native(collection_path))
        elif not tarfile.is_tarfile(b_collection_path):
            raise AnsibleError("The collection path specified '%s' is not a tarball, use 'ansible-galaxy collection "
                               "build' to create a proper release artifact." % to_native(collection_path))

        with open(b_collection_path, 'rb') as collection_tar:
            sha256 = secure_hash_s(collection_tar.read(), hash_func=hashlib.sha256)

        content_type, b_form_data = prepare_multipart(
            {
                'sha256': sha256,
                'file': {
                    'filename': b_collection_path,
                    'mime_type': 'application/octet-stream',
                },
            }
        )

        headers = {
            'Content-type': content_type,
            'Content-length': len(b_form_data),
        }

        if 'v3' in self.available_api_versions:
            n_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'artifacts', 'collections') + '/'
        else:
            n_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collections') + '/'

        resp = self._call_galaxy(n_url, args=b_form_data, headers=headers, method='POST', auth_required=True,
                                 error_context_msg='Error when publishing collection to %s (%s)'
                                                   % (self.name, self.api_server))

        return resp['task']

    @g_connect(['v2', 'v3'])
    def wait_import_task(self, task_id, timeout=0):
        """
        Waits until the import process on the Galaxy server has completed or the timeout is reached.

        :param task_id: The id of the import task to wait for. This can be parsed out of the return
            value for GalaxyAPI.publish_collection.
        :param timeout: The timeout in seconds, 0 is no timeout.
        """
        state = 'waiting'
        data = None

        # Construct the appropriate URL per version
        if 'v3' in self.available_api_versions:
            full_url = _urljoin(self.api_server, self.available_api_versions['v3'],
                                'imports/collections', task_id, '/')
        else:
            full_url = _urljoin(self.api_server, self.available_api_versions['v2'],
                                'collection-imports', task_id, '/')

        display.display("Waiting until Galaxy import task %s has completed" % full_url)
        start = time.time()
        wait = 2

        while timeout == 0 or (time.time() - start) < timeout:
            try:
                data = self._call_galaxy(full_url, method='GET', auth_required=True,
                                         error_context_msg='Error when getting import task results at %s' % full_url)
            except GalaxyError as e:
                if e.http_code != 404:
                    raise

                # The import job may not have started, and as such, the task url may not yet exist
                display.vvv('Galaxy import process has not started, wait %s seconds before trying again' % wait)
                time.sleep(wait)
                continue

            state = data.get('state', 'waiting')

            if data.get('finished_at', None):
                break

            display.vvv('Galaxy import process has a status of %s, wait %d seconds before trying again'
                        % (state, wait))
            time.sleep(wait)

            # poor man's exponential backoff algo so we don't flood the Galaxy API, cap at 30 seconds.
            wait = min(30, wait * 1.5)
        if state == 'waiting':
            raise AnsibleError("Timeout while waiting for the Galaxy import process to finish, check progress at '%s'"
                               % to_native(full_url))

        for message in data.get('messages', []):
            level = message['level']
            if level == 'error':
                display.error("Galaxy import error message: %s" % message['message'])
            elif level == 'warning':
                display.warning("Galaxy import warning message: %s" % message['message'])
            else:
                display.vvv("Galaxy import message: %s - %s" % (level, message['message']))

        if state == 'failed':
            code = to_native(data['error'].get('code', 'UNKNOWN'))
            description = to_native(
                data['error'].get('description', "Unknown error, see %s for more details" % full_url))
            raise AnsibleError("Galaxy import process failed: %s (Code: %s)" % (description, code))

    @g_connect(['v2', 'v3'])
    def get_collection_metadata(self, namespace, name):
        """
        Gets the collection information from the Galaxy server about a specific Collection.

        :param namespace: The collection namespace.
        :param name: The collection name.
        return: CollectionMetadata about the collection.
        """
        if 'v3' in self.available_api_versions:
            api_path = self.available_api_versions['v3']
            field_map = [
                ('created_str', 'created_at'),
                ('modified_str', 'updated_at'),
            ]
        else:
            api_path = self.available_api_versions['v2']
            field_map = [
                ('created_str', 'created'),
                ('modified_str', 'modified'),
            ]

        info_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, '/')
        error_context_msg = 'Error when getting the collection info for %s.%s from %s (%s)' \
                            % (namespace, name, self.name, self.api_server)
        data = self._call_galaxy(info_url, error_context_msg=error_context_msg)

        metadata = {}
        for name, api_field in field_map:
            metadata[name] = data.get(api_field, None)

        return CollectionMetadata(namespace, name, **metadata)

    @g_connect(['v2', 'v3'])
    def get_collection_version_metadata(self, namespace, name, version):
        """
        Gets the collection information from the Galaxy server about a specific Collection version.

        :param namespace: The collection namespace.
        :param name: The collection name.
        :param version: Version of the collection to get the information for.
        :return: CollectionVersionMetadata about the collection at the version requested.
        """
        api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2'))
        url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version, '/']

        n_collection_url = _urljoin(*url_paths)
        error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \
                            % (namespace, name, version, self.name, self.api_server)
        data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg, cache=True)

        return CollectionVersionMetadata(data['namespace']['name'], data['collection']['name'], data['version'],
                                         data['download_url'], data['artifact']['sha256'],
                                         data['metadata']['dependencies'])

    @g_connect(['v2', 'v3'])
    def get_collection_versions(self, namespace, name):
        """
        Gets a list of available versions for a collection on a Galaxy server.

        :param namespace: The collection namespace.
        :param name: The collection name.
        :return: A list of versions that are available.
        """
        relative_link = False
        if 'v3' in self.available_api_versions:
            api_path = self.available_api_versions['v3']
            pagination_path = ['links', 'next']
            relative_link = True  # AH pagination results are relative an not an absolute URI.
        else:
            api_path = self.available_api_versions['v2']
            pagination_path = ['next']

        versions_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, 'versions', '/')
        versions_url_info = urlparse(versions_url)

        # We should only rely on the cache if the collection has not changed. This may slow things down but it ensures
        # we are not waiting a day before finding any new collections that have been published.
        if self._cache:
            server_cache = self._cache.setdefault(get_cache_id(versions_url), {})
            modified_cache = server_cache.setdefault('modified', {})

            try:
                modified_date = self.get_collection_metadata(namespace, name).modified_str
            except GalaxyError as err:
                if err.http_code != 404:
                    raise

                # No collection found, return an empty list to keep things consistent with the various APIs
                return []

            cached_modified_date = modified_cache.get('%s.%s' % (namespace, name), None)
            if cached_modified_date != modified_date:
                modified_cache['%s.%s' % (namespace, name)] = modified_date
                if versions_url_info.path in server_cache:
                    del server_cache[versions_url_info.path]

                self._set_cache()

        error_context_msg = 'Error when getting available collection versions for %s.%s from %s (%s)' \
                            % (namespace, name, self.name, self.api_server)

        try:
            data = self._call_galaxy(versions_url, error_context_msg=error_context_msg, cache=True)
        except GalaxyError as err:
            if err.http_code != 404:
                raise

            # v3 doesn't raise a 404 so we need to mimick the empty response from APIs that do.
            return []

        if 'data' in data:
            # v3 automation-hub is the only known API that uses `data`
            # since v3 pulp_ansible does not, we cannot rely on version
            # to indicate which key to use
            results_key = 'data'
        else:
            results_key = 'results'

        versions = []
        while True:
            versions += [v['version'] for v in data[results_key]]

            next_link = data
            for path in pagination_path:
                next_link = next_link.get(path, {})

            if not next_link:
                break
            elif relative_link:
                # TODO: This assumes the pagination result is relative to the root server. Will need to be verified
                # with someone who knows the AH API.
                next_link = versions_url.replace(versions_url_info.path, next_link)

            data = self._call_galaxy(to_native(next_link, errors='surrogate_or_strict'),
                                     error_context_msg=error_context_msg, cache=True)

        return versions
```
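The `while True` loop at the end of `get_collection_versions` is the behaviour the issue hinges on: every page of `/versions/` must be consumed by following `next` links until none remain, otherwise only the first page's versions are seen, and for ansible.posix at the time that page held only pre-releases. A minimal stand-alone version of that v2-style pagination loop (the URL in the comment is the one cited in the issue):

```python
# Follow v2-style pagination: accumulate `results` from each page and keep
# requesting `next` (an absolute URL, or None on the last page). Stopping
# after the first page reproduces the failure mode in issue 73071.
import json
from urllib.request import urlopen

def get_all_versions(first_page_url):
    versions, url = [], first_page_url
    while url:
        with urlopen(url) as resp:            # one Galaxy API page
            data = json.load(resp)
        versions += [v['version'] for v in data['results']]
        url = data.get('next')                # None terminates the loop
    return versions

# e.g. get_all_versions('https://galaxy.ansible.com/api/v2/collections/ansible/posix/versions/')
```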
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,071 |
ansible-galaxy collection install sometimes does not find all versions
|
##### SUMMARY
See https://app.shippable.com/github/ansible-collections/community.general/runs/6193/46/console for an example, I've already seen this multiple times today (also on AZP).
`ansible-galaxy -vvv collection install ansible.posix` was run three times in a row, and ~~always~~ two times out of three failed with `This collection only contains pre-releases.` (the first fail was a read timeout):
```
+ ansible-galaxy -vvv collection install ansible.posix
01:54 [DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the
01:54 controller starting with Ansible 2.12. Current version: 2.7.15+ (default, Feb
01:54 9 2019, 11:33:22) [GCC 5.4.0 20160609]. This feature will be removed from
01:54 ansible-core in version 2.12. Deprecation warnings can be disabled by setting
01:54 deprecation_warnings=False in ansible.cfg.
01:54 /root/venv/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in the next release.
01:54 from cryptography.exceptions import InvalidSignature
01:55 [WARNING]: You are running the development version of Ansible. You should only
01:55 run Ansible from "devel" if you are modifying the Ansible engine, or trying out
01:55 features under development. This is a rapidly changing source of code and can
01:55 become unstable at any point.
01:55 [DEPRECATION WARNING]: Setting verbosity before the arg sub command is
01:55 deprecated, set the verbosity after the sub command. This feature will be
01:55 removed from ansible-core in version 2.13. Deprecation warnings can be disabled
01:55 by setting deprecation_warnings=False in ansible.cfg.
01:55 ansible-galaxy 2.11.0.dev0
01:55 config file = None
01:55 configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
01:55 ansible python module location = /root/venv/lib/python2.7/site-packages/ansible
01:55 ansible collection location = /root/.ansible
01:55 executable location = /root/venv/bin/ansible-galaxy
01:55 python version = 2.7.15+ (default, Feb 9 2019, 11:33:22) [GCC 5.4.0 20160609]
01:55 jinja version = 2.11.2
01:55 libyaml = True
01:55 No config file found; using defaults
01:55 Starting galaxy collection install process
01:55 Found installed collection ansible.netcommon:1.4.1 at '/root/.ansible/ansible_collections/ansible/netcommon'
01:55 Found installed collection community.kubernetes:1.1.1 at '/root/.ansible/ansible_collections/community/kubernetes'
01:55 Found installed collection community.internal_test_tools:0.2.1 at '/root/.ansible/ansible_collections/community/internal_test_tools'
01:55 Skipping '/root/.ansible/ansible_collections/community/general/.git' for collection build
01:55 Skipping '/root/.ansible/ansible_collections/community/general/galaxy.yml' for collection build
01:55 Found installed collection community.general:2.0.0 at '/root/.ansible/ansible_collections/community/general'
01:55 Found installed collection google.cloud:1.0.1 at '/root/.ansible/ansible_collections/google/cloud'
01:55 Process install dependency map
01:55 Processing requirement collection 'ansible.posix'
01:55 Opened /root/.ansible/galaxy_token
01:56 Collection 'ansible.posix' obtained from server default https://galaxy.ansible.com/api/
01:56 ERROR! Cannot meet requirement * for dependency ansible.posix from source 'https://galaxy.ansible.com/api/'. Available versions before last requirement added:
01:56 Requirements from:
01:56 base - 'ansible.posix:*'
01:56 This collection only contains pre-releases. Utilize `--pre` to install pre-releases, or explicitly provide the pre-release version.
```
It could be that for some reason, only the first page of the results is queried (https://galaxy.ansible.com/api/v2/collections/ansible/posix/versions/), which contains only pre-releases. I guess this is probably a problem on Galaxy's side, but in case it is not, I'm creating this issue in ansible/ansible.
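For reference, a quick manual check of that hypothesis (a rough sketch against the public v2 endpoint, which needs no auth) would be to walk every page and see whether any non-pre-release version ever shows up:

```python
import json
from urllib.request import urlopen

url = 'https://galaxy.ansible.com/api/v2/collections/ansible/posix/versions/'
versions = []
while url:
    with urlopen(url) as resp:
        data = json.load(resp)
    versions += [v['version'] for v in data['results']]
    url = data['next']  # absolute URL of the next page, or None on the last one
print(versions)
```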
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy collection
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/73071
|
https://github.com/ansible/ansible/pull/73557
|
29aef842d77b24105ce356d4b313be2269d466d6
|
00bd0b893d5d21de040b53032c466707bacb3b93
| 2020-12-28T10:01:37Z |
python
| 2021-02-15T14:45:01Z |
test/units/galaxy/test_api.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import re
import pytest
import stat
import tarfile
import tempfile
import time
from io import BytesIO, StringIO
from units.compat.mock import MagicMock
import ansible.constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.galaxy import api as galaxy_api
from ansible.galaxy.api import CollectionVersionMetadata, GalaxyAPI, GalaxyError
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six.moves.urllib import error as urllib_error
from ansible.utils import context_objects as co
from ansible.utils.display import Display
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
# Required to initialise the GalaxyAPI object
context.CLIARGS._store = {'ignore_certs': False}
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_artifact(tmp_path_factory):
''' Creates a collection artifact tarball that is ready to be published '''
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Output'))
tar_path = os.path.join(output_dir, 'namespace-collection-v1.0.0.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(b"\x00\x01\x02\x03")
tar_info = tarfile.TarInfo('test')
tar_info.size = 4
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
yield tar_path
@pytest.fixture()
def cache_dir(tmp_path_factory, monkeypatch):
cache_dir = to_text(tmp_path_factory.mktemp('Test ÅÑŚÌβŁÈ Galaxy Cache'))
monkeypatch.setitem(C.config._base_defs, 'GALAXY_CACHE_DIR', {'default': cache_dir})
yield cache_dir
def get_test_galaxy_api(url, version, token_ins=None, token_value=None):
token_value = token_value or "my token"
token_ins = token_ins or GalaxyToken(token_value)
api = GalaxyAPI(None, "test", url)
# Warning: this doesn't test g_connect() because _available_api_versions is set here. That means
# that URLs for v2 servers have to append '/api/' themselves in the input data.
api._available_api_versions = {version: '%s' % version}
api.token = token_ins
return api
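# For example, get_test_galaxy_api('https://galaxy.server.com/api/', 'v2')
# yields an API object pre-seeded with {'v2': 'v2'}, so the tests below never
# hit the network to discover available API versions.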
def test_api_no_auth():
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = {}
api._add_auth_token(actual, "")
assert actual == {}
def test_api_no_auth_but_required():
expected = "No access token or username set. A token can be set with --api-key or at "
with pytest.raises(AnsibleError, match=expected):
GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")._add_auth_token({}, "", required=True)
def test_api_token_auth():
token = GalaxyToken(token=u"my_token")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Token my_token'}
def test_api_token_auth_with_token_type(monkeypatch):
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", token_type="Bearer", required=True)
assert actual == {'Authorization': 'Bearer my_token'}
def test_api_token_auth_with_v3_url(monkeypatch):
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "https://galaxy.ansible.com/api/v3/resource/name", required=True)
assert actual == {'Authorization': 'Bearer my_token'}
def test_api_token_auth_with_v2_url():
token = GalaxyToken(token=u"my_token")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
# Add v3 to a random part of the URL; auth should still treat this as v2 since only full URI path segments count.
api._add_auth_token(actual, "https://galaxy.ansible.com/api/v2/resourcev3/name", required=True)
assert actual == {'Authorization': 'Token my_token'}
def test_api_basic_auth_password():
token = BasicAuthToken(username=u"user", password=u"pass")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Basic dXNlcjpwYXNz'}
def test_api_basic_auth_no_password():
token = BasicAuthToken(username=u"user")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Basic dXNlcjo='}
def test_api_dont_override_auth_header():
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = {'Authorization': 'Custom token'}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Custom token'}
def test_initialise_galaxy(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/"}}'),
StringIO(u'{"token":"my token"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = api.authenticate("github_token")
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v1'] == u'v1/'
assert api.available_api_versions['v2'] == u'v2/'
assert actual == {u'token': u'my token'}
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent']
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/'
assert 'ansible-galaxy' in mock_open.mock_calls[1][2]['http_agent']
assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token'
def test_initialise_galaxy_with_auth(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/"}}'),
StringIO(u'{"token":"my token"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token'))
actual = api.authenticate("github_token")
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v1'] == u'v1/'
assert api.available_api_versions['v2'] == u'v2/'
assert actual == {u'token': u'my token'}
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent']
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/'
assert 'ansible-galaxy' in mock_open.mock_calls[1][2]['http_agent']
assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token'
def test_initialise_automation_hub(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v2": "v2/", "v3":"v3/"}}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v2'] == u'v2/'
assert api.available_api_versions['v3'] == u'v3/'
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent']
assert mock_open.mock_calls[0][2]['headers'] == {'Authorization': 'Bearer my_token'}
def test_initialise_unknown(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
urllib_error.HTTPError('https://galaxy.ansible.com/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')),
urllib_error.HTTPError('https://galaxy.ansible.com/api/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token'))
expected = "Error when finding available api versions from test (%s) (HTTP Code: 500, Message: msg)" \
% api.api_server
with pytest.raises(AnsibleError, match=re.escape(expected)):
api.authenticate("github_token")
def test_get_available_api_versions(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/","v2":"v2/"}}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = api.available_api_versions
assert len(actual) == 2
assert actual['v1'] == u'v1/'
assert actual['v2'] == u'v2/'
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent']
def test_publish_collection_missing_file():
fake_path = u'/fake/ÅÑŚÌβŁÈ/path'
expected = to_native("The collection path specified '%s' does not exist." % fake_path)
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2")
with pytest.raises(AnsibleError, match=expected):
api.publish_collection(fake_path)
def test_publish_collection_not_a_tarball():
expected = "The collection path specified '{0}' is not a tarball, use 'ansible-galaxy collection build' to " \
"create a proper release artifact."
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2")
with tempfile.NamedTemporaryFile(prefix=u'ÅÑŚÌβŁÈ') as temp_file:
temp_file.write(b"\x00")
temp_file.flush()
with pytest.raises(AnsibleError, match=expected.format(to_native(temp_file.name))):
api.publish_collection(temp_file.name)
def test_publish_collection_unsupported_version():
expected = "Galaxy action publish_collection requires API versions 'v2, v3' but only 'v1' are available on test " \
"https://galaxy.ansible.com/api/"
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v1")
with pytest.raises(AnsibleError, match=expected):
api.publish_collection("path")
@pytest.mark.parametrize('api_version, collection_url', [
('v2', 'collections'),
('v3', 'artifacts/collections'),
])
def test_publish_collection(api_version, collection_url, collection_artifact, monkeypatch):
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", api_version)
mock_call = MagicMock()
mock_call.return_value = {'task': 'http://task.url/'}
monkeypatch.setattr(api, '_call_galaxy', mock_call)
actual = api.publish_collection(collection_artifact)
assert actual == 'http://task.url/'
assert mock_call.call_count == 1
assert mock_call.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/%s/%s/' % (api_version, collection_url)
assert mock_call.mock_calls[0][2]['headers']['Content-length'] == len(mock_call.mock_calls[0][2]['args'])
assert mock_call.mock_calls[0][2]['headers']['Content-type'].startswith(
'multipart/form-data; boundary=')
assert mock_call.mock_calls[0][2]['args'].startswith(b'--')
assert mock_call.mock_calls[0][2]['method'] == 'POST'
assert mock_call.mock_calls[0][2]['auth_required'] is True
@pytest.mark.parametrize('api_version, collection_url, response, expected', [
('v2', 'collections', {},
'Error when publishing collection to test (%s) (HTTP Code: 500, Message: msg Code: Unknown)'),
('v2', 'collections', {
'message': u'Galaxy error messΓ€ge',
'code': 'GWE002',
}, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Galaxy error messΓ€ge Code: GWE002)'),
('v3', 'artifact/collections', {},
'Error when publishing collection to test (%s) (HTTP Code: 500, Message: msg Code: Unknown)'),
('v3', 'artifact/collections', {
'errors': [
{
'code': 'conflict.collection_exists',
'detail': 'Collection "mynamespace-mycollection-4.1.1" already exists.',
'title': 'Conflict.',
'status': '400',
},
{
'code': 'quantum_improbability',
'title': u'RΓ€ndom(?) quantum improbability.',
'source': {'parameter': 'the_arrow_of_time'},
'meta': {'remediation': 'Try again before'},
},
],
}, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Collection '
u'"mynamespace-mycollection-4.1.1" already exists. Code: conflict.collection_exists), (HTTP Code: 500, '
u'Message: RΓ€ndom(?) quantum improbability. Code: quantum_improbability)')
])
def test_publish_failure(api_version, collection_url, response, expected, collection_artifact, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version)
expected_url = '%s/api/%s/%s' % (api.api_server, api_version, collection_url)
mock_open = MagicMock()
mock_open.side_effect = urllib_error.HTTPError(expected_url, 500, 'msg', {},
StringIO(to_text(json.dumps(response))))
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
with pytest.raises(GalaxyError, match=re.escape(to_native(expected % api.api_server))):
api.publish_collection(collection_artifact)
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234/'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.return_value = StringIO(u'{"state":"success","finished_at":"time"}')
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234/'),
('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_multiple_requests(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"state":"test"}'),
StringIO(u'{"state":"success","finished_at":"time"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
monkeypatch.setattr(time, 'sleep', MagicMock())
api.wait_import_task(import_uri)
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][1][0] == full_import_uri
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == \
'Galaxy import process has a status of test, wait 2 seconds before trying again'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri,', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234/'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_with_failure(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'finished_at': 'some_time',
'state': 'failed',
'error': {
'code': 'GW001',
'description': u'BecΓ€use I said so!',
},
'messages': [
{
'level': 'error',
'message': u'SomΓ© error',
},
{
'level': 'warning',
'message': u'Some wΓ€rning',
},
{
'level': 'info',
'message': u'SomΓ© info',
},
],
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
mock_warn = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warn)
mock_err = MagicMock()
monkeypatch.setattr(Display, 'error', mock_err)
expected = to_native(u'Galaxy import process failed: BecΓ€use I said so! (Code: GW001)')
with pytest.raises(AnsibleError, match=re.escape(expected)):
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - SomΓ© info'
assert mock_warn.call_count == 1
assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wΓ€rning'
assert mock_err.call_count == 1
assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: SomΓ© error'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my_token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234/'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_with_failure_no_error(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'finished_at': 'some_time',
'state': 'failed',
'error': {},
'messages': [
{
'level': 'error',
'message': u'SomΓ© error',
},
{
'level': 'warning',
'message': u'Some wΓ€rning',
},
{
'level': 'info',
'message': u'SomΓ© info',
},
],
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
mock_warn = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warn)
mock_err = MagicMock()
monkeypatch.setattr(Display, 'error', mock_err)
expected = 'Galaxy import process failed: Unknown error, see %s for more details \\(Code: UNKNOWN\\)' % full_import_uri
with pytest.raises(AnsibleError, match=expected):
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - SomΓ© info'
assert mock_warn.call_count == 1
assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wΓ€rning'
assert mock_err.call_count == 1
assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: SomΓ© error'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234/'),
('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_timeout(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
def return_response(*args, **kwargs):
return StringIO(u'{"state":"waiting"}')
mock_open = MagicMock()
mock_open.side_effect = return_response
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
monkeypatch.setattr(time, 'sleep', MagicMock())
expected = "Timeout while waiting for the Galaxy import process to finish, check progress at '%s'" % full_import_uri
with pytest.raises(AnsibleError, match=expected):
api.wait_import_task(import_uri, 1)
assert mock_open.call_count > 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][1][0] == full_import_uri
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
# expected_wait_msg = 'Galaxy import process has a status of waiting, wait {0} seconds before trying again'
assert mock_vvv.call_count > 9 # 1st is opening Galaxy token file.
# FIXME:
# assert mock_vvv.mock_calls[1][1][0] == expected_wait_msg.format(2)
# assert mock_vvv.mock_calls[2][1][0] == expected_wait_msg.format(3)
# assert mock_vvv.mock_calls[3][1][0] == expected_wait_msg.format(4)
# assert mock_vvv.mock_calls[4][1][0] == expected_wait_msg.format(6)
# assert mock_vvv.mock_calls[5][1][0] == expected_wait_msg.format(10)
# assert mock_vvv.mock_calls[6][1][0] == expected_wait_msg.format(15)
# assert mock_vvv.mock_calls[7][1][0] == expected_wait_msg.format(22)
# assert mock_vvv.mock_calls[8][1][0] == expected_wait_msg.format(30)
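# The commented-out expectations above trace what looks like a capped 1.5x
# backoff. A tiny sketch (an assumption about the implementation, not a quote
# of it) that reproduces the 2, 3, 4, 6, 10, 15, 22, 30 sequence:
#
#     wait = 2
#     for _ in range(8):
#         print(int(wait))           # 2 3 4 6 10 15 22 30
#         wait = min(30, wait * 1.5)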
@pytest.mark.parametrize('api_version, token_type, version, token_ins', [
('v2', None, 'v2.1.13', None),
('v3', 'Bearer', 'v1.0.0', KeycloakToken(auth_url='https://api.test/api/automation-hub/')),
])
def test_get_collection_version_metadata_no_version(api_version, token_type, version, token_ins, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'download_url': 'https://downloadme.com',
'artifact': {
'sha256': 'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f',
},
'namespace': {
'name': 'namespace',
},
'collection': {
'name': 'collection',
},
'version': version,
'metadata': {
'dependencies': {},
}
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_version_metadata('namespace', 'collection', version)
assert isinstance(actual, CollectionVersionMetadata)
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.download_url == u'https://downloadme.com'
assert actual.artifact_sha256 == u'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f'
assert actual.version == version
assert actual.dependencies == {}
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == '%s%s/collections/namespace/collection/versions/%s/' \
% (api.api_server, api_version, version)
# v2 calls don't need auth, so there is no Authorization header or token_type
if token_type:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, response', [
('v2', None, None, {
'count': 2,
'next': None,
'previous': None,
'results': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
}),
# TODO: Verify this once Automation Hub is actually out
('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), {
'count': 2,
'next': None,
'previous': None,
'data': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
}),
])
def test_get_collection_versions(api_version, token_type, token_ins, response, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps(response))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_versions('namespace', 'collection')
assert actual == [u'1.0.0', u'1.0.1']
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions/' % api_version
if token_ins:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, responses', [
('v2', None, None, [
{
'count': 6,
'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2',
'previous': None,
'results': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
},
{
'count': 6,
'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=3',
'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions',
'results': [
{
'version': '1.0.2',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.2',
},
{
'version': '1.0.3',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.3',
},
],
},
{
'count': 6,
'next': None,
'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2',
'results': [
{
'version': '1.0.4',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.4',
},
{
'version': '1.0.5',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.5',
},
],
},
]),
('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), [
{
'count': 6,
'links': {
'next': '/api/v3/collections/namespace/collection/versions/?page=2',
'previous': None,
},
'data': [
{
'version': '1.0.0',
'href': '/api/v3/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': '/api/v3/collections/namespace/collection/versions/1.0.1',
},
],
},
{
'count': 6,
'links': {
'next': '/api/v3/collections/namespace/collection/versions/?page=3',
'previous': '/api/v3/collections/namespace/collection/versions',
},
'data': [
{
'version': '1.0.2',
'href': '/api/v3/collections/namespace/collection/versions/1.0.2',
},
{
'version': '1.0.3',
'href': '/api/v3/collections/namespace/collection/versions/1.0.3',
},
],
},
{
'count': 6,
'links': {
'next': None,
'previous': '/api/v3/collections/namespace/collection/versions/?page=2',
},
'data': [
{
'version': '1.0.4',
'href': '/api/v3/collections/namespace/collection/versions/1.0.4',
},
{
'version': '1.0.5',
'href': '/api/v3/collections/namespace/collection/versions/1.0.5',
},
],
},
]),
])
def test_get_collection_versions_pagination(api_version, token_type, token_ins, responses, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_versions('namespace', 'collection')
assert actual == [u'1.0.0', u'1.0.1', u'1.0.2', u'1.0.3', u'1.0.4', u'1.0.5']
assert mock_open.call_count == 3
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions/' % api_version
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions/?page=2' % api_version
assert mock_open.mock_calls[2][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions/?page=3' % api_version
if token_type:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[2][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('responses', [
[
{
'count': 2,
'results': [{'name': '3.5.1', }, {'name': '3.5.2'}],
'next_link': None,
'next': None,
'previous_link': None,
'previous': None
},
],
[
{
'count': 2,
'results': [{'name': '3.5.1'}],
'next_link': '/api/v1/roles/432/versions/?page=2&page_size=50',
'next': '/roles/432/versions/?page=2&page_size=50',
'previous_link': None,
'previous': None
},
{
'count': 2,
'results': [{'name': '3.5.2'}],
'next_link': None,
'next': None,
'previous_link': '/api/v1/roles/432/versions/?&page_size=50',
'previous': '/roles/432/versions/?page_size=50',
},
]
])
def test_get_role_versions_pagination(monkeypatch, responses):
api = get_test_galaxy_api('https://galaxy.com/api/', 'v1')
mock_open = MagicMock()
mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.fetch_role_related('versions', 432)
assert actual == [{'name': '3.5.1'}, {'name': '3.5.2'}]
assert mock_open.call_count == len(responses)
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.com/api/v1/roles/432/versions/?page_size=50'
if len(responses) == 2:
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.com/api/v1/roles/432/versions/?page=2&page_size=50'
def test_missing_cache_dir(cache_dir):
os.rmdir(cache_dir)
GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', no_cache=False)
assert os.path.isdir(cache_dir)
assert stat.S_IMODE(os.stat(cache_dir).st_mode) == 0o700
cache_file = os.path.join(cache_dir, 'api.json')
with open(cache_file) as fd:
actual_cache = fd.read()
assert actual_cache == '{"version": 1}'
assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o600
def test_existing_cache(cache_dir):
cache_file = os.path.join(cache_dir, 'api.json')
cache_file_contents = '{"version": 1, "test": "json"}'
with open(cache_file, mode='w') as fd:
fd.write(cache_file_contents)
os.chmod(cache_file, 0o655)
GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', no_cache=False)
assert os.path.isdir(cache_dir)
with open(cache_file) as fd:
actual_cache = fd.read()
assert actual_cache == cache_file_contents
assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o655
@pytest.mark.parametrize('content', [
'',
'value',
'{"de" "finit" "ely" [\'invalid"]}',
'[]',
'{"version": 2, "test": "json"}',
'{"version": 2, "key": "Γ
ΓΕΓΞ²ΕΓ"}',
])
def test_cache_invalid_cache_content(content, cache_dir):
cache_file = os.path.join(cache_dir, 'api.json')
with open(cache_file, mode='w') as fd:
fd.write(content)
os.chmod(cache_file, 0o664)
GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', no_cache=False)
with open(cache_file) as fd:
actual_cache = fd.read()
assert actual_cache == '{"version": 1}'
assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o664
def test_world_writable_cache(cache_dir, monkeypatch):
mock_warning = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warning)
cache_file = os.path.join(cache_dir, 'api.json')
with open(cache_file, mode='w') as fd:
fd.write('{"version": 2}')
os.chmod(cache_file, 0o666)
api = GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', no_cache=False)
assert api._cache is None
with open(cache_file) as fd:
actual_cache = fd.read()
assert actual_cache == '{"version": 2}'
assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o666
assert mock_warning.call_count == 1
assert mock_warning.call_args[0][0] == \
'Galaxy cache has world writable access (%s), ignoring it as a cache source.' % cache_file
def test_no_cache(cache_dir):
cache_file = os.path.join(cache_dir, 'api.json')
with open(cache_file, mode='w') as fd:
fd.write('random')
api = GalaxyAPI(None, "test", 'https://galaxy.ansible.com/')
assert api._cache is None
with open(cache_file) as fd:
actual_cache = fd.read()
assert actual_cache == 'random'
def test_clear_cache_with_no_cache(cache_dir):
cache_file = os.path.join(cache_dir, 'api.json')
with open(cache_file, mode='w') as fd:
fd.write('{"version": 1, "key": "value"}')
GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', clear_response_cache=True)
assert not os.path.exists(cache_file)
def test_clear_cache(cache_dir):
cache_file = os.path.join(cache_dir, 'api.json')
with open(cache_file, mode='w') as fd:
fd.write('{"version": 1, "key": "value"}')
GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', clear_response_cache=True, no_cache=False)
with open(cache_file) as fd:
actual_cache = fd.read()
assert actual_cache == '{"version": 1}'
assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o600
@pytest.mark.parametrize(['url', 'expected'], [
('http://hostname/path', 'hostname:'),
('http://hostname:80/path', 'hostname:80'),
('https://testing.com:invalid', 'testing.com:'),
('https://testing.com:1234', 'testing.com:1234'),
('https://username:[email protected]/path', 'testing.com:'),
('https://username:[email protected]:443/path', 'testing.com:443'),
])
def test_cache_id(url, expected):
actual = galaxy_api.get_cache_id(url)
assert actual == expected
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,632 |
Dead link in get_url doc
|
##### SUMMARY
Broken link in https://docs.ansible.com/ansible/devel/collections/ansible/builtin/get_url_module.html
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ansible.builtin.get_url docs
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/73632
|
https://github.com/ansible/ansible/pull/73633
|
fa05af8321394d09590eacad8179e54b5de294f5
|
1a149960254ed0bf202d46df7d9c57978af80f32
| 2021-02-17T08:43:33Z |
python
| 2021-02-18T23:07:23Z |
lib/ansible/modules/get_url.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: get_url
short_description: Downloads files from HTTP, HTTPS, or FTP to node
description:
- Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote
server I(must) have direct access to the remote resource.
- By default, if an environment variable C(<protocol>_proxy) is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see R(setting the environment,playbooks_environment)),
or by using the use_proxy option.
- HTTP redirects can redirect from HTTP to HTTPS so you should be sure that
your proxy environment for both protocols is correct.
- From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but
will not download the entire file or verify it against hashes.
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
version_added: '0.6'
options:
url:
description:
- HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
type: str
required: true
dest:
description:
- Absolute path of where to download the file to.
- If C(dest) is a directory, either the server provided filename or, if
none provided, the base name of the URL on the remote server will be
used. If a directory, C(force) has no effect.
- If C(dest) is a directory, the file will always be downloaded
(regardless of the C(force) option), but replaced only if the contents changed.
type: path
required: true
tmp_dest:
description:
- Absolute path of where temporary file is downloaded to.
- When run on Ansible 2.5 or greater, path defaults to Ansible's remote_tmp setting.
- When run on Ansible prior to 2.5, it defaults to the C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform-specific value.
- U(https://docs.python.org/2/library/tempfile.html#tempfile.tempdir)
type: path
version_added: '2.1'
force:
description:
- If C(yes) and C(dest) is not a directory, will download the file every
time and replace the file if the contents change. If C(no), the file
will only be downloaded if the destination does not exist. Generally
should be C(yes) only for small local files.
- Prior to 0.6, this module behaved as if C(yes) was the default.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: no
aliases: [ thirsty ]
version_added: '0.7'
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.1'
sha256sum:
description:
- If a SHA-256 checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
This option is deprecated and will be removed in version 2.14. Use
option C(checksum) instead.
default: ''
type: str
version_added: "1.3"
checksum:
description:
- 'If a checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
Format: <algorithm>:<checksum|url>, e.g. checksum="sha256:D98291AC[...]B6DC7B97",
checksum="sha256:http://example.com/path/sha256sum.txt"'
- If you worry about portability, only the sha1 algorithm is available
on all platforms and python versions.
- The third party hashlib library can be installed for access to additional algorithms.
- Additionally, if a checksum is passed to this parameter, and the file exist under
the C(dest) location, the I(destination_checksum) would be calculated, and if
checksum equals I(destination_checksum), the file download would be skipped
(unless C(force) is true). If the checksum does not equal I(destination_checksum),
the destination file is deleted.
type: str
default: ''
version_added: "2.0"
use_proxy:
description:
- If C(no), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
type: bool
default: yes
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: yes
timeout:
description:
- Timeout in seconds for URL request.
type: int
default: 10
version_added: '1.8'
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
- The hash/dict format was added in Ansible 2.6.
- Previous versions used a C("key:value,key:value") string format.
- The C("key:value,key:value") string format is deprecated and has been removed in version 2.10.
type: dict
version_added: '2.0'
url_username:
description:
- The username for use in HTTP basic authentication.
- This parameter can be used without C(url_password) for sites that allow empty passwords.
- Since version 2.8 you can also use the C(username) alias for this option.
type: str
aliases: ['username']
version_added: '1.6'
url_password:
description:
- The password for use in HTTP basic authentication.
- If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used.
- Since version 2.8 you can also use the 'password' alias for this option.
type: str
aliases: ['password']
version_added: '1.6'
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- httplib2, the library used by the uri module only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
version_added: '2.0'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, C(client_key) is not required.
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If C(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specifies a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
# informational: requirements for nodes
extends_documentation_fragment:
- files
notes:
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
seealso:
- module: ansible.builtin.uri
- module: ansible.windows.win_get_url
author:
- Jan-Piet Mens (@jpmens)
'''
EXAMPLES = r'''
- name: Download foo.conf
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
mode: '0440'
- name: Download file and force basic auth
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
force_basic_auth: yes
- name: Download file with custom HTTP headers
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
headers:
key1: one
key2: two
- name: Download file with check (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
- name: Download file with check (md5)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
- name: Download file with checksum url (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:http://example.com/path/sha256sum.txt
- name: Download file from a file path
get_url:
url: file:///tmp/afile.txt
dest: /tmp/afilecopy.txt
- name: Fetch file that requires authentication.
username/password only available since 2.8, in older versions you need to use url_username/url_password
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
username: bar
password: '{{ mysecret }}'
'''
RETURN = r'''
backup_file:
description: name of backup file created after download
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
checksum_dest:
description: sha1 checksum of the file after copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
checksum_src:
description: sha1 checksum of the file
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
dest:
description: destination file/path
returned: success
type: str
sample: /path/to/file.txt
elapsed:
description: The number of seconds that elapsed while performing the download
returned: always
type: int
sample: 23
gid:
description: group id of the file
returned: success
type: int
sample: 100
group:
description: group of the file
returned: success
type: str
sample: "httpd"
md5sum:
description: md5 checksum of the file after download
returned: when supported
type: str
sample: "2a5aeecc61dc98c4d780b14b330e3282"
mode:
description: permissions of the target
returned: success
type: str
sample: "0644"
msg:
description: the HTTP message from the request
returned: always
type: str
sample: OK (unknown bytes)
owner:
description: owner of the file
returned: success
type: str
sample: httpd
secontext:
description: the SELinux security context of the file
returned: success
type: str
sample: unconfined_u:object_r:user_tmp_t:s0
size:
description: size of the target
returned: success
type: int
sample: 1220
src:
description: source file used after download
returned: always
type: str
sample: /tmp/tmpAdFLdV
state:
description: state of the target
returned: success
type: str
sample: file
status_code:
description: the HTTP status code from the request
returned: always
type: int
sample: 200
uid:
description: owner id of the file, after execution
returned: success
type: int
sample: 100
url:
description: the actual URL used for the request
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import os
import re
import shutil
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url, url_argument_spec
# ==============================================================
# url handling
def url_filename(url):
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
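# For example (illustrative only):
#   url_filename('https://example.com/dir/file.conf') -> 'file.conf'
#   url_filename('https://example.com/')              -> 'index.html'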
def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest='', method='GET'):
"""
Download data from the url and store in a temporary file.
Return (tempfile, info about the request)
"""
start = datetime.datetime.utcnow()
rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method)
elapsed = (datetime.datetime.utcnow() - start).seconds
if info['status'] == 304:
module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed)
# Exceptions in fetch_url may result in a status -1; this ensures a proper error to the user in all cases
if info['status'] == -1:
module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed)
if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed)
# create a temporary file and copy content to do checksum-based replacement
if tmp_dest:
# tmp_dest should be an existing dir
tmp_dest_is_dir = os.path.isdir(tmp_dest)
if not tmp_dest_is_dir:
if os.path.exists(tmp_dest):
module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed)
else:
module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed)
else:
tmp_dest = module.tmpdir
fd, tempname = tempfile.mkstemp(dir=tmp_dest)
f = os.fdopen(fd, 'wb')
try:
shutil.copyfileobj(rsp, f)
except Exception as e:
os.remove(tempname)
module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc())
f.close()
rsp.close()
return tempname, info
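# Note: url_get returns the path of a temporary file holding the downloaded
# payload plus the info dict from fetch_url; the caller owns that temp file
# (main() below removes it on its failure paths).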
def extract_filename_from_headers(headers):
"""
Extracts a filename from the given dict of HTTP headers.
Looks for the content-disposition header and applies a regex.
Returns the filename if successful, else None."""
cont_disp_regex = 'attachment; ?filename="?([^"]+)'
res = None
if 'content-disposition' in headers:
cont_disp = headers['content-disposition']
match = re.match(cont_disp_regex, cont_disp)
if match:
res = match.group(1)
# Try preventing any funny business.
res = os.path.basename(res)
return res
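# For example (illustrative only), a header dict such as
#   {'content-disposition': 'attachment; filename="foo.conf"'}
# yields 'foo.conf', while headers without content-disposition yield None.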
def is_url(checksum):
"""
Returns True if checksum value has supported URL scheme, else False."""
supported_schemes = ('http', 'https', 'ftp', 'file')
return urlsplit(checksum).scheme in supported_schemes
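# For example, is_url('sha256:b5bb9d80...') is False while
# is_url('http://example.com/sha256sum.txt') is True; this is how the checksum
# parameter distinguishes a literal digest from a checksum file URL.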
# ==============================================================
# main
def main():
argument_spec = url_argument_spec()
# setup aliases
argument_spec['url_username']['aliases'] = ['username']
argument_spec['url_password']['aliases'] = ['password']
argument_spec.update(
url=dict(type='str', required=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
sha256sum=dict(type='str', default=''),
checksum=dict(type='str', default=''),
timeout=dict(type='int', default=10),
headers=dict(type='dict'),
tmp_dest=dict(type='path'),
)
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True,
mutually_exclusive=[['checksum', 'sha256sum']],
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead',
version='2.13', collection_name='ansible.builtin')
if module.params.get('sha256sum'):
module.deprecate('The parameter "sha256sum" has been deprecated and will be removed, use "checksum" instead',
version='2.14', collection_name='ansible.builtin')
url = module.params['url']
dest = module.params['dest']
backup = module.params['backup']
force = module.params['force']
sha256sum = module.params['sha256sum']
checksum = module.params['checksum']
use_proxy = module.params['use_proxy']
timeout = module.params['timeout']
headers = module.params['headers']
tmp_dest = module.params['tmp_dest']
result = dict(
changed=False,
checksum_dest=None,
checksum_src=None,
dest=dest,
elapsed=0,
url=url,
)
dest_is_dir = os.path.isdir(dest)
last_mod_time = None
# workaround for usage of deprecated sha256sum parameter
if sha256sum:
checksum = 'sha256:%s' % (sha256sum)
# checksum specified, parse for algorithm and checksum
if checksum:
try:
algorithm, checksum = checksum.split(':', 1)
except ValueError:
module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result)
if is_url(checksum):
checksum_url = checksum
# download checksum file to checksum_tmpsrc
checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
with open(checksum_tmpsrc) as f:
lines = [line.rstrip('\n') for line in f]
os.remove(checksum_tmpsrc)
checksum_map = []
for line in lines:
parts = line.split(None, 1)
if len(parts) == 2:
checksum_map.append((parts[0], parts[1]))
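            # Each mapped entry mirrors the output of sha256sum and friends,
            # e.g. "b1946ac9...  ./myfile.tar.gz" -> ('b1946ac9...', './myfile.tar.gz')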
filename = url_filename(url)
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
for cksum in (s for (s, f) in checksum_map if f.strip('./') == filename):
checksum = cksum
break
else:
checksum = None
if checksum is None:
module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url))
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
checksum = re.sub(r'\W+', '', checksum).lower()
# Ensure the checksum portion is a hexdigest
try:
int(checksum, 16)
except ValueError:
module.fail_json(msg='The checksum format is invalid', **result)
if not dest_is_dir and os.path.exists(dest):
checksum_mismatch = False
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
if not force and checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
checksum_mismatch = True
# Not forcing redownload, unless checksum does not match
if not force and checksum and not checksum_mismatch:
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, False)
if result['changed']:
module.exit_json(msg="file already exists but file attributes changed", **result)
module.exit_json(msg="file already exists", **result)
# If the file already exists, prepare the last modified time for the
# request.
mtime = os.path.getmtime(dest)
last_mod_time = datetime.datetime.utcfromtimestamp(mtime)
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
if checksum_mismatch:
force = True
# download to tmpsrc
start = datetime.datetime.utcnow()
method = 'HEAD' if module.check_mode else 'GET'
tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest, method)
result['elapsed'] = (datetime.datetime.utcnow() - start).seconds
result['src'] = tmpsrc
# Now the request has completed, we can finally generate the final
# destination file name from the info dict.
if dest_is_dir:
filename = extract_filename_from_headers(info)
if not filename:
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# it.
filename = url_filename(info['url'])
dest = os.path.join(dest, filename)
result['dest'] = dest
    # raise an error if there is no tmpsrc file
    if not os.path.exists(tmpsrc):
        # nothing to remove here; the temporary file never materialized
        module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result)
result['checksum_src'] = module.sha1(tmpsrc)
# check if there is no dest file
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (dest), **result)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not readable" % (dest), **result)
result['checksum_dest'] = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(dest)):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result)
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result)
if module.check_mode:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
result['changed'] = ('checksum_dest' not in result or
result['checksum_src'] != result['checksum_dest'])
module.exit_json(msg=info.get('msg', ''), **result)
backup_file = None
if result['checksum_src'] != result['checksum_dest']:
try:
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)),
exception=traceback.format_exc(), **result)
result['changed'] = True
else:
result['changed'] = False
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
if checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
os.remove(dest)
module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result)
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed'])
# Backwards compat only. We'll return None on FIPS enabled systems
try:
result['md5sum'] = module.md5(dest)
except ValueError:
result['md5sum'] = None
if backup_file:
result['backup_file'] = backup_file
# Mission complete
module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,960 |
ansible.builtin.password doc does not list possible values for chars parameter
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/password_lookup.html :
> Define comma separated list of names that compose a custom character set in the generated passwords.
> They can be either parts of Python's string module attributes (ascii_letters,digits, etc) or are used literally ( :, -).
I suggest listing all possible names, or at least adding a link to the Python string module documentation: https://docs.python.org/3/library/string.html
Currently you have to search for the relevant Python documentation yourself.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
ansible.builtin.password
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
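For context, each name is first looked up as an attribute of Python's `string` module and only falls back to being used literally; a quick standalone illustration (example names only):
```python
import string

# Names matching string-module attributes expand to those character sets
for name in ('ascii_letters', 'digits', 'punctuation'):
    print(name, '->', getattr(string, name))

# Anything that is not an attribute (for example ':' or '-') is used as-is
```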
|
https://github.com/ansible/ansible/issues/72960
|
https://github.com/ansible/ansible/pull/73468
|
c8ee186e11bfdf8a3e17c0a6226caaf587fa2e89
|
5078a0baa26e0eb715e86c93ec32af6bc4022e45
| 2020-12-14T09:42:07Z |
python
| 2021-02-18T23:22:16Z |
lib/ansible/plugins/lookup/password.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2013, Javier Candeira <[email protected]>
# (c) 2013, Maykel Moya <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: password
version_added: "1.1"
author:
- Daniel Hokka Zakrisson (!UNKNOWN) <[email protected]>
- Javier Candeira (!UNKNOWN) <[email protected]>
- Maykel Moya (!UNKNOWN) <[email protected]>
short_description: retrieve or generate a random password, stored in a file
description:
- Generates a random plaintext password and stores it in a file at a given filepath.
- If the file exists previously, it will retrieve its contents, behaving just like with_file.
- 'Usage of variables like C("{{ inventory_hostname }}") in the filepath can be used to set up random passwords per host,
which simplifies password management in C("host_vars") variables.'
- A special case is using /dev/null as a path. The password lookup will generate a new random password each time,
but will not write it to /dev/null. This can be used when you need a password without storing it on the controller.
options:
_terms:
description:
- path to the file that stores/will store the passwords
required: True
encrypt:
description:
      - Which hash scheme to use to encrypt the returned password; should be one hash scheme from C(passlib.hash; md5_crypt, bcrypt, sha256_crypt, sha512_crypt)
- If not provided, the password will be returned in plain text.
- Note that the password is always stored as plain text, only the returning password is encrypted.
- Encrypt also forces saving the salt value for idempotence.
- Note that before 2.6 this option was incorrectly labeled as a boolean for a long time.
chars:
version_added: "1.4"
description:
- Define comma separated list of names that compose a custom character set in the generated passwords.
- 'By default generated passwords contain a random mix of upper and lowercase ASCII letters, the numbers 0-9 and punctuation (". , : - _").'
- "They can be either parts of Python's string module attributes (ascii_letters,digits, etc) or are used literally ( :, -)."
- "Other valid values include 'ascii_lowercase', 'ascii_uppercase', 'digits', 'hexdigits', 'octdigits', 'printable', 'punctuation' and 'whitespace'."
- Be aware that Python's 'hexdigits' includes lower and upper case version of a-f, so it is not a good choice as it doubles
the chances of those values for systems that won't distinguish case, distorting the expected entropy.
- "To enter comma use two commas ',,' somewhere - preferably at the end. Quotes and double quotes are not supported."
type: string
length:
description: The length of the generated password.
default: 20
type: integer
notes:
- A great alternative to the password lookup plugin,
if you don't need to generate random passwords on a per-host basis,
would be to use Vault in playbooks.
      Read the documentation there and consider using it first;
      it will be more desirable for most applications.
- If the file already exists, no data will be written to it.
If the file has contents, those contents will be read in as the password.
Empty files cause the password to return as an empty string.
    - 'As with all lookups, this runs on the Ansible host as the user running the playbook, and "become" does not apply,
the target file must be readable by the playbook user, or, if it does not exist,
the playbook user must have sufficient privileges to create it.
(So, for example, attempts to write into areas such as /etc will fail unless the entire playbook is being run as root).'
"""
EXAMPLES = """
- name: create a mysql user with a random password
mysql_user:
name: "{{ client }}"
password: "{{ lookup('password', 'credentials/' + client + '/' + tier + '/' + role + '/mysqlpassword length=15') }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create a mysql user with a random password using only ascii letters
mysql_user:
name: "{{ client }}"
password: "{{ lookup('password', '/tmp/passwordfile chars=ascii_letters') }}"
priv: '{{ client }}_{{ tier }}_{{ role }}.*:ALL'
- name: create a mysql user with an 8 character random password using only digits
mysql_user:
name: "{{ client }}"
password: "{{ lookup('password', '/tmp/passwordfile length=8 chars=digits') }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create a mysql user with a random password using many different char sets
mysql_user:
name: "{{ client }}"
password: "{{ lookup('password', '/tmp/passwordfile chars=ascii_letters,digits,punctuation') }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create lowercase 8 character name for Kubernetes pod name
set_fact:
random_pod_name: "web-{{ lookup('password', '/dev/null chars=ascii_lowercase,digits length=8') }}"
"""
RETURN = """
_raw:
description:
- a password
type: list
elements: str
"""
import os
import string
import time
import shutil
import hashlib
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.splitter import parse_kv
from ansible.plugins.lookup import LookupBase
from ansible.utils.encrypt import BaseHash, do_encrypt, random_password, random_salt
from ansible.utils.path import makedirs_safe
DEFAULT_LENGTH = 20
VALID_PARAMS = frozenset(('length', 'encrypt', 'chars'))
def _parse_parameters(term):
"""Hacky parsing of params
See https://github.com/ansible/ansible-modules-core/issues/1968#issuecomment-136842156
    and the first_found lookup for how we want to fix this later
"""
first_split = term.split(' ', 1)
if len(first_split) <= 1:
# Only a single argument given, therefore it's a path
relpath = term
params = dict()
else:
relpath = first_split[0]
params = parse_kv(first_split[1])
if '_raw_params' in params:
# Spaces in the path?
relpath = u' '.join((relpath, params['_raw_params']))
del params['_raw_params']
# Check that we parsed the params correctly
if not term.startswith(relpath):
# Likely, the user had a non parameter following a parameter.
# Reject this as a user typo
raise AnsibleError('Unrecognized value after key=value parameters given to password lookup')
# No _raw_params means we already found the complete path when
# we split it initially
# Check for invalid parameters. Probably a user typo
invalid_params = frozenset(params.keys()).difference(VALID_PARAMS)
if invalid_params:
raise AnsibleError('Unrecognized parameter(s) given to password lookup: %s' % ', '.join(invalid_params))
# Set defaults
params['length'] = int(params.get('length', DEFAULT_LENGTH))
params['encrypt'] = params.get('encrypt', None)
params['chars'] = params.get('chars', None)
if params['chars']:
tmp_chars = []
if u',,' in params['chars']:
tmp_chars.append(u',')
tmp_chars.extend(c for c in params['chars'].replace(u',,', u',').split(u',') if c)
params['chars'] = tmp_chars
else:
# Default chars for password
params['chars'] = [u'ascii_letters', u'digits', u".,:-_"]
return relpath, params
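# Illustrative parse (assuming the defaults above):
#   _parse_parameters(u'/tmp/pw length=8 chars=digits')
#   -> (u'/tmp/pw', {'length': 8, 'encrypt': None, 'chars': [u'digits']})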
def _read_password_file(b_path):
"""Read the contents of a password file and return it
:arg b_path: A byte string containing the path to the password file
:returns: a text string containing the contents of the password file or
None if no password file was present.
"""
content = None
if os.path.exists(b_path):
with open(b_path, 'rb') as f:
b_content = f.read().rstrip()
content = to_text(b_content, errors='surrogate_or_strict')
return content
def _gen_candidate_chars(characters):
'''Generate a string containing all valid chars as defined by ``characters``
:arg characters: A list of character specs. The character specs are
shorthand names for sets of characters like 'digits', 'ascii_letters',
or 'punctuation' or a string to be included verbatim.
The values of each char spec can be:
    * a name of an attribute in the 'string' module ('digits' for example).
The value of the attribute will be added to the candidate chars.
* a string of characters. If the string isn't an attribute in 'string'
module, the string will be directly added to the candidate chars.
For example::
characters=['digits', '?|']``
will match ``string.digits`` and add all ascii digits. ``'?|'`` will add
the question mark and pipe characters directly. Return will be the string::
u'0123456789?|'
'''
chars = []
for chars_spec in characters:
# getattr from string expands things like "ascii_letters" and "digits"
# into a set of characters.
chars.append(to_text(getattr(string, to_native(chars_spec), chars_spec),
errors='strict'))
chars = u''.join(chars).replace(u'"', u'').replace(u"'", u'')
return chars
def _parse_content(content):
'''parse our password data format into password and salt
:arg content: The data read from the file
:returns: password and salt
'''
password = content
salt = None
salt_slug = u' salt='
try:
sep = content.rindex(salt_slug)
except ValueError:
# No salt
pass
else:
salt = password[sep + len(salt_slug):]
password = content[:sep]
return password, salt
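# e.g. _parse_content(u'hunter2 salt=s3cr3t') -> (u'hunter2', u's3cr3t');
# content without a ' salt=' slug comes back as (content, None).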
def _format_content(password, salt, encrypt=None):
"""Format the password and salt for saving
:arg password: the plaintext password to save
:arg salt: the salt to use when encrypting a password
:arg encrypt: Which method the user requests that this password is encrypted.
Note that the password is saved in clear. Encrypt just tells us if we
must save the salt value for idempotence. Defaults to None.
:returns: a text string containing the formatted information
.. warning:: Passwords are saved in clear. This is because the playbooks
expect to get cleartext passwords from this lookup.
"""
if not encrypt and not salt:
return password
# At this point, the calling code should have assured us that there is a salt value.
if not salt:
raise AnsibleAssertionError('_format_content was called with encryption requested but no salt value')
return u'%s salt=%s' % (password, salt)
def _write_password_file(b_path, content):
b_pathdir = os.path.dirname(b_path)
makedirs_safe(b_pathdir, mode=0o700)
with open(b_path, 'wb') as f:
os.chmod(b_path, 0o600)
b_content = to_bytes(content, errors='surrogate_or_strict') + b'\n'
f.write(b_content)
def _get_lock(b_path):
"""Get the lock for writing password file."""
first_process = False
b_pathdir = os.path.dirname(b_path)
lockfile_name = to_bytes("%s.ansible_lockfile" % hashlib.sha1(b_path).hexdigest())
lockfile = os.path.join(b_pathdir, lockfile_name)
if not os.path.exists(lockfile) and b_path != to_bytes('/dev/null'):
try:
makedirs_safe(b_pathdir, mode=0o700)
fd = os.open(lockfile, os.O_CREAT | os.O_EXCL)
os.close(fd)
first_process = True
except OSError as e:
if e.strerror != 'File exists':
raise
counter = 0
# if the lock is got by other process, wait until it's released
while os.path.exists(lockfile) and not first_process:
time.sleep(2 ** counter)
if counter >= 2:
raise AnsibleError("Password lookup cannot get the lock in 7 seconds, abort..."
"This may caused by un-removed lockfile"
"you can manually remove it from controller machine at %s and try again" % lockfile)
counter += 1
return first_process, lockfile
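# Intended call pattern (see LookupModule.run below for the real usage):
#   first_process, lockfile = _get_lock(b_path)
#   ...read or write the password file...
#   if first_process:
#       _release_lock(lockfile)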
def _release_lock(lockfile):
"""Release the lock so other processes can read the password file."""
if os.path.exists(lockfile):
os.remove(lockfile)
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
ret = []
for term in terms:
relpath, params = _parse_parameters(term)
path = self._loader.path_dwim(relpath)
b_path = to_bytes(path, errors='surrogate_or_strict')
chars = _gen_candidate_chars(params['chars'])
changed = None
# make sure only one process finishes all the job first
first_process, lockfile = _get_lock(b_path)
content = _read_password_file(b_path)
if content is None or b_path == to_bytes('/dev/null'):
plaintext_password = random_password(params['length'], chars)
salt = None
changed = True
else:
plaintext_password, salt = _parse_content(content)
encrypt = params['encrypt']
if encrypt and not salt:
changed = True
try:
salt = random_salt(BaseHash.algorithms[encrypt].salt_size)
except KeyError:
salt = random_salt()
if changed and b_path != to_bytes('/dev/null'):
content = _format_content(plaintext_password, salt, encrypt=encrypt)
_write_password_file(b_path, content)
if first_process:
# let other processes continue
_release_lock(lockfile)
if encrypt:
password = do_encrypt(plaintext_password, encrypt, salt=salt)
ret.append(password)
else:
ret.append(plaintext_password)
return ret
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,197 |
[2]Create banners to differentiate core from Ansible docs
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
As part of #72032, we need a banner that shows up on Ansible docs to say it's ansible, not ansible-core, and a similar banner on ansible-core docs to say it's core, not ansible. This may get crowded with the existing banners, so it might require trimming them. Each banner should also link to the other docsite.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/73197
|
https://github.com/ansible/ansible/pull/73552
|
f4eab92aa019af4a3d551fdccaea8c37aae4eb3c
|
29d395146462757aee8b16117099b31e98112eac
| 2021-01-12T19:39:30Z |
python
| 2021-02-23T20:54:17Z |
docs/docsite/_themes/sphinx_rtd_theme/ansible_banner.html
|
<!--- Based on sphinx versionwarning extension. Extension currently only works on READTHEDOCS -->
<script>
  var startsWith = function(str, needle) {
    return str.slice(0, needle.length) == needle
  }
// Create a banner if we're not on the official docs site
if (location.host == "docs.testing.ansible.com") {
document.write('<div id="testing_banner_id" class="admonition important">');
document.write('<p>This is the testing site for Ansible Documentation. Unless you are reviewing pre-production changes, please visit the <a href="https://docs.ansible.com/ansible/latest/">official documentation website</a>.</p> <p></p>');
document.write('</div>');
}
{% if (not READTHEDOCS) and (available_versions is defined) %}
// Create a banner
current_url_path = window.location.pathname;
if (startsWith(current_url_path, "/ansible/latest/") || startsWith(current_url_path, "/ansible/{{ latest_version }}/")) {
document.write('<div id="banner_id" class="admonition caution">');
document.write('<p>You are reading the latest community version of the Ansible documentation. Red Hat subscribers, select <b>2.9</b> in the version selection to the left for the most recent Red Hat release.</p>');
document.write('</div>');
} else if (startsWith(current_url_path, "/ansible/2.9/")) {
document.write('<div id="banner_id" class="admonition caution">');
document.write('<p>You are reading the latest Red Hat released version of the Ansible documentation. Community users can use this, or select any version in version selection to the left, including <b>latest</b> for the most recent community version.</p>');
document.write('</div>');
} else if (startsWith(current_url_path, "/ansible/devel/")) {
/* temp banner to advertise survey
document.write('<div id="banner_id" class="admonition important">');
document.write('<br><p>Please take our <a href="https://www.surveymonkey.co.uk/r/B9V3CDY">Docs survey</a> before December 31 to help us improve Ansible documentation.</p><br>');
document.write('</div>'); */
document.write('<div id="banner_id" class="admonition caution">');
document.write('<p>You are reading the <b>devel</b> version of the Ansible documentation - this version is not guaranteed stable. Use the version selection to the left if you want the latest stable released version.</p>');
document.write('</div>');
} else {
document.write('<div id="banner_id" class="admonition caution">');
document.write('<p>You are reading an older version of the Ansible documentation. Use the version selection to the left if you want the latest stable released version.</p>');
document.write('</div>');
}
{% endif %}
</script>
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,709 |
Configuration Settings example on docs.ansible.com produces error
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
On the following page: https://docs.ansible.com/ansible/latest/reference_appendices/config.html
In the first 'Note:' section there is the following text:
```
# some basic default values...
inventory = /etc/ansible/hosts ; This points to the file that lists your hosts
```
When I paste only the above line in a new ~/.ansible.cfg file, I get the following warning message for the ansible ping command:
```
[host]> ansible all -m ping
[WARNING]: Unable to parse /etc/ansible/hosts ; This points to the file that lists your
hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
```
When I remove the comment text starting from ';' to the end of the line from the file '~/.ansible.cfg' and re-run the command, the above issue is resolved.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
_Btw, the 'Edit on GitHub' in the above link takes me to a 404 Not Found page._
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com website
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.6
<removed personally identifiable information from the output>
python version = 3.8.2 (default, Dec 21 2020, 15:06:04) [Clang 12.0.0 (clang-1200.0.32.29)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST <this is the only line, but I've removed personally identifiable information here>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Chrome browser
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
I'm a new user of ansible but I believe the comment guidance on the above-mentioned link is an error.
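For reference, this matches Python's configparser defaults: inline comments are only stripped when `inline_comment_prefixes` is set. A minimal standalone illustration (not the Ansible code itself):
```python
import configparser

raw = "[defaults]\ninventory = /etc/ansible/hosts ; points to the hosts file\n"

default_parser = configparser.ConfigParser()
default_parser.read_string(raw)
print(repr(default_parser['defaults']['inventory']))
# '/etc/ansible/hosts ; points to the hosts file' -- comment kept, path unusable

lenient_parser = configparser.ConfigParser(inline_comment_prefixes=(';',))
lenient_parser.read_string(raw)
print(repr(lenient_parser['defaults']['inventory']))
# '/etc/ansible/hosts' -- inline comment stripped
```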
|
https://github.com/ansible/ansible/issues/73709
|
https://github.com/ansible/ansible/pull/73715
|
eb72c36a71c8bf786d575a31246f602ad69cc9c9
|
950ab74758a6014639236612594118b2b6f4751e
| 2021-02-24T03:08:25Z |
python
| 2021-02-25T17:03:03Z |
changelogs/fragments/73709-normalize-configparser.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,709 |
Configuration Settings example on docs.ansible.com produces error
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
On the following page: https://docs.ansible.com/ansible/latest/reference_appendices/config.html
In the first 'Note:' section there is the following text:
```
# some basic default values...
inventory = /etc/ansible/hosts ; This points to the file that lists your hosts
```
When I paste only the above line in a new ~/.ansible.cfg file, I get the following warning message for the ansible ping command:
```
[host]> ansible all -m ping
[WARNING]: Unable to parse /etc/ansible/hosts ; This points to the file that lists your
hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
```
When I remove the comment text starting from ';' to the end of the line from the file '~/.ansible.cfg' and re-run the command, the above issue is resolved.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
_Btw, the 'Edit on GitHub' in the above link takes me to a 404 Not Found page._
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com website
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.6
<removed personally identifiable information from the output>
python version = 3.8.2 (default, Dec 21 2020, 15:06:04) [Clang 12.0.0 (clang-1200.0.32.29)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST <this is the only line, but I've removed personally identifiable information here>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Chrome browser
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
I'm a new user of ansible but I believe the comment guidance on the above-mentioned link is an error.
|
https://github.com/ansible/ansible/issues/73709
|
https://github.com/ansible/ansible/pull/73715
|
eb72c36a71c8bf786d575a31246f602ad69cc9c9
|
950ab74758a6014639236612594118b2b6f4751e
| 2021-02-24T03:08:25Z |
python
| 2021-02-25T17:03:03Z |
lib/ansible/config/manager.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import io
import os
import os.path
import sys
import stat
import tempfile
import traceback
from collections import namedtuple
from yaml import load as yaml_load
try:
# use C version if possible for speedup
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeLoader
from ansible.config.data import ConfigData
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils._text import to_text, to_bytes, to_native
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.six.moves import configparser
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.quoting import unquote
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils import py3compat
from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath
Plugin = namedtuple('Plugin', 'name type')
Setting = namedtuple('Setting', 'name value origin type')
INTERNAL_DEFS = {'lookup': ('_terms',)}
def _get_entry(plugin_type, plugin_name, config):
''' construct entry for requested config '''
entry = ''
if plugin_type:
entry += 'plugin_type: %s ' % plugin_type
if plugin_name:
entry += 'plugin: %s ' % plugin_name
entry += 'setting: %s ' % config
return entry
# FIXME: see if we can unify in module_utils with similar function used by argspec
def ensure_type(value, value_type, origin=None):
''' return a configuration variable with casting
:arg value: The value to ensure correct typing of
:kwarg value_type: The type of the value. This can be any of the following strings:
:boolean: sets the value to a True or False value
:bool: Same as 'boolean'
        :integer: Sets the value to an integer or raises a ValueError
        :int: Same as 'integer'
        :float: Sets the value to a float or raises a ValueError
:list: Treats the value as a comma separated list. Split the value
and return it as a python list.
:none: Sets the value to None
        :path: Expands any environment variables and tildes in the value.
:tmppath: Create a unique temporary directory inside of the directory
specified by value and return its path.
:temppath: Same as 'tmppath'
:tmp: Same as 'tmppath'
:pathlist: Treat the value as a typical PATH string. (On POSIX, this
means colon separated strings.) Split the value and then expand
each part for environment variables and tildes.
        :pathspec: Treat the value as a PATH string. Expands any environment variables
            and tildes in the value.
:str: Sets the value to string types.
:string: Same as 'str'
'''
errmsg = ''
basedir = None
if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)):
basedir = origin
if value_type:
value_type = value_type.lower()
if value is not None:
if value_type in ('boolean', 'bool'):
value = boolean(value, strict=False)
elif value_type in ('integer', 'int'):
value = int(value)
elif value_type == 'float':
value = float(value)
elif value_type == 'list':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
elif not isinstance(value, Sequence):
errmsg = 'list'
elif value_type == 'none':
if value == "None":
value = None
if value is not None:
errmsg = 'None'
elif value_type == 'path':
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
else:
errmsg = 'path'
elif value_type in ('tmp', 'temppath', 'tmppath'):
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
if not os.path.exists(value):
makedirs_safe(value, 0o700)
prefix = 'ansible-local-%s' % os.getpid()
value = tempfile.mkdtemp(prefix=prefix, dir=value)
atexit.register(cleanup_tmp_file, value, warn=True)
else:
errmsg = 'temppath'
elif value_type == 'pathspec':
if isinstance(value, string_types):
value = value.split(os.pathsep)
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathspec'
elif value_type == 'pathlist':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathlist'
elif value_type in ('dict', 'dictionary'):
if not isinstance(value, Mapping):
errmsg = 'dictionary'
elif value_type in ('str', 'string'):
if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode, bool, int, float, complex)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
else:
errmsg = 'string'
# defaults to string type
elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
if errmsg:
raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value)))
return to_text(value, errors='surrogate_or_strict', nonstring='passthru')
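# Illustrative conversions:
#   ensure_type('yes', 'bool')  -> True
#   ensure_type('a, b', 'list') -> ['a', 'b']
#   ensure_type(42, 'str')      -> u'42'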
# FIXME: see if this can live in utils/path
def resolve_path(path, basedir=None):
''' resolve relative or 'variable' paths '''
if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}}
path = path.replace('{{CWD}}', os.getcwd())
return unfrackpath(path, follow=False, basedir=basedir)
# FIXME: generic file type?
def get_config_type(cfile):
ftype = None
if cfile is not None:
ext = os.path.splitext(cfile)[-1]
if ext in ('.ini', '.cfg'):
ftype = 'ini'
elif ext in ('.yaml', '.yml'):
ftype = 'yaml'
else:
raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext)))
return ftype
# FIXME: can move to module_utils for use for ini plugins also?
def get_ini_config_value(p, entry):
''' returns the value of last ini entry found '''
value = None
if p is not None:
try:
value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
except Exception: # FIXME: actually report issues here
pass
return value
def find_ini_config_file(warnings=None):
''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''
# FIXME: eventually deprecate ini configs
if warnings is None:
# Note: In this case, warnings does nothing
warnings = set()
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
SENTINEL = object
potential_paths = []
# Environment setting
path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL)
if path_from_env is not SENTINEL:
path_from_env = unfrackpath(path_from_env, follow=False)
if os.path.isdir(to_bytes(path_from_env)):
path_from_env = os.path.join(path_from_env, "ansible.cfg")
potential_paths.append(path_from_env)
# Current working directory
warn_cmd_public = False
try:
cwd = os.getcwd()
perms = os.stat(cwd)
cwd_cfg = os.path.join(cwd, "ansible.cfg")
if perms.st_mode & stat.S_IWOTH:
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
if os.path.exists(cwd_cfg):
warn_cmd_public = True
else:
potential_paths.append(to_text(cwd_cfg, errors='surrogate_or_strict'))
except OSError:
# If we can't access cwd, we'll simply skip it as a possible config source
pass
# Per user location
potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False))
# System location
potential_paths.append("/etc/ansible/ansible.cfg")
for path in potential_paths:
b_path = to_bytes(path)
if os.path.exists(b_path) and os.access(b_path, os.R_OK):
break
else:
path = None
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
if path_from_env != path and warn_cmd_public:
warnings.add(u"Ansible is being run in a world writable directory (%s),"
u" ignoring it as an ansible.cfg source."
u" For more information see"
u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir"
% to_text(cwd))
return path
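# Resulting search order (first readable file wins):
#   $ANSIBLE_CONFIG -> ./ansible.cfg (skipped if cwd is world-writable) -> ~/.ansible.cfg -> /etc/ansible/ansible.cfg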
def _add_base_defs_deprecations(base_defs):
'''Add deprecation source 'ansible.builtin' to deprecations in base.yml'''
def process(entry):
if 'deprecated' in entry:
entry['deprecated']['collection_name'] = 'ansible.builtin'
for dummy, data in base_defs.items():
process(data)
for section in ('ini', 'env', 'vars'):
if section in data:
for entry in data[section]:
process(entry)
class ConfigManager(object):
DEPRECATED = []
WARNINGS = set()
def __init__(self, conf_file=None, defs_file=None):
self._base_defs = {}
self._plugins = {}
self._parsers = {}
self._config_file = conf_file
self.data = ConfigData()
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
_add_base_defs_deprecations(self._base_defs)
if self._config_file is None:
# set config using ini
self._config_file = find_ini_config_file(self.WARNINGS)
# consume configuration
if self._config_file:
# initialize parser and read config
self._parse_config_file()
# update constants
self.update_config_data()
def _read_config_yaml_file(self, yml_file):
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
yml_file = to_bytes(yml_file)
if os.path.exists(yml_file):
with open(yml_file, 'rb') as config_def:
return yaml_load(config_def, Loader=SafeLoader) or {}
raise AnsibleError(
"Missing base YAML definition file (bad install?): %s" % to_native(yml_file))
def _parse_config_file(self, cfile=None):
''' return flat configuration settings from file(s) '''
# TODO: take list of files with merge/nomerge
if cfile is None:
cfile = self._config_file
ftype = get_config_type(cfile)
if cfile is not None:
if ftype == 'ini':
self._parsers[cfile] = configparser.ConfigParser()
with open(to_bytes(cfile), 'rb') as f:
try:
cfg_text = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError as e:
raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e)))
try:
if PY3:
self._parsers[cfile].read_string(cfg_text)
else:
cfg_file = io.StringIO(cfg_text)
self._parsers[cfile].readfp(cfg_file)
except configparser.Error as e:
raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e)))
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# with open(cfile, 'rb') as config_stream:
# self._parsers[cfile] = yaml.safe_load(config_stream)
else:
raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype))
def _find_yaml_config_files(self):
''' Load YAML Config Files in order, check merge flags, keep origin of settings'''
pass
def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None):
options = {}
defs = self.get_configuration_definitions(plugin_type, name)
for option in defs:
options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct)
return options
def get_plugin_vars(self, plugin_type, name):
pvars = []
for pdef in self.get_configuration_definitions(plugin_type, name).values():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
pvars.append(var_entry['name'])
return pvars
def get_configuration_definition(self, name, plugin_type=None, plugin_name=None):
ret = {}
if plugin_type is None:
ret = self._base_defs.get(name, None)
elif plugin_name is None:
ret = self._plugins.get(plugin_type, {}).get(name, None)
else:
ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None)
return ret
def get_configuration_definitions(self, plugin_type=None, name=None, ignore_private=False):
''' just list the possible settings, either base or for specific plugins or plugin '''
ret = {}
if plugin_type is None:
ret = self._base_defs
elif name is None:
ret = self._plugins.get(plugin_type, {})
else:
ret = self._plugins.get(plugin_type, {}).get(name, {})
if ignore_private:
for cdef in list(ret.keys()):
if cdef.startswith('_'):
del ret[cdef]
return ret
def _loop_entries(self, container, entry_list):
''' repeat code for value entry assignment '''
value = None
origin = None
for entry in entry_list:
name = entry.get('name')
try:
temp_value = container.get(name, None)
except UnicodeEncodeError:
self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name)))
continue
if temp_value is not None: # only set if entry is defined in container
# inline vault variables should be converted to a text string
if isinstance(temp_value, AnsibleVaultEncryptedUnicode):
temp_value = to_text(temp_value, errors='surrogate_or_strict')
value = temp_value
origin = name
# deal with deprecation of setting source, if used
if 'deprecated' in entry:
self.DEPRECATED.append((entry['name'], entry['deprecated']))
return value, origin
def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' wrapper '''
try:
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
keys=keys, variables=variables, direct=direct)
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e)
return value
def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' Given a config key figure out the actual value and report on the origin of the settings '''
if cfile is None:
# use default config
cfile = self._config_file
        # Note: sources that are lists are ordered from low to high precedence (the last one wins)
value = None
origin = None
defs = self.get_configuration_definitions(plugin_type, plugin_name)
if config in defs:
aliases = defs[config].get('aliases', [])
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
direct_aliases = []
if direct:
direct_aliases = [direct[alias] for alias in aliases if alias in direct]
if direct and config in direct:
value = direct[config]
origin = 'Direct'
elif direct and direct_aliases:
value = direct_aliases[0]
origin = 'Direct'
else:
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
if variables and defs[config].get('vars'):
value, origin = self._loop_entries(variables, defs[config]['vars'])
origin = 'var: %s' % origin
# use playbook keywords if you have em
if value is None and keys:
if config in keys:
value = keys[config]
keyword = config
elif aliases:
for alias in aliases:
if alias in keys:
value = keys[alias]
keyword = alias
break
if value is not None:
origin = 'keyword: %s' % keyword
# env vars are next precedence
if value is None and defs[config].get('env'):
value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
origin = 'env: %s' % origin
# try config file entries next, if we have one
if self._parsers.get(cfile, None) is None:
self._parse_config_file(cfile)
if value is None and cfile is not None:
ftype = get_config_type(cfile)
if ftype and defs[config].get(ftype):
if ftype == 'ini':
# load from ini config
try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe
for ini_entry in defs[config]['ini']:
temp_value = get_ini_config_value(self._parsers[cfile], ini_entry)
if temp_value is not None:
value = temp_value
origin = cfile
if 'deprecated' in ini_entry:
self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated']))
except Exception as e:
sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e)))
elif ftype == 'yaml':
# FIXME: implement, also , break down key from defs (. notation???)
origin = cfile
# set default if we got here w/o a value
if value is None:
if defs[config].get('required', False):
if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}):
raise AnsibleError("No setting was provided for required configuration %s" %
to_native(_get_entry(plugin_type, plugin_name, config)))
else:
value = defs[config].get('default')
origin = 'default'
# skip typing as this is a templated default that will be resolved later in constants, which has needed vars
if plugin_type is None and isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')):
return value, origin
# ensure correct type, can raise exceptions on mismatched types
try:
value = ensure_type(value, defs[config].get('type'), origin=origin)
except ValueError as e:
if origin.startswith('env:') and value == '':
# this is empty env var for non string so we can set to default
origin = 'default'
value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin)
else:
raise AnsibleOptionsError('Invalid type for configuration option %s: %s' %
(to_native(_get_entry(plugin_type, plugin_name, config)), to_native(e)))
# deal with restricted values
if value is not None and 'choices' in defs[config] and defs[config]['choices'] is not None:
if value not in defs[config]['choices']:
raise AnsibleOptionsError('Invalid value "%s" for configuration option "%s", valid values are: %s' %
(value, to_native(_get_entry(plugin_type, plugin_name, config)), defs[config]['choices']))
# deal with deprecation of the setting
if 'deprecated' in defs[config] and origin != 'default':
self.DEPRECATED.append((config, defs[config].get('deprecated')))
else:
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
return value, origin
def initialize_plugin_configuration_definitions(self, plugin_type, name, defs):
if plugin_type not in self._plugins:
self._plugins[plugin_type] = {}
self._plugins[plugin_type][name] = defs
def update_config_data(self, defs=None, configfile=None):
''' really: update constants '''
if defs is None:
defs = self._base_defs
if configfile is None:
configfile = self._config_file
if not isinstance(defs, dict):
raise AnsibleOptionsError("Invalid configuration definition type: %s for %s" % (type(defs), defs))
# update the constant for config file
self.data.update_setting(Setting('CONFIG_FILE', configfile, '', 'string'))
origin = None
# env and config defs can have several entries, ordered in list from lowest to highest precedence
for config in defs:
if not isinstance(defs[config], dict):
raise AnsibleOptionsError("Invalid configuration definition '%s': type is %s" % (to_native(config), type(defs[config])))
# get value and origin
try:
value, origin = self.get_config_value_and_origin(config, configfile)
except Exception as e:
# Printing the problem here because, in the current code:
# (1) we can't reach the error handler for AnsibleError before we
# hit a different error due to lack of working config.
# (2) We don't have access to display yet because display depends on config
# being properly loaded.
#
# If we start getting double errors printed from this section of code, then the
# above problem #1 has been fixed. Revamp this to be more like the try: except
# in get_config_value() at that time.
sys.stderr.write("Unhandled error:\n %s\n\n" % traceback.format_exc())
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
# set the constant
self.data.update_setting(Setting(config, value, origin, defs[config].get('type', 'string')))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,709 |
Configuration Settings example on docs.ansible.com produces error
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
On the following page: https://docs.ansible.com/ansible/latest/reference_appendices/config.html
In the first 'Note:' section there is the following text:
```
# some basic default values...
inventory = /etc/ansible/hosts ; This points to the file that lists your hosts
```
When I paste only the above line in a new ~/.ansible.cfg file, I get the following warning message for the ansible ping command:
```
[host]> ansible all -m ping
[WARNING]: Unable to parse /etc/ansible/hosts ; This points to the file that lists your
hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
```
When I remove the comment text starting from ';' to the end of the line from the file '~/.ansible.cfg' and re-run the command, the above issue is resolved.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
_Btw, the 'Edit on GitHub' in the above link takes me to a 404 Not Found page._
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com website
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.6
<removed personally identifiable information from the output>
python version = 3.8.2 (default, Dec 21 2020, 15:06:04) [Clang 12.0.0 (clang-1200.0.32.29)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST <this is the only line, but I've removed personally identifiable information here>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Chrome browser
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
I'm a new user of ansible but I believe the comment guidance on the above-mentioned link is an error.
|
https://github.com/ansible/ansible/issues/73709
|
https://github.com/ansible/ansible/pull/73715
|
eb72c36a71c8bf786d575a31246f602ad69cc9c9
|
950ab74758a6014639236612594118b2b6f4751e
| 2021-02-24T03:08:25Z |
python
| 2021-02-25T17:03:03Z |
test/integration/targets/config/inline_comment_ansible.cfg
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,709 |
Configuration Settings example on docs.ansible.com produces error
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
On the following page: https://docs.ansible.com/ansible/latest/reference_appendices/config.html
In the first 'Note:' section there is the following text:
```
# some basic default values...
inventory = /etc/ansible/hosts ; This points to the file that lists your hosts
```
When I paste only the above line in a new ~/.ansible.cfg file, I get the following warning message for the ansible ping command:
```
[host]> ansible all -m ping
[WARNING]: Unable to parse /etc/ansible/hosts ; This points to the file that lists your
hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
```
When I remove the comment text starting from ';' to the end of the line from the file '~/.ansible.cfg' and re-run the command, the above issue is resolved.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
_Btw, the 'Edit on GitHub' in the above link takes me to a 404 Not Found page._
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com website
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.6
<removed personally identifiable information from the output>
python version = 3.8.2 (default, Dec 21 2020, 15:06:04) [Clang 12.0.0 (clang-1200.0.32.29)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST <this is the only line, but I've removed personally identifiable information here>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Chrome browser
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
I'm a new user of ansible but I believe the comment guidance on the above-mentioned link is an error.
|
https://github.com/ansible/ansible/issues/73709
|
https://github.com/ansible/ansible/pull/73715
|
eb72c36a71c8bf786d575a31246f602ad69cc9c9
|
950ab74758a6014639236612594118b2b6f4751e
| 2021-02-24T03:08:25Z |
python
| 2021-02-25T17:03:03Z |
test/integration/targets/config/runme.sh
|
#!/usr/bin/env bash
set -eux
# ignore empty env var and use default
# shellcheck disable=SC1007
ANSIBLE_TIMEOUT= ansible -m ping testhost -i ../../inventory "$@"
# env var is wrong type, this should be a fatal error pointing at the setting
ANSIBLE_TIMEOUT='lola' ansible -m ping testhost -i ../../inventory "$@" 2>&1|grep 'Invalid type for configuration option setting: DEFAULT_TIMEOUT'
# https://github.com/ansible/ansible/issues/69577
ANSIBLE_REMOTE_TMP="$HOME/.ansible/directory_with_no_space" ansible -m ping testhost -i ../../inventory "$@"
ANSIBLE_REMOTE_TMP="$HOME/.ansible/directory with space" ansible -m ping testhost -i ../../inventory "$@"
ANSIBLE_CONFIG=nonexistent.cfg ansible-config dump --only-changed -v | grep 'No config file found; using defaults'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,627 |
Module find not respecting depth parameter in 2.10
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible.builtin.find
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.5
config file = /Users/bryan.murdock/.ansible.cfg
configured module search path = ['/Users/bryan.murdock/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bryan.murdock/Library/Python/3.7/lib/python/site-packages/ansible
executable location = /Users/bryan.murdock/Library/Python/3.7/bin/ansible
python version = 3.7.3 (default, Apr 24 2020, 18:51:23) [Clang 11.0.3 (clang-1103.0.32.62)]
```
I tested with version 2.10.4 and the bug is there too. I also tested with 2.9.17 and the bug is *not* present in that version.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/Users/bryan.murdock/.ansible.cfg) = True
DEFAULT_FORKS(/Users/bryan.murdock/.ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/bryan.murdock/.ansible.cfg) = ['/Users/bryan.murdock/work/soc-common/scripts/ansible-hosts']
DEFAULT_TIMEOUT(/Users/bryan.murdock/.ansible.cfg) = 120
DEPRECATION_WARNINGS(/Users/bryan.murdock/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
running ansible-playbook on Mac OS 10.15.7, connecting to centos 7.8.2003
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
If you run the below task in a playbook using ansible 2.10, it takes a really long time; with ansible 2.9 it completes very quickly. You can set recurse to no and it will complete very quickly (but it won't find the files I need). If you set recurse to yes and depth to 1, it should behave the same as recurse: no, but it doesn't: it still takes a really long time.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: find .bashrc files
find:
paths: /home/
patterns: .bashrc
file_type: file
hidden: yes
recurse: yes
depth: 2
register: bashrc
```
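For context, here is a minimal standalone sketch (my own illustration, not the module's actual code) of how a depth limit can prune `os.walk` in place so the traversal never descends past the requested level, which is what keeps a depth-limited find fast:
```python
import os

def walk_with_depth(top, max_depth):
    """Like os.walk, but never descend more than max_depth levels below top."""
    top = top.rstrip(os.path.sep)
    base = top.count(os.path.sep)
    for root, dirs, files in os.walk(top):
        yield root, dirs, files
        # Depth of 'root' relative to 'top'; clearing 'dirs' in place is the
        # documented way to stop os.walk from descending any further.
        if root.count(os.path.sep) - base + 1 >= max_depth:
            del dirs[:]

# With max_depth=2 this visits /home and its direct children, nothing deeper,
# no matter how large the tree underneath is.
for root, dirs, files in walk_with_depth('/home', 2):
    print(root, files)
```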
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
find completes quickly
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
find takes a really long time.
<!--- Paste verbatim command output between quotes -->
There is no interesting or helpful output
|
https://github.com/ansible/ansible/issues/73627
|
https://github.com/ansible/ansible/pull/73718
|
950ab74758a6014639236612594118b2b6f4751e
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
| 2021-02-16T18:46:21Z |
python
| 2021-02-25T19:32:49Z |
changelogs/fragments/73718-find-dir-depth-traversal.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,627 |
Module find not respecting depth parameter in 2.10
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible.builtin.find
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.5
config file = /Users/bryan.murdock/.ansible.cfg
configured module search path = ['/Users/bryan.murdock/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bryan.murdock/Library/Python/3.7/lib/python/site-packages/ansible
executable location = /Users/bryan.murdock/Library/Python/3.7/bin/ansible
python version = 3.7.3 (default, Apr 24 2020, 18:51:23) [Clang 11.0.3 (clang-1103.0.32.62)]
```
I tested with version 2.10.4 and the bug is there too. I also tested with 2.9.17 and the bug is *not* present in that version.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/Users/bryan.murdock/.ansible.cfg) = True
DEFAULT_FORKS(/Users/bryan.murdock/.ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/bryan.murdock/.ansible.cfg) = ['/Users/bryan.murdock/work/soc-common/scripts/ansible-hosts']
DEFAULT_TIMEOUT(/Users/bryan.murdock/.ansible.cfg) = 120
DEPRECATION_WARNINGS(/Users/bryan.murdock/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
running ansible-playbook on Mac OS 10.15.7, connecting to centos 7.8.2003
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
If you run the below task in a playbook using ansible 2.10, it takes a really long time; with ansible 2.9 it completes very quickly. You can set recurse to no and it will complete very quickly (but it won't find the files I need). If you set recurse to yes and depth to 1, it should behave the same as recurse: no, but it doesn't: it still takes a really long time.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: find .bashrc files
find:
paths: /home/
patterns: .bashrc
file_type: file
hidden: yes
recurse: yes
depth: 2
register: bashrc
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
find completes quickly
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
find takes a really long time.
<!--- Paste verbatim command output between quotes -->
There is no interesting or helpful output
|
https://github.com/ansible/ansible/issues/73627
|
https://github.com/ansible/ansible/pull/73718
|
950ab74758a6014639236612594118b2b6f4751e
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
| 2021-02-16T18:46:21Z |
python
| 2021-02-25T19:32:49Z |
lib/ansible/modules/find.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Ruggero Marchei <[email protected]>
# Copyright: (c) 2015, Brian Coca <[email protected]>
# Copyright: (c) 2016-2017, Konstantin Shalygin <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: find
author: Brian Coca (@bcoca)
version_added: "2.0"
short_description: Return a list of files based on specific criteria
description:
- Return a list of files based on specific criteria. Multiple criteria are AND'd together.
- For Windows targets, use the M(ansible.windows.win_find) module instead.
options:
age:
description:
- Select files whose age is equal to or greater than the specified time.
- Use a negative age to find files equal to or less than the specified time.
- You can choose seconds, minutes, hours, days, or weeks by specifying the
first letter of any of those words (e.g., "1w").
type: str
patterns:
default: '*'
description:
      - One or more (shell or regex) patterns, whose type is controlled by the C(use_regex) option.
- The patterns restrict the list of files to be returned to those whose basenames match at
least one of the patterns specified. Multiple patterns can be specified using a list.
- The pattern is matched against the file base name, excluding the directory.
- When using regexen, the pattern MUST match the ENTIRE file name, not just parts of it. So
if you are looking to match all files ending in .default, you'd need to use '.*\.default'
as a regexp and not just '\.default'.
- This parameter expects a list, which can be either comma separated or YAML. If any of the
patterns contain a comma, make sure to put them in a list to avoid splitting the patterns
in undesirable ways.
type: list
aliases: [ pattern ]
elements: str
excludes:
description:
      - One or more (shell or regex) patterns, whose type is controlled by the C(use_regex) option.
- Items whose basenames match an C(excludes) pattern are culled from C(patterns) matches.
Multiple patterns can be specified using a list.
type: list
aliases: [ exclude ]
version_added: "2.5"
elements: str
contains:
description:
- A regular expression or pattern which should be matched against the file content.
- Works only when I(file_type) is C(file).
type: str
read_whole_file:
description:
- When doing a C(contains) search, determines whether the whole file should be read into
memory or if the regex should be applied to the file line-by-line.
- Setting this to C(true) can have performance and memory implications for large files.
- This uses C(re.search()) instead of C(re.match()).
type: bool
default: false
version_added: "2.11"
paths:
description:
- List of paths of directories to search. All paths must be fully qualified.
type: list
required: true
aliases: [ name, path ]
elements: str
file_type:
description:
- Type of file to select.
- The 'link' and 'any' choices were added in Ansible 2.3.
type: str
choices: [ any, directory, file, link ]
default: file
recurse:
description:
- If target is a directory, recursively descend into the directory looking for files.
type: bool
default: no
size:
description:
- Select files whose size is equal to or greater than the specified size.
- Use a negative size to find files equal to or less than the specified size.
- Unqualified values are in bytes but b, k, m, g, and t can be appended to specify
bytes, kilobytes, megabytes, gigabytes, and terabytes, respectively.
- Size is not evaluated for directories.
type: str
age_stamp:
description:
- Choose the file property against which we compare age.
type: str
choices: [ atime, ctime, mtime ]
default: mtime
hidden:
description:
- Set this to C(yes) to include hidden files, otherwise they will be ignored.
type: bool
default: no
follow:
description:
- Set this to C(yes) to follow symlinks in path for systems with python 2.6+.
type: bool
default: no
get_checksum:
description:
- Set this to C(yes) to retrieve a file's SHA1 checksum.
type: bool
default: no
use_regex:
description:
- If C(no), the patterns are file globs (shell).
- If C(yes), they are python regexes.
type: bool
default: no
depth:
description:
- Set the maximum number of levels to descend into.
- Setting recurse to C(no) will override this value, which is effectively depth 1.
- Default is unlimited depth.
type: int
version_added: "2.6"
seealso:
- module: ansible.windows.win_find
'''
EXAMPLES = r'''
- name: Recursively find /tmp files older than 2 days
find:
paths: /tmp
age: 2d
recurse: yes
- name: Recursively find /tmp files older than 4 weeks and equal or greater than 1 megabyte
find:
paths: /tmp
age: 4w
size: 1m
recurse: yes
- name: Recursively find /var/tmp files with last access time greater than 3600 seconds
find:
paths: /var/tmp
age: 3600
age_stamp: atime
recurse: yes
- name: Find /var/log files equal or greater than 10 megabytes ending with .old or .log.gz
find:
paths: /var/log
patterns: '*.old,*.log.gz'
size: 10m
# Note that YAML double quotes require escaping backslashes but yaml single quotes do not.
- name: Find /var/log files equal or greater than 10 megabytes ending with .old or .log.gz via regex
find:
paths: /var/log
patterns: "^.*?\\.(?:old|log\\.gz)$"
size: 10m
use_regex: yes
- name: Find /var/log all directories, exclude nginx and mysql
find:
paths: /var/log
recurse: no
file_type: directory
excludes: 'nginx,mysql'
# When using patterns that contain a comma, make sure they are formatted as lists to avoid splitting the pattern
- name: Use a single pattern that contains a comma formatted as a list
find:
paths: /var/log
file_type: file
use_regex: yes
patterns: ['^_[0-9]{2,4}_.*.log$']
- name: Use multiple patterns that contain a comma formatted as a YAML list
find:
paths: /var/log
file_type: file
use_regex: yes
patterns:
- '^_[0-9]{2,4}_.*.log$'
- '^[a-z]{1,5}_.*log$'
'''
RETURN = r'''
files:
description: All matches found with the specified criteria (see stat module for full output of each dictionary)
returned: success
type: list
sample: [
{ path: "/var/tmp/test1",
mode: "0644",
"...": "...",
checksum: 16fac7be61a6e4591a33ef4b729c5c3302307523
},
{ path: "/var/tmp/test2",
"...": "..."
},
]
matched:
description: Number of matches
returned: success
type: int
sample: 14
examined:
description: Number of filesystem objects looked at
returned: success
type: int
sample: 34
'''
import fnmatch
import grp
import os
import pwd
import re
import stat
import time
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.basic import AnsibleModule
def pfilter(f, patterns=None, excludes=None, use_regex=False):
    '''filter using glob or regex patterns'''
if not patterns and not excludes:
return True
if use_regex:
if patterns and not excludes:
for p in patterns:
r = re.compile(p)
if r.match(f):
return True
elif patterns and excludes:
for p in patterns:
r = re.compile(p)
if r.match(f):
for e in excludes:
r = re.compile(e)
if r.match(f):
return False
return True
else:
if patterns and not excludes:
for p in patterns:
if fnmatch.fnmatch(f, p):
return True
elif patterns and excludes:
for p in patterns:
if fnmatch.fnmatch(f, p):
for e in excludes:
if fnmatch.fnmatch(f, e):
return False
return True
return False
def agefilter(st, now, age, timestamp):
    '''filter files older than age (or newer, when age is negative)'''
    if age is None:
        return True
    elif age >= 0 and now - getattr(st, "st_%s" % timestamp) >= abs(age):
        return True
    elif age < 0 and now - getattr(st, "st_%s" % timestamp) <= abs(age):
return True
return False
def sizefilter(st, size):
    '''filter files greater than size (or smaller, when size is negative)'''
if size is None:
return True
elif size >= 0 and st.st_size >= abs(size):
return True
elif size < 0 and st.st_size <= abs(size):
return True
return False
def contentfilter(fsname, pattern, read_whole_file=False):
"""
Filter files which contain the given expression
:arg fsname: Filename to scan for lines matching a pattern
:arg pattern: Pattern to look for inside of line
:arg read_whole_file: If true, the whole file is read into memory before the regex is applied against it. Otherwise, the regex is applied line-by-line.
:rtype: bool
:returns: True if one of the lines in fsname matches the pattern. Otherwise False
"""
if pattern is None:
return True
prog = re.compile(pattern)
try:
with open(fsname) as f:
if read_whole_file:
return bool(prog.search(f.read()))
for line in f:
if prog.match(line):
return True
except Exception:
pass
return False
def statinfo(st):
pw_name = ""
gr_name = ""
try: # user data
pw_name = pwd.getpwuid(st.st_uid).pw_name
except Exception:
pass
try: # group data
gr_name = grp.getgrgid(st.st_gid).gr_name
except Exception:
pass
return {
'mode': "%04o" % stat.S_IMODE(st.st_mode),
'isdir': stat.S_ISDIR(st.st_mode),
'ischr': stat.S_ISCHR(st.st_mode),
'isblk': stat.S_ISBLK(st.st_mode),
'isreg': stat.S_ISREG(st.st_mode),
'isfifo': stat.S_ISFIFO(st.st_mode),
'islnk': stat.S_ISLNK(st.st_mode),
'issock': stat.S_ISSOCK(st.st_mode),
'uid': st.st_uid,
'gid': st.st_gid,
'size': st.st_size,
'inode': st.st_ino,
'dev': st.st_dev,
'nlink': st.st_nlink,
'atime': st.st_atime,
'mtime': st.st_mtime,
'ctime': st.st_ctime,
'gr_name': gr_name,
'pw_name': pw_name,
'wusr': bool(st.st_mode & stat.S_IWUSR),
'rusr': bool(st.st_mode & stat.S_IRUSR),
'xusr': bool(st.st_mode & stat.S_IXUSR),
'wgrp': bool(st.st_mode & stat.S_IWGRP),
'rgrp': bool(st.st_mode & stat.S_IRGRP),
'xgrp': bool(st.st_mode & stat.S_IXGRP),
'woth': bool(st.st_mode & stat.S_IWOTH),
'roth': bool(st.st_mode & stat.S_IROTH),
'xoth': bool(st.st_mode & stat.S_IXOTH),
'isuid': bool(st.st_mode & stat.S_ISUID),
'isgid': bool(st.st_mode & stat.S_ISGID),
}
def main():
module = AnsibleModule(
argument_spec=dict(
paths=dict(type='list', required=True, aliases=['name', 'path'], elements='str'),
patterns=dict(type='list', default=['*'], aliases=['pattern'], elements='str'),
excludes=dict(type='list', aliases=['exclude'], elements='str'),
contains=dict(type='str'),
read_whole_file=dict(type='bool', default=False),
file_type=dict(type='str', default="file", choices=['any', 'directory', 'file', 'link']),
age=dict(type='str'),
age_stamp=dict(type='str', default="mtime", choices=['atime', 'ctime', 'mtime']),
size=dict(type='str'),
recurse=dict(type='bool', default=False),
hidden=dict(type='bool', default=False),
follow=dict(type='bool', default=False),
get_checksum=dict(type='bool', default=False),
use_regex=dict(type='bool', default=False),
depth=dict(type='int'),
),
supports_check_mode=True,
)
params = module.params
filelist = []
if params['age'] is None:
age = None
else:
# convert age to seconds:
m = re.match(r"^(-?\d+)(s|m|h|d|w)?$", params['age'].lower())
seconds_per_unit = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}
if m:
age = int(m.group(1)) * seconds_per_unit.get(m.group(2), 1)
else:
module.fail_json(age=params['age'], msg="failed to process age")
if params['size'] is None:
size = None
else:
# convert size to bytes:
m = re.match(r"^(-?\d+)(b|k|m|g|t)?$", params['size'].lower())
bytes_per_unit = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}
if m:
size = int(m.group(1)) * bytes_per_unit.get(m.group(2), 1)
else:
module.fail_json(size=params['size'], msg="failed to process size")
now = time.time()
msg = ''
looked = 0
for npath in params['paths']:
npath = os.path.expanduser(os.path.expandvars(npath))
try:
if not os.path.isdir(npath):
raise Exception("'%s' is not a directory" % to_native(npath))
for root, dirs, files in os.walk(npath, followlinks=params['follow']):
looked = looked + len(files) + len(dirs)
for fsobj in (files + dirs):
fsname = os.path.normpath(os.path.join(root, fsobj))
if params['depth']:
wpath = npath.rstrip(os.path.sep) + os.path.sep
depth = int(fsname.count(os.path.sep)) - int(wpath.count(os.path.sep)) + 1
                        if depth > params['depth']:
                            # Empty the list used by os.walk to avoid traversing deeper than
                            # the requested depth; otherwise os.walk still descends the whole
                            # tree and depth only filters the results, which is very slow.
                            del dirs[:]
                            continue
if os.path.basename(fsname).startswith('.') and not params['hidden']:
continue
try:
st = os.lstat(fsname)
except (IOError, OSError) as e:
msg += "Skipped entry '%s' due to this access issue: %s\n" % (fsname, to_text(e))
continue
r = {'path': fsname}
if params['file_type'] == 'any':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
if stat.S_ISREG(st.st_mode) and params['get_checksum']:
r['checksum'] = module.sha1(fsname)
filelist.append(r)
elif stat.S_ISDIR(st.st_mode) and params['file_type'] == 'directory':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
filelist.append(r)
elif stat.S_ISREG(st.st_mode) and params['file_type'] == 'file':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and \
agefilter(st, now, age, params['age_stamp']) and \
sizefilter(st, size) and contentfilter(fsname, params['contains'], params['read_whole_file']):
r.update(statinfo(st))
if params['get_checksum']:
r['checksum'] = module.sha1(fsname)
filelist.append(r)
elif stat.S_ISLNK(st.st_mode) and params['file_type'] == 'link':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
filelist.append(r)
if not params['recurse']:
break
except Exception as e:
warn = "Skipped '%s' path due to this access issue: %s\n" % (npath, to_text(e))
module.warn(warn)
msg += warn
matched = len(filelist)
module.exit_json(files=filelist, changed=False, msg=msg, matched=matched, examined=looked)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,627 |
Module find not respecting depth parameter in 2.10
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible.builtin.find
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.5
config file = /Users/bryan.murdock/.ansible.cfg
configured module search path = ['/Users/bryan.murdock/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bryan.murdock/Library/Python/3.7/lib/python/site-packages/ansible
executable location = /Users/bryan.murdock/Library/Python/3.7/bin/ansible
python version = 3.7.3 (default, Apr 24 2020, 18:51:23) [Clang 11.0.3 (clang-1103.0.32.62)]
```
I tested with version 2.10.4 and the bug is there too. I also tested with 2.9.17 and the bug is *not* present in that version.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/Users/bryan.murdock/.ansible.cfg) = True
DEFAULT_FORKS(/Users/bryan.murdock/.ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/bryan.murdock/.ansible.cfg) = ['/Users/bryan.murdock/work/soc-common/scripts/ansible-hosts']
DEFAULT_TIMEOUT(/Users/bryan.murdock/.ansible.cfg) = 120
DEPRECATION_WARNINGS(/Users/bryan.murdock/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
running ansible-playbook on Mac OS 10.15.7, connecting to centos 7.8.2003
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
If you run the below task in a playbook using ansible 2.10, it takes a really long time; with ansible 2.9 it completes very quickly. You can set recurse to no and it will complete very quickly (but it won't find the files I need). If you set recurse to yes and depth to 1, it should behave the same as recurse: no, but it doesn't: it still takes a really long time.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: find .bashrc files
find:
paths: /home/
patterns: .bashrc
file_type: file
hidden: yes
recurse: yes
depth: 2
register: bashrc
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
find completes quickly
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
find takes a really long time.
<!--- Paste verbatim command output between quotes -->
There is no interesting or helpful output
|
https://github.com/ansible/ansible/issues/73627
|
https://github.com/ansible/ansible/pull/73718
|
950ab74758a6014639236612594118b2b6f4751e
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
| 2021-02-16T18:46:21Z |
python
| 2021-02-25T19:32:49Z |
test/integration/targets/find/tasks/main.yml
|
# Test code for the find module.
# (c) 2017, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- set_fact: output_dir_test={{output_dir}}/test_find
- name: make sure our testing sub-directory does not exist
file:
path: "{{ output_dir_test }}"
state: absent
- name: create our testing sub-directory
file:
path: "{{ output_dir_test }}"
state: directory
##
## find
##
- name: make some directories
file:
path: "{{ output_dir_test }}/{{ item }}"
state: directory
with_items:
- a/b/c/d
- e/f/g/h
- name: make some files
copy:
dest: "{{ output_dir_test }}/{{ item }}"
content: 'data'
with_items:
- a/1.txt
- a/b/2.jpg
- a/b/c/3
- a/b/c/d/4.xml
- e/5.json
- e/f/6.swp
- e/f/g/7.img
- e/f/g/h/8.ogg
- name: find the directories
find:
paths: "{{ output_dir_test }}"
file_type: directory
recurse: yes
register: find_test0
- debug: var=find_test0
- name: validate directory results
assert:
that:
- 'find_test0.changed is defined'
- 'find_test0.examined is defined'
- 'find_test0.files is defined'
- 'find_test0.matched is defined'
- 'find_test0.msg is defined'
- 'find_test0.matched == 8'
- 'find_test0.files | length == 8'
- name: find the xml and img files
find:
paths: "{{ output_dir_test }}"
file_type: file
patterns: "*.xml,*.img"
recurse: yes
register: find_test1
- debug: var=find_test1
- name: validate directory results
assert:
that:
- 'find_test1.matched == 2'
- 'find_test1.files | length == 2'
- name: find the xml file
find:
paths: "{{ output_dir_test }}"
patterns: "*.xml"
recurse: yes
register: find_test2
- debug: var=find_test2
- name: validate gr_name and pw_name are defined
assert:
that:
- 'find_test2.matched == 1'
- 'find_test2.files[0].pw_name is defined'
- 'find_test2.files[0].gr_name is defined'
- name: find the xml file with empty excludes
find:
paths: "{{ output_dir_test }}"
patterns: "*.xml"
recurse: yes
excludes: []
register: find_test3
- debug: var=find_test3
- name: validate gr_name and pw_name are defined
assert:
that:
- 'find_test3.matched == 1'
- 'find_test3.files[0].pw_name is defined'
- 'find_test3.files[0].gr_name is defined'
- name: Copy some files into the test dir
copy:
src: "{{ item }}"
dest: "{{ output_dir_test }}/{{ item }}"
mode: 0644
with_items:
- a.txt
- log.txt
- name: Ensure '$' only matches the true end of the file with read_whole_file, not a line
find:
paths: "{{ output_dir_test }}"
patterns: "*.txt"
contains: "KO$"
read_whole_file: true
register: whole_no_match
- debug: var=whole_no_match
- assert:
that:
- whole_no_match.matched == 0
- name: Match the end of the file successfully
find:
paths: "{{ output_dir_test }}"
patterns: "*.txt"
contains: "OK$"
read_whole_file: true
register: whole_match
- debug: var=whole_match
- assert:
that:
- whole_match.matched == 1
- name: When read_whole_file=False, $ should match an individual line
find:
paths: "{{ output_dir_test }}"
patterns: "*.txt"
contains: ".*KO$"
read_whole_file: false
register: match_end_of_line
- debug: var=match_end_of_line
- assert:
that:
- match_end_of_line.matched == 1
- name: When read_whole_file=True, match across line boundaries
find:
paths: "{{ output_dir_test }}"
patterns: "*.txt"
contains: "has\na few"
read_whole_file: true
register: match_line_boundaries
- debug: var=match_line_boundaries
- assert:
that:
- match_line_boundaries.matched == 1
- name: When read_whole_file=False, do not match across line boundaries
find:
paths: "{{ output_dir_test }}"
patterns: "*.txt"
contains: "has\na few"
read_whole_file: false
register: no_match_line_boundaries
- debug: var=no_match_line_boundaries
- assert:
that:
- no_match_line_boundaries.matched == 0
- block:
- set_fact:
mypath: /idontexist{{lookup('pipe', 'mktemp')}}
- find:
paths: '{{mypath}}'
patterns: '*'
register: failed_path
- assert:
that:
- failed_path.files == []
- failed_path.msg.startswith("Skipped '{{mypath}}' path due to this access issue")
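# A hedged regression sketch for the depth option discussed in the report
# above (my own illustration, not necessarily the test the linked PR added).
# The expected count assumes the tree created earlier in this file: dirs
# a/b/c/d and e/f/g/h, the eight numbered files, plus a.txt and log.txt.
- name: find everything at most 2 levels deep
  find:
    paths: "{{ output_dir_test }}"
    file_type: any
    recurse: yes
    depth: 2
  register: find_test_depth

- assert:
    that:
      # depth 1: a, e, a.txt, log.txt; depth 2: a/b, a/1.txt, e/f, e/5.json
      - find_test_depth.matched == 8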
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,672 |
Running commands on localhost hangs with sudo and pipelining since 2.10.6
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
After upgrading ansible-base from 2.10.5 to 2.10.6, I can no longer run sudo commands on localhost. I noticed that disabling SSH pipelining allows sudo commands to run again. The issue also affects the latest git version.
git bisect points to 3ef061bdc4610bbf213f70bc70976fdc3005e2cc, which is from #73281 (2.10), #73023 (devel)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
connection
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ~/.local/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ~/.local/bin/ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ANSIBLE_PIPELINING(/home/yen/tmp/ansible-bug/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH_DIR(env: ANSIBLE_SSH_CONTROL_PATH_DIR) = /run/user/1000/ansible/cp
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Controller & target: Arch Linux, sudo 1.9.5.p2
Networking: should be irrelevant
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```
$ cat ansible.cfg
[ssh_connection]
pipelining = True
$ cat inventory
localhost ansible_connection=local
$ ~/.local/bin/ansible -i inventory localhost -m ping --become --ask-become-pass -vvvvvv
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I got a pong response
##### ACTUAL RESULTS
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
Using /home/yen/tmp/ansible-bug/ansible.cfg as config file
BECOME password:
setting up inventory plugins
host_list declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
script declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
auto declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/yen/tmp/ansible-bug/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /home/yen/tmp/ansible/lib/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
<localhost> Attempting python interpreter discovery
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: yen
<localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
<localhost> Python interpreter discovery fallback (unable to get Linux distribution/version info)
Using module file /home/yen/tmp/ansible/lib/ansible/modules/ping.py
Pipelining is enabled.
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=gocqypxznauqdyyqcjlbajubbcpgfkxz] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gocqypxznauqdyyqcjlbajubbcpgfkxz ; /usr/bin/python'"'"' && sleep 0'
```
And then hangs forever.
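To make the failure mode concrete, here is a minimal standalone sketch (my own illustration, not Ansible's code) of the stdin contention: a child standing in for `sudo -S ... python -` reads a password line from stdin first, and only then runs the rest of stdin as the "module". If the password line never arrives while stdin is held open, the child blocks on the password read, which matches the hang after the EXEC line above:
```python
import subprocess
import sys

# Child stands in for "sudo -S ... python -": the first stdin line is treated
# as the password, the remainder is executed as the pipelined module source.
child_code = (
    "import sys\n"
    "password = sys.stdin.readline().strip()\n"
    "exec(sys.stdin.read())\n"
)

module_source = "print('BECOME-SUCCESS'); print('pong')\n"

# Working order: password first, then the pipelined module payload.
proc = subprocess.run(
    [sys.executable, '-c', child_code],
    input=('hunter2\n' + module_source).encode(),
    capture_output=True,
)
print(proc.stdout.decode())  # BECOME-SUCCESS then pong
```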
|
https://github.com/ansible/ansible/issues/73672
|
https://github.com/ansible/ansible/pull/73688
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
|
96905120698e3118d8bafaee5ebe8f83d2bbd607
| 2021-02-21T09:44:55Z |
python
| 2021-02-25T20:08:11Z |
changelogs/fragments/pipelinig_to_plugins.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,672 |
Running commands on localhost hangs with sudo and pipelining since 2.10.6
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
After upgrading ansible-base from 2.10.5 to 2.10.6, I can no longer run sudo commands on localhost. I noticed that disabling SSH pipelining allows sudo commands to run again. The issue also affects the latest git version.
git bisect points to 3ef061bdc4610bbf213f70bc70976fdc3005e2cc, which is from #73281 (2.10), #73023 (devel)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
connection
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ~/.local/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ~/.local/bin/ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ANSIBLE_PIPELINING(/home/yen/tmp/ansible-bug/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH_DIR(env: ANSIBLE_SSH_CONTROL_PATH_DIR) = /run/user/1000/ansible/cp
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Controller & target: Arch Linux, sudo 1.9.5.p2
Networking: should be irrelevant
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```
$ cat ansible.cfg
[ssh_connection]
pipelining = True
$ cat inventory
localhost ansible_connection=local
$ ~/.local/bin/ansible -i inventory localhost -m ping --become --ask-become-pass -vvvvvv
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I got a pong response
##### ACTUAL RESULTS
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
Using /home/yen/tmp/ansible-bug/ansible.cfg as config file
BECOME password:
setting up inventory plugins
host_list declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
script declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
auto declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/yen/tmp/ansible-bug/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /home/yen/tmp/ansible/lib/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
<localhost> Attempting python interpreter discovery
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: yen
<localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
<localhost> Python interpreter discovery fallback (unable to get Linux distribution/version info)
Using module file /home/yen/tmp/ansible/lib/ansible/modules/ping.py
Pipelining is enabled.
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=gocqypxznauqdyyqcjlbajubbcpgfkxz] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gocqypxznauqdyyqcjlbajubbcpgfkxz ; /usr/bin/python'"'"' && sleep 0'
```
And then hangs forever.
|
https://github.com/ansible/ansible/issues/73672
|
https://github.com/ansible/ansible/pull/73688
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
|
96905120698e3118d8bafaee5ebe8f83d2bbd607
| 2021-02-21T09:44:55Z |
python
| 2021-02-25T20:08:11Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world-readable temporary files
deprecated:
why: moved to a per plugin approach that is more flexible
version: "2.14"
alternatives: mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
default: False
description:
- This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task.
- It is useful when becoming an unprivileged user.
env: []
ini:
- {key: allow_world_readable_tmpfiles, section: defaults}
type: boolean
yaml: {key: defaults.allow_world_readable_tmpfiles}
version_added: "2.1"
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
  description: Accept list of cowsay templates that are 'safe' to use; set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
  description: This option forces color mode even when running without a TTY or when the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
type: boolean
yaml: {key: plugins.connection.pipelining}
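  # Pipelining writes the module source to the connection's stdin, while
  # become methods such as 'sudo -S' read the password from that same stdin,
  # so the two can deadlock if the password is not written first -- the hang
  # described in the report above.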
ANSIBLE_SSH_ARGS:
# TODO: move to ssh plugin
default: -C -o ControlMaster=auto -o ControlPersist=60s
description:
- If set, this will override the Ansible default ssh arguments.
- In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate.
- Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used.
env: [{name: ANSIBLE_SSH_ARGS}]
ini:
- {key: ssh_args, section: ssh_connection}
yaml: {key: ssh_connection.ssh_args}
ANSIBLE_SSH_CONTROL_PATH:
# TODO: move to ssh plugin
default: null
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env: [{name: ANSIBLE_SSH_CONTROL_PATH}]
ini:
- {key: control_path, section: ssh_connection}
yaml: {key: ssh_connection.control_path}
ANSIBLE_SSH_CONTROL_PATH_DIR:
# TODO: move to ssh plugin
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: ssh_connection}
yaml: {key: ssh_connection.control_path_dir}
ANSIBLE_SSH_EXECUTABLE:
# TODO: move to ssh plugin, note that ssh_utils refs this and needs to be updated if removed
default: ssh
description:
- This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
yaml: {key: ssh_connection.ssh_executable}
version_added: "2.2"
ANSIBLE_SSH_RETRIES:
# TODO: move to ssh plugin
default: 0
description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE'
env: [{name: ANSIBLE_SSH_RETRIES}]
ini:
- {key: retries, section: ssh_connection}
type: integer
yaml: {key: ssh_connection.retries}
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
  description: Sets the default value for the any_errors_fatal keyword; if True, task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
  description: This setting controls if become is skipped when the remote user and the become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: enable/disable scanning sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (via the collection metadata key
`requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore`
skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution.
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: [error, warning, ignore]
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
  description: Defines the color to use when emitting verbose messages. i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
name: Allow bare variable evaluation in conditionals
default: False
type: boolean
description:
- With this setting on (True), running conditional evaluation 'var' is treated differently than 'var.subkey' as the first is evaluated
directly while the second goes through the Jinja2 parser. But 'false' strings in 'var' get evaluated as booleans.
- With this setting off they both evaluate the same but in cases in which 'var' was 'false' (a string) it won't get evaluated as a boolean anymore.
- Currently this setting defaults to 'True' but will soon change to 'False' and the setting itself will be removed in the future.
- Expect that this setting eventually will be deprecated after 2.12
env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
ini:
- {key: conditional_bare_variables, section: defaults}
version_added: "2.8"
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will issue a warning when a warning is received from a task action (module or action plugin)
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: False
description:
- Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
- As of version 2.11, this is disabled by default.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
deprecated:
why: the command warnings feature is being removed
version: "2.14"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backwards-compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine late
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
      If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
CALLABLE_ACCEPT_LIST:
name: Template 'callable' accept list
default: []
  description: Accept list of callable methods to be made available to template evaluation
env:
- name: ANSIBLE_CALLABLE_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLABLE_ENABLED'
- name: ANSIBLE_CALLABLE_ENABLED
version_added: '2.11'
ini:
- key: callable_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callable_enabled'
- key: callable_enabled
section: defaults
version_added: '2.11'
type: list
CONTROLLER_PYTHON_WARNING:
name: Running Older than Python 3.8 Warning
default: True
description: Toggle to control showing warnings related to running a Python version
older than Python 3.8 on the controller
env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}]
ini:
- {key: controller_python_warning, section: defaults}
type: boolean
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callback_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
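# Illustrative example (assumes the named callbacks are installed and opt-in): enabling
# two callbacks via ansible.cfg:
#   [defaults]
#   callbacks_enabled = timer, profile_tasks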
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
default: ~
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering."
- "If not set, it will fallback to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module."
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
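# Illustrative example (the value is arbitrary): raising the fork count for a large inventory:
#   [defaults]
#   forks = 20
# or via the environment: export ANSIBLE_FORKS=20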
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
- "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
- "The 'smart' value means each new host that has no facts discovered will be scanned,
but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices: ['smart', 'explicit', 'implicit']
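# Illustrative example: pairing 'smart' gathering with a fact cache (one of several cache
# plugins; the path is an example) so facts survive across runs:
#   [defaults]
#   gathering = smart
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_facts_cache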
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
default: ['all']
description:
- Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
default: 10
description:
- Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
name: Make handler M(ansible.builtin.include) static
default: False
description:
- "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'."
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
ini:
- {key: handler_includes_static, section: defaults}
type: boolean
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
    alternatives: none, as it's already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
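# A minimal sketch of the recommended alternative to 'merge' (variable names hypothetical):
# merge the dictionaries explicitly with the combine filter instead of changing this setting:
#   merged_settings: "{{ base_settings | combine(env_settings, recursive=True) }}"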
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
    - Enables/disables the cleaning up of the temporary files Ansible uses to execute tasks on the remote target.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
  description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ''
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will disable newer
      style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
  description: What templating should return as a 'null' value. When not set, it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
default:
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SCP_IF_SSH:
# TODO: move to ssh plugin
default: smart
description:
- "Preferred method to use when transferring files over ssh."
- When set to smart, Ansible will try them until one succeeds or they all fail.
    - If set to True, it will force 'scp'; if False, it will use 'sftp'.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_SFTP_BATCH_MODE:
# TODO: move to ssh plugin
default: True
  description: Toggles the use of sftp batch mode for file transfers over ssh, which avoids interactive prompts and makes transfer failures detectable.
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: boolean
yaml: {key: ssh_connection.sftp_batch_mode}
DEFAULT_SSH_TRANSFER_METHOD:
# TODO: move to ssh plugin
default:
description: 'unused?'
# - "Preferred method to use when transferring files over ssh"
# - Setting to smart will try them until one succeeds or they all fail
#choices: ['sftp', 'scp', 'dd', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output, you can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
  description:
    - Whether or not to enable the task debugger, this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task is failed or unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
name: Task include static
default: False
description:
    - The `include` tasks can be static or dynamic; this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task.
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
ini:
- {key: task_includes_static, section: defaults}
type: boolean
version_added: "2.1"
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
    alternatives: None, as it's already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
  description: 'The label to use for the default vault id in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
default:
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
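# Illustrative example (ids and file/script names are examples): two vault ids tried in order:
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev.txt, prod@vault-client.sh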
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
      If you didn't, then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
      made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
      See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by adjusting this setting to 'ignore'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: ['warn', 'error', 'ignore']
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}]
ini:
- {key: connection_facts_modules, section: defaults}
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role or collection skeleton directory
default:
description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
HOST_KEY_CHECKING:
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices: ['warning', 'error', 'ignore']
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto_legacy
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The
fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed
later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default
value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases
that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the
      default behavior will change to that of ``auto`` in a future Ansible release).
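# Illustrative example: pinning the interpreter for a host group instead of relying on
# discovery (the path matches the RHEL 8 entry in the distro map below):
#   # group_vars/rhel8.yml
#   ansible_python_interpreter: /usr/libexec/platform-python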
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
centos: &rhelish
'6': /usr/bin/python
'8': /usr/libexec/platform-python
debian:
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
oracle: *rhelish
redhat: *rhelish
rhel: *rhelish
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- /usr/bin/python
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- python2.7
- python2.6
- /usr/libexec/platform-python
- /usr/bin/python3
- python
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
    - If 'never' it will allow the group name as-is but warn about the issue.
    - When 'ignore', it does the same as 'never', without issuing a warning.
    - When 'always' it will replace any invalid characters with '_' (underscore) and warn the user.
    - When 'silently', it does the same as 'always', without issuing a warning.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices: ['always', 'never', 'ignore', 'silently']
version_added: '2.8'
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description: Toggle to turn on inventory caching
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
  description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided, CACHE_PLUGIN can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
  description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided, CACHE_PLUGIN_CONNECTION can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
  description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided, CACHE_PLUGIN_PREFIX can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_facts
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
  description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided, CACHE_TIMEOUT can be used instead.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
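# Illustrative example tying the inventory cache settings above together (plugin and path
# are examples):
#   [inventory]
#   cache = yes
#   cache_plugin = jsonfile
#   cache_connection = /tmp/ansible_inventory_cache
#   cache_timeout = 7200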
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
  description: List of network platform families used to identify network modules, which receive network-specific handling.
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles, this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
  description: When enabled, the paramiko connection plugin will automatically add host keys for previously unknown hosts instead of failing.
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
  description: Toggles whether the paramiko connection plugin should automatically look for discoverable private key files in ``~/.ssh/``.
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
  description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
- The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory.
- The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory.
- The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices: [ top, bottom, all ]
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
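# Aside: a minimal sketch of what this rlimit adjustment amounts to; the value
# 1024 is an arbitrary illustration (the option's default of 0 means leaving
# the limit alone), and this is not the actual module-execution code:
#
#     import resource
#
#     soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
#     requested = 1024
#     if requested:
#         try:
#             resource.setrlimit(resource.RLIMIT_NOFILE, (requested, hard))
#         except ValueError:  # requested value exceeds the hard limit
#             resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))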
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
- Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
- Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices: ['demand', 'start']
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays. Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
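# Aside: a generic sketch of the poll-then-terminate pattern the two options
# above control. The worker objects are hypothetical (anything with is_alive()
# and terminate(), e.g. multiprocessing.Process); this is not the actual
# TaskQueueManager shutdown code:
#
#     import time
#
#     def shutdown(workers, poll_count=10, poll_delay=0.1):
#         for _ in range(poll_count):
#             if not any(w.is_alive() for w in workers):
#                 return
#             time.sleep(poll_delay)
#         for w in workers:  # poll limit reached: force-terminate stragglers
#             if w.is_alive():
#                 w.terminate()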
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Whitelist for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump
host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
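# Aside: the conversions this option warns about can be reproduced with PyYAML
# directly (assuming PyYAML is installed); full quoting preserves the string:
#
#     import yaml
#
#     yaml.safe_load("value: 1.00")    # {'value': 1.0}    float, not '1.00'
#     yaml.safe_load("value: yes")     # {'value': True}   bool, not 'yes'
#     yaml.safe_load("value: '1.00'")  # {'value': '1.00'} string preserved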
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,672 |
Running commands on localhost hangs with sudo and pipelining since 2.10.6
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
After upgrading ansible-base from 2.10.5 to 2.10.6, I can no longer run sudo commands on localhost. I noticed that disabling SSH pipelining allows sudo commands to run again. The issue also affects the latest git version.
git bisect points to 3ef061bdc4610bbf213f70bc70976fdc3005e2cc, which is from #73281 (2.10), #73023 (devel)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
connection
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ~/.local/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ~/.local/bin/ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ANSIBLE_PIPELINING(/home/yen/tmp/ansible-bug/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH_DIR(env: ANSIBLE_SSH_CONTROL_PATH_DIR) = /run/user/1000/ansible/cp
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Controller & target: Arch Linux, sudo 1.9.5.p2
Networking: should be irrelevant
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```
$ cat ansible.cfg
[ssh_connection]
pipelining = True
$ cat inventory
localhost ansible_connection=local
$ ~/.local/bin/ansible -i inventory localhost -m ping --become --ask-become-pass -vvvvvv
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I got a pong response
##### ACTUAL RESULTS
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
Using /home/yen/tmp/ansible-bug/ansible.cfg as config file
BECOME password:
setting up inventory plugins
host_list declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
script declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
auto declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/yen/tmp/ansible-bug/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /home/yen/tmp/ansible/lib/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
<localhost> Attempting python interpreter discovery
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: yen
<localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
<localhost> Python interpreter discovery fallback (unable to get Linux distribution/version info)
Using module file /home/yen/tmp/ansible/lib/ansible/modules/ping.py
Pipelining is enabled.
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=gocqypxznauqdyyqcjlbajubbcpgfkxz] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gocqypxznauqdyyqcjlbajubbcpgfkxz ; /usr/bin/python'"'"' && sleep 0'
```
And then hangs forever.
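For context, the hang involves the become-prompt wait loop in `lib/ansible/plugins/connection/local.py` (reproduced below). The following is a minimal, self-contained sketch of that pattern: a stand-in child process simulates sudo's password prompt so the example runs unprivileged, and it assumes a `python3` on PATH. It is an illustration, not the plugin's actual code.
```python
# Wait for a password prompt on the child's output, then answer it via a pty.
import os
import pty
import selectors
import subprocess

child = [
    "python3", "-c",
    "import sys; sys.stdout.write('password:'); sys.stdout.flush();"
    " sys.stdin.readline(); print('ok')",
]

master, slave = pty.openpty()  # sudo expects a tty for its prompt/answer
p = subprocess.Popen(child, stdin=slave,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
os.close(slave)  # the parent keeps only the master end

os.set_blocking(p.stdout.fileno(), False)
sel = selectors.DefaultSelector()
sel.register(p.stdout, selectors.EVENT_READ)

output = b""
while b"password:" not in output:
    events = sel.select(timeout=10)  # give up instead of hanging forever
    if not events:
        p.kill()
        raise TimeoutError("never saw a password prompt: %r" % output)
    chunk = p.stdout.read()
    if not chunk:  # output closed before any prompt appeared
        p.kill()
        raise RuntimeError("output closed while waiting for the prompt")
    output += chunk
sel.close()

os.write(master, b"secret\n")  # answer the prompt through the pty
os.set_blocking(p.stdout.fileno(), True)
print(p.communicate()[0])  # the rest of the child's output
os.close(master)
```
The `timeout` branch is what turns a silent hang like the one reported above into a diagnosable error.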
|
https://github.com/ansible/ansible/issues/73672
|
https://github.com/ansible/ansible/pull/73688
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
|
96905120698e3118d8bafaee5ebe8f83d2bbd607
| 2021-02-21T09:44:55Z |
python
| 2021-02-25T20:08:11Z |
lib/ansible/plugins/connection/local.py
|
# (c) 2012, Michael DeHaan <[email protected]>
# (c) 2015, 2017 Toshio Kuratomi <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: local
short_description: execute on controller
description:
- This connection plugin allows ansible to execute tasks on the Ansible 'controller' instead of on a remote host.
author: ansible (@core)
version_added: historical
notes:
- The remote user is ignored; the user with which the ansible CLI was executed is used instead.
'''
import os
import pty
import shutil
import subprocess
import fcntl
import getpass
import ansible.constants as C
from ansible.errors import AnsibleError, AnsibleFileNotFound
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import text_type, binary_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.plugins.connection import ConnectionBase
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath
display = Display()
class Connection(ConnectionBase):
''' Local based connections '''
transport = 'local'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.cwd = None
self.default_user = getpass.getuser()
def _connect(self):
''' connect to the local host; nothing to do here '''
# Because we haven't made any remote connection we're running as
# the local user, rather than as whatever is configured in remote_user.
self._play_context.remote_user = self.default_user
if not self._connected:
display.vvv(u"ESTABLISH LOCAL CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self._play_context.remote_addr)
self._connected = True
return self
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the local host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.debug("in local.exec_command()")
executable = C.DEFAULT_EXECUTABLE.split()[0] if C.DEFAULT_EXECUTABLE else None
if not os.path.exists(to_bytes(executable, errors='surrogate_or_strict')):
raise AnsibleError("failed to find the executable specified %s."
" Please verify if the executable exists and re-try." % executable)
display.vvv(u"EXEC {0}".format(to_text(cmd)), host=self._play_context.remote_addr)
display.debug("opening command with Popen()")
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = map(to_bytes, cmd)
master = None
stdin = subprocess.PIPE
if sudoable and self.become and self.become.expect_prompt():
# Create a pty if sudoable for privilege escalation that needs it.
# Falls back to using a standard pipe if this fails, which may
# cause the command to fail in certain situations where we are escalating
# privileges or the command otherwise needs a pty.
try:
master, stdin = pty.openpty()
except (IOError, OSError) as e:
display.debug("Unable to open pty: %s" % to_native(e))
p = subprocess.Popen(
cmd,
shell=isinstance(cmd, (text_type, binary_type)),
executable=executable,
cwd=self.cwd,
stdin=stdin,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
# if we created a master, we can close the other half of the pty now
if master is not None:
os.close(stdin)
display.debug("done running command with Popen()")
if self.become and self.become.expect_prompt() and sudoable:
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK)
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
become_output = b''
try:
while not self.become.check_success(become_output) and not self.become.check_password_prompt(become_output):
events = selector.select(self._play_context.timeout)
if not events:
stdout, stderr = p.communicate()
raise AnsibleError('timeout waiting for privilege escalation password prompt:\n' + to_native(become_output))
for key, event in events:
if key.fileobj == p.stdout:
chunk = p.stdout.read()
elif key.fileobj == p.stderr:
chunk = p.stderr.read()
if not chunk:
stdout, stderr = p.communicate()
raise AnsibleError('privilege output closed while waiting for password prompt:\n' + to_native(become_output))
become_output += chunk
finally:
selector.close()
if not self.become.check_success(become_output):
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
os.write(master, to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
display.debug("getting output with communicate()")
stdout, stderr = p.communicate(in_data)
display.debug("done communicating")
# finally, close the other half of the pty, if it was created
if master:
os.close(master)
display.debug("done with local.exec_command()")
return (p.returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to local '''
super(Connection, self).put_file(in_path, out_path)
in_path = unfrackpath(in_path, basedir=self.cwd)
out_path = unfrackpath(out_path, basedir=self.cwd)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self._play_context.remote_addr)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
try:
shutil.copyfile(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict'))
except shutil.Error:
raise AnsibleError("failed to copy: {0} and {1} are the same".format(to_native(in_path), to_native(out_path)))
except IOError as e:
raise AnsibleError("failed to transfer file to {0}: {1}".format(to_native(out_path), to_native(e)))
def fetch_file(self, in_path, out_path):
''' fetch a file from local to local -- for compatibility '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self._play_context.remote_addr)
self.put_file(in_path, out_path)
def close(self):
''' terminate the connection; nothing to do here '''
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,672 |
Running commands on localhost hangs with sudo and pipelining since 2.10.6
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
After upgrading ansible-base from 2.10.5 to 2.10.6, I can no longer run sudo commands on localhost. I noticed that disabling SSH pipelining allows sudo commands to run again. The issue also affects the latest git version.
git bisect points to 3ef061bdc4610bbf213f70bc70976fdc3005e2cc, which is from #73281 (2.10), #73023 (devel)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
connection
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ~/.local/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ~/.local/bin/ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ANSIBLE_PIPELINING(/home/yen/tmp/ansible-bug/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH_DIR(env: ANSIBLE_SSH_CONTROL_PATH_DIR) = /run/user/1000/ansible/cp
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Controller & target: Arch Linux, sudo 1.9.5.p2
Networking: should be irrelevant
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```
$ cat ansible.cfg
[ssh_connection]
pipelining = True
$ cat inventory
localhost ansible_connection=local
$ ~/.local/bin/ansible -i inventory localhost -m ping --become --ask-become-pass -vvvvvv
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I got a pong response
##### ACTUAL RESULTS
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
Using /home/yen/tmp/ansible-bug/ansible.cfg as config file
BECOME password:
setting up inventory plugins
host_list declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
script declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
auto declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/yen/tmp/ansible-bug/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /home/yen/tmp/ansible/lib/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
<localhost> Attempting python interpreter discovery
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: yen
<localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
<localhost> Python interpreter discovery fallback (unable to get Linux distribution/version info)
Using module file /home/yen/tmp/ansible/lib/ansible/modules/ping.py
Pipelining is enabled.
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=gocqypxznauqdyyqcjlbajubbcpgfkxz] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gocqypxznauqdyyqcjlbajubbcpgfkxz ; /usr/bin/python'"'"' && sleep 0'
```
And then hangs forever.
|
https://github.com/ansible/ansible/issues/73672
|
https://github.com/ansible/ansible/pull/73688
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
|
96905120698e3118d8bafaee5ebe8f83d2bbd607
| 2021-02-21T09:44:55Z |
python
| 2021-02-25T20:08:11Z |
lib/ansible/plugins/connection/psrp.py
|
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
author: Ansible Core Team
name: psrp
short_description: Run tasks over Microsoft PowerShell Remoting Protocol
description:
- Run commands or put/fetch on a target via PSRP (WinRM plugin)
- This is similar to the I(winrm) connection plugin which uses the same
underlying transport but instead runs in a PowerShell interpreter.
version_added: "2.7"
requirements:
- pypsrp>=0.4.0 (Python library)
options:
# transport options
remote_addr:
description:
- The hostname or IP address of the remote host.
default: inventory_hostname
type: str
vars:
- name: ansible_host
- name: ansible_psrp_host
remote_user:
description:
- The user to log in as.
type: str
vars:
- name: ansible_user
- name: ansible_psrp_user
remote_password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
type: str
vars:
- name: ansible_password
- name: ansible_winrm_pass
- name: ansible_winrm_password
aliases:
- password # Needed for --ask-pass to come through on delegation
port:
description:
- The port for PSRP to connect on the remote target.
- Default is C(5986) if I(protocol) is not defined or is C(https),
otherwise the port is C(5985).
type: int
vars:
- name: ansible_port
- name: ansible_psrp_port
protocol:
description:
- Set the protocol to use for the connection.
- Default is C(https) if I(port) is not defined or I(port) is not C(5985).
choices:
- http
- https
type: str
vars:
- name: ansible_psrp_protocol
path:
description:
- The URI path to connect to.
type: str
vars:
- name: ansible_psrp_path
default: 'wsman'
auth:
description:
- The authentication protocol to use when authenticating the remote user.
- The default, C(negotiate), will attempt to use C(Kerberos) if it is
available and fall back to C(NTLM) if it isn't.
type: str
vars:
- name: ansible_psrp_auth
choices:
- basic
- certificate
- negotiate
- kerberos
- ntlm
- credssp
default: negotiate
cert_validation:
description:
- Whether to validate the remote server's certificate or not.
- Set to C(ignore) to not validate any certificates.
- I(ca_cert) can be set to the path of a PEM certificate chain to
use in the validation.
choices:
- validate
- ignore
default: validate
type: str
vars:
- name: ansible_psrp_cert_validation
ca_cert:
description:
- The path to a PEM certificate chain to use when validating the server's
certificate.
- This value is ignored if I(cert_validation) is set to C(ignore).
type: path
vars:
- name: ansible_psrp_cert_trust_path
- name: ansible_psrp_ca_cert
aliases: [ cert_trust_path ]
connection_timeout:
description:
- The connection timeout for making the request to the remote host.
- This is measured in seconds.
type: int
vars:
- name: ansible_psrp_connection_timeout
default: 30
read_timeout:
description:
- The read timeout for receiving data from the remote host.
- This value must always be greater than I(operation_timeout).
- This option requires pypsrp >= 0.3.
- This is measured in seconds.
type: int
vars:
- name: ansible_psrp_read_timeout
default: 30
version_added: '2.8'
reconnection_retries:
description:
- The number of retries on connection errors.
type: int
vars:
- name: ansible_psrp_reconnection_retries
default: 0
version_added: '2.8'
reconnection_backoff:
description:
- The backoff time to use in between reconnection attempts.
(First sleeps X, then sleeps 2*X, then sleeps 4*X, ...)
- This is measured in seconds.
- The C(ansible_psrp_reconnection_backoff) variable was added in Ansible
2.9.
type: int
vars:
- name: ansible_psrp_connection_backoff
- name: ansible_psrp_reconnection_backoff
default: 2
version_added: '2.8'
message_encryption:
description:
- Controls the message encryption settings; this is different from TLS
encryption when I(ansible_psrp_protocol) is C(https).
- Only the auth protocols C(negotiate), C(kerberos), C(ntlm), and
C(credssp) can do message encryption. The other authentication protocols
only support encryption when C(protocol) is set to C(https).
- C(auto) means message encryption is only used when not using
TLS/HTTPS.
- C(always) is the same as C(auto) but message encryption is always used
even when running over TLS/HTTPS.
- C(never) disables any encryption checks that are in place when running
over HTTP and disables any authentication encryption processes.
type: str
vars:
- name: ansible_psrp_message_encryption
choices:
- auto
- always
- never
default: auto
proxy:
description:
- Set the proxy URL to use when connecting to the remote host.
vars:
- name: ansible_psrp_proxy
type: str
ignore_proxy:
description:
- Will disable any environment proxy settings and connect directly to the
remote host.
- This option is ignored if C(proxy) is set.
vars:
- name: ansible_psrp_ignore_proxy
type: bool
default: 'no'
# auth options
certificate_key_pem:
description:
- The local path to an X509 certificate key to use with certificate auth.
type: path
vars:
- name: ansible_psrp_certificate_key_pem
certificate_pem:
description:
- The local path to an X509 certificate to use with certificate auth.
type: path
vars:
- name: ansible_psrp_certificate_pem
credssp_auth_mechanism:
description:
- The sub authentication mechanism to use with CredSSP auth.
- When C(auto), both Kerberos and NTLM are attempted, with Kerberos being
preferred.
type: str
choices:
- auto
- kerberos
- ntlm
default: auto
vars:
- name: ansible_psrp_credssp_auth_mechanism
credssp_disable_tlsv1_2:
description:
- Disables the use of TLSv1.2 on the CredSSP authentication channel.
- This should not be set to C(yes) unless dealing with a host that does not
have TLSv1.2.
default: no
type: bool
vars:
- name: ansible_psrp_credssp_disable_tlsv1_2
credssp_minimum_version:
description:
- The minimum CredSSP server authentication version that will be accepted.
- Set to C(5) to ensure the server has been patched and is not vulnerable
to CVE 2018-0886.
default: 2
type: int
vars:
- name: ansible_psrp_credssp_minimum_version
negotiate_delegate:
description:
- Allow the remote user the ability to delegate its credentials to another
server, i.e. credential delegation.
- Only valid when Kerberos was the negotiated auth or was explicitly set as
the authentication.
- Ignored when NTLM was the negotiated auth.
type: bool
vars:
- name: ansible_psrp_negotiate_delegate
negotiate_hostname_override:
description:
- Override the remote hostname when searching for the host in the Kerberos
lookup.
- This allows Ansible to connect over IP but authenticate with the remote
server using its DNS name.
- Only valid when Kerberos was the negotiated auth or was explicitly set as
the authentication.
- Ignored when NTLM was the negotiated auth.
type: str
vars:
- name: ansible_psrp_negotiate_hostname_override
negotiate_send_cbt:
description:
- Send the Channel Binding Token (CBT) structure when authenticating.
- CBT is used to provide extra protection against Man in the Middle C(MitM)
attacks by binding the outer transport channel to the auth channel.
- CBT is not used when using just C(HTTP), only C(HTTPS).
default: yes
type: bool
vars:
- name: ansible_psrp_negotiate_send_cbt
negotiate_service:
description:
- Override the service part of the SPN used during Kerberos authentication.
- Only valid when Kerberos was the negotiated auth or was explicitly set as
the authentication.
- Ignored when NTLM was the negotiated auth.
default: WSMAN
type: str
vars:
- name: ansible_psrp_negotiate_service
# protocol options
operation_timeout:
description:
- Sets the WSMan timeout for each operation.
- This is measured in seconds.
- This should not exceed the value for C(connection_timeout).
type: int
vars:
- name: ansible_psrp_operation_timeout
default: 20
max_envelope_size:
description:
- Sets the maximum size of each WSMan message sent to the remote host.
- This is measured in bytes.
- Defaults to C(150KiB) for compatibility with older hosts.
type: int
vars:
- name: ansible_psrp_max_envelope_size
default: 153600
configuration_name:
description:
- The name of the PowerShell configuration endpoint to connect to.
type: str
vars:
- name: ansible_psrp_configuration_name
default: Microsoft.PowerShell
"""
import base64
import json
import logging
import os
from ansible import constants as C
from ansible.errors import AnsibleConnectionFailure, AnsibleError
from ansible.errors import AnsibleFileNotFound
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.plugins.connection import ConnectionBase
from ansible.plugins.shell.powershell import _common_args
from ansible.utils.display import Display
from ansible.utils.hashing import sha1
HAS_PYPSRP = True
PYPSRP_IMP_ERR = None
try:
import pypsrp
from pypsrp.complex_objects import GenericComplexObject, PSInvocationState, RunspacePoolState
from pypsrp.exceptions import AuthenticationError, WinRMError
from pypsrp.host import PSHost, PSHostUserInterface
from pypsrp.powershell import PowerShell, RunspacePool
from pypsrp.shell import Process, SignalCode, WinRS
from pypsrp.wsman import WSMan, AUTH_KWARGS
from requests.exceptions import ConnectionError, ConnectTimeout
except ImportError as err:
HAS_PYPSRP = False
PYPSRP_IMP_ERR = err
NEWER_PYPSRP = True
try:
import pypsrp.pwsh_scripts
except ImportError:
NEWER_PYPSRP = False
display = Display()
class Connection(ConnectionBase):
transport = 'psrp'
module_implementation_preferences = ('.ps1', '.exe', '')
allow_executable = False
has_pipelining = True
allow_extras = True
def __init__(self, *args, **kwargs):
self.always_pipeline_modules = True
self.has_native_async = True
self.runspace = None
self.host = None
self._shell_type = 'powershell'
super(Connection, self).__init__(*args, **kwargs)
if not C.DEFAULT_DEBUG:
logging.getLogger('pypsrp').setLevel(logging.WARNING)
logging.getLogger('requests_credssp').setLevel(logging.INFO)
logging.getLogger('urllib3').setLevel(logging.INFO)
def _connect(self):
if not HAS_PYPSRP:
raise AnsibleError("pypsrp or dependencies are not installed: %s"
% to_native(PYPSRP_IMP_ERR))
super(Connection, self)._connect()
self._build_kwargs()
display.vvv("ESTABLISH PSRP CONNECTION FOR USER: %s ON PORT %s TO %s" %
(self._psrp_user, self._psrp_port, self._psrp_host),
host=self._psrp_host)
if not self.runspace:
connection = WSMan(**self._psrp_conn_kwargs)
# create our pseudo host to capture the exit code and host output
host_ui = PSHostUserInterface()
self.host = PSHost(None, None, False, "Ansible PSRP Host", None,
host_ui, None)
self.runspace = RunspacePool(
connection, host=self.host,
configuration_name=self._psrp_configuration_name
)
display.vvvvv(
"PSRP OPEN RUNSPACE: auth=%s configuration=%s endpoint=%s" %
(self._psrp_auth, self._psrp_configuration_name,
connection.transport.endpoint), host=self._psrp_host
)
try:
self.runspace.open()
except AuthenticationError as e:
raise AnsibleConnectionFailure("failed to authenticate with "
"the server: %s" % to_native(e))
except WinRMError as e:
raise AnsibleConnectionFailure(
"psrp connection failure during runspace open: %s"
% to_native(e)
)
except (ConnectionError, ConnectTimeout) as e:
raise AnsibleConnectionFailure(
"Failed to connect to the host via PSRP: %s"
% to_native(e)
)
self._connected = True
return self
def reset(self):
if not self._connected:
return
display.vvvvv("PSRP: Reset Connection", host=self._psrp_host)
self.runspace = None
self._connect()
def exec_command(self, cmd, in_data=None, sudoable=True):
super(Connection, self).exec_command(cmd, in_data=in_data,
sudoable=sudoable)
if cmd.startswith(" ".join(_common_args) + " -EncodedCommand"):
# This is a PowerShell script encoded by the shell plugin, we will
# decode the script and execute it in the runspace instead of
# starting a new interpreter to save on time
b_command = base64.b64decode(cmd.split(" ")[-1])
script = to_text(b_command, 'utf-16-le')
in_data = to_text(in_data, errors="surrogate_or_strict", nonstring="passthru")
if in_data and in_data.startswith(u"#!"):
# ANSIBALLZ wrapper, we need to get the interpreter and execute
# that as the script - note this won't work as basic.py relies
# on packages not available on Windows, once fixed we can enable
# this path
interpreter = to_native(in_data.splitlines()[0][2:])
# script = "$input | &'%s' -" % interpreter
# in_data = to_text(in_data)
raise AnsibleError("cannot run the interpreter '%s' on the psrp "
"connection plugin" % interpreter)
# call build_module_command to get the bootstrap wrapper text
bootstrap_wrapper = self._shell.build_module_command('', '', '')
if bootstrap_wrapper == cmd:
# Do not display to the user each invocation of the bootstrap wrapper
display.vvv("PSRP: EXEC (via pipeline wrapper)")
else:
display.vvv("PSRP: EXEC %s" % script, host=self._psrp_host)
else:
# In other cases we want to execute the cmd as the script. We add on the 'exit $LASTEXITCODE' to ensure the
# rc is propagated back to the connection plugin.
script = to_text(u"%s\nexit $LASTEXITCODE" % cmd)
display.vvv(u"PSRP: EXEC %s" % script, host=self._psrp_host)
rc, stdout, stderr = self._exec_psrp_script(script, in_data)
return rc, stdout, stderr
def put_file(self, in_path, out_path):
super(Connection, self).put_file(in_path, out_path)
out_path = self._shell._unquote(out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._psrp_host)
# The new method that uses PSRP directly relies on a feature added in pypsrp 0.4.0 (release 2019-09-19). In
# case someone still has an older version present we warn them asking to update their library to a newer
# release and fall back to the old WSMV shell.
if NEWER_PYPSRP:
rc, stdout, stderr, local_sha1 = self._put_file_new(in_path, out_path)
else:
display.deprecated("Older pypsrp library detected, please update to pypsrp>=0.4.0 to use the newer copy "
"method over PSRP.", version="2.13", collection_name='ansible.builtin')
rc, stdout, stderr, local_sha1 = self._put_file_old(in_path, out_path)
if rc != 0:
raise AnsibleError(to_native(stderr))
put_output = json.loads(to_text(stdout))
remote_sha1 = put_output.get("sha1")
if not remote_sha1:
raise AnsibleError("Remote sha1 was not returned, stdout: '%s', stderr: '%s'"
% (to_native(stdout), to_native(stderr)))
if not remote_sha1 == local_sha1:
raise AnsibleError("Remote sha1 hash %s does not match local hash %s"
% (to_native(remote_sha1), to_native(local_sha1)))
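# Aside: the verification above reduces to comparing hex SHA-1 digests of the
# local and remote copies; stand-alone, with hypothetical 'path' and
# 'remote_sha1' values, that is simply:
#
#     import hashlib
#
#     with open(path, 'rb') as f:
#         local_sha1 = hashlib.sha1(f.read()).hexdigest()
#     assert local_sha1 == remote_sha1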
def _put_file_old(self, in_path, out_path):
script = u'''begin {
$ErrorActionPreference = "Stop"
$ProgressPreference = 'SilentlyContinue'
$path = '%s'
$fd = [System.IO.File]::Create($path)
$algo = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create()
$bytes = @()
} process {
$bytes = [System.Convert]::FromBase64String($input)
$algo.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) > $null
$fd.Write($bytes, 0, $bytes.Length)
} end {
$fd.Close()
$algo.TransformFinalBlock($bytes, 0, 0) > $null
$hash = [System.BitConverter]::ToString($algo.Hash)
$hash = $hash.Replace("-", "").ToLowerInvariant()
Write-Output -InputObject "{`"sha1`":`"$hash`"}"
}''' % out_path
cmd_parts = self._shell._encode_script(script, as_list=True,
strict_mode=False,
preserve_rc=False)
b_in_path = to_bytes(in_path, errors='surrogate_or_strict')
if not os.path.exists(b_in_path):
raise AnsibleFileNotFound('file or module does not exist: "%s"'
% to_native(in_path))
in_size = os.path.getsize(b_in_path)
buffer_size = int(self.runspace.connection.max_payload_size / 4 * 3)
sha1_hash = sha1()
# copying files is faster when using the raw WinRM shell and not PSRP
# we will create a WinRS shell just for this process
# TODO: speed this up as there is overhead creating a shell for this
with WinRS(self.runspace.connection, codepage=65001) as shell:
process = Process(shell, cmd_parts[0], cmd_parts[1:])
process.begin_invoke()
offset = 0
with open(b_in_path, 'rb') as src_file:
for data in iter((lambda: src_file.read(buffer_size)), b""):
offset += len(data)
display.vvvvv("PSRP PUT %s to %s (offset=%d, size=%d" %
(in_path, out_path, offset, len(data)),
host=self._psrp_host)
b64_data = base64.b64encode(data) + b"\r\n"
process.send(b64_data, end=(src_file.tell() == in_size))
sha1_hash.update(data)
# the file was empty, return empty buffer
if offset == 0:
process.send(b"", end=True)
process.end_invoke()
process.signal(SignalCode.CTRL_C)
return process.rc, process.stdout, process.stderr, sha1_hash.hexdigest()
def _put_file_new(self, in_path, out_path):
copy_script = '''begin {
$ErrorActionPreference = "Stop"
$WarningPreference = "Continue"
$path = $MyInvocation.UnboundArguments[0]
$fd = [System.IO.File]::Create($path)
$algo = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create()
$bytes = @()
$bindingFlags = [System.Reflection.BindingFlags]'NonPublic, Instance'
Function Get-Property {
<#
.SYNOPSIS
Gets the private/internal property specified of the object passed in.
#>
Param (
[Parameter(Mandatory=$true, ValueFromPipeline=$true)]
[System.Object]
$Object,
[Parameter(Mandatory=$true, Position=1)]
[System.String]
$Name
)
$Object.GetType().GetProperty($Name, $bindingFlags).GetValue($Object, $null)
}
Function Set-Property {
<#
.SYNOPSIS
Sets the private/internal property specified on the object passed in.
#>
Param (
[Parameter(Mandatory=$true, ValueFromPipeline=$true)]
[System.Object]
$Object,
[Parameter(Mandatory=$true, Position=1)]
[System.String]
$Name,
[Parameter(Mandatory=$true, Position=2)]
[AllowNull()]
[System.Object]
$Value
)
$Object.GetType().GetProperty($Name, $bindingFlags).SetValue($Object, $Value, $null)
}
Function Get-Field {
<#
.SYNOPSIS
Gets the private/internal field specified of the object passed in.
#>
Param (
[Parameter(Mandatory=$true, ValueFromPipeline=$true)]
[System.Object]
$Object,
[Parameter(Mandatory=$true, Position=1)]
[System.String]
$Name
)
$Object.GetType().GetField($Name, $bindingFlags).GetValue($Object)
}
# MaximumAllowedMemory is required to be set so we can send input data that exceeds the limit on a PS
# Runspace. We use reflection to access/set this property as it is not accessible publicly. This is not ideal
# but works on all PowerShell versions I've tested with. We originally used WinRS to send the raw bytes to the
# host but this falls flat if someone is using a custom PS configuration name so this is a workaround. This
# isn't required for smaller files so if it fails we ignore the error and hope it wasn't needed.
# https://github.com/PowerShell/PowerShell/blob/c8e72d1e664b1ee04a14f226adf655cced24e5f0/src/System.Management.Automation/engine/serialization.cs#L325
try {
$Host | Get-Property 'ExternalHost' | `
Get-Field '_transportManager' | `
Get-Property 'Fragmentor' | `
Get-Property 'DeserializationContext' | `
Set-Property 'MaximumAllowedMemory' $null
} catch {}
}
process {
$bytes = [System.Convert]::FromBase64String($input)
$algo.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) > $null
$fd.Write($bytes, 0, $bytes.Length)
}
end {
$fd.Close()
$algo.TransformFinalBlock($bytes, 0, 0) > $null
$hash = [System.BitConverter]::ToString($algo.Hash).Replace('-', '').ToLowerInvariant()
Write-Output -InputObject "{`"sha1`":`"$hash`"}"
}
'''
# Get the buffer size of each fragment to send, subtract 82 for the fragment, message, and other header info
# fields that PSRP adds. Adjust to size of the base64 encoded bytes length.
buffer_size = int((self.runspace.connection.max_payload_size - 82) / 4 * 3)
sha1_hash = sha1()
b_in_path = to_bytes(in_path, errors='surrogate_or_strict')
if not os.path.exists(b_in_path):
raise AnsibleFileNotFound('file or module does not exist: "%s"' % to_native(in_path))
def read_gen():
offset = 0
with open(b_in_path, 'rb') as src_fd:
for b_data in iter((lambda: src_fd.read(buffer_size)), b""):
data_len = len(b_data)
offset += data_len
sha1_hash.update(b_data)
# PSRP technically supports sending raw bytes but that method requires a larger CLIXML message.
# Sending base64 is still more efficient here.
display.vvvvv("PSRP PUT %s to %s (offset=%d, size=%d" % (in_path, out_path, offset, data_len),
host=self._psrp_host)
b64_data = base64.b64encode(b_data)
yield [to_text(b64_data)]
if offset == 0: # empty file
yield [""]
rc, stdout, stderr = self._exec_psrp_script(copy_script, read_gen(), arguments=[out_path], force_stop=True)
return rc, stdout, stderr, sha1_hash.hexdigest()
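# Aside: the 3/4 factor above is the base64 expansion ratio (3 raw bytes
# encode to 4 characters), so reading limit // 4 * 3 raw bytes per fragment
# keeps each encoded chunk within the payload budget; a quick stand-alone check:
#
#     import base64
#
#     limit = 1024
#     chunk = b"x" * (limit // 4 * 3)  # 768 raw bytes
#     assert len(base64.b64encode(chunk)) == limit  # exactly 1024 characters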
def fetch_file(self, in_path, out_path):
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path),
host=self._psrp_host)
in_path = self._shell._unquote(in_path)
out_path = out_path.replace('\\', '/')
# because we are dealing with base64 data we need to get the max size
# of the bytes that the base64 size would equal
max_b64_size = int(self.runspace.connection.max_payload_size -
(self.runspace.connection.max_payload_size / 4 * 3))
buffer_size = max_b64_size - (max_b64_size % 1024)
# setup the file stream with read only mode
setup_script = '''$ErrorActionPreference = "Stop"
$path = '%s'
if (Test-Path -Path $path -PathType Leaf) {
$fs = New-Object -TypeName System.IO.FileStream -ArgumentList @(
$path,
[System.IO.FileMode]::Open,
[System.IO.FileAccess]::Read,
[System.IO.FileShare]::Read
)
$buffer_size = %d
} elseif (Test-Path -Path $path -PathType Container) {
Write-Output -InputObject "[DIR]"
} else {
Write-Error -Message "$path does not exist"
$host.SetShouldExit(1)
}''' % (self._shell._escape(in_path), buffer_size)
# read the file stream at the offset and return the b64 string
read_script = '''$ErrorActionPreference = "Stop"
$fs.Seek(%d, [System.IO.SeekOrigin]::Begin) > $null
$buffer = New-Object -TypeName byte[] -ArgumentList $buffer_size
$bytes_read = $fs.Read($buffer, 0, $buffer_size)
if ($bytes_read -gt 0) {
$bytes = $buffer[0..($bytes_read - 1)]
Write-Output -InputObject ([System.Convert]::ToBase64String($bytes))
}'''
# need to run the setup script outside of the local scope so the
# file stream stays active between fetch operations
rc, stdout, stderr = self._exec_psrp_script(setup_script,
use_local_scope=False,
force_stop=True)
if rc != 0:
raise AnsibleError("failed to setup file stream for fetch '%s': %s"
% (out_path, to_native(stderr)))
elif stdout.strip() == '[DIR]':
# to be consistent with other connection plugins, we assume the caller has created the target dir
return
b_out_path = to_bytes(out_path, errors='surrogate_or_strict')
# to be consistent with other connection plugins, we assume the caller has created the target dir
offset = 0
with open(b_out_path, 'wb') as out_file:
while True:
display.vvvvv("PSRP FETCH %s to %s (offset=%d" %
(in_path, out_path, offset), host=self._psrp_host)
rc, stdout, stderr = self._exec_psrp_script(read_script % offset, force_stop=True)
if rc != 0:
raise AnsibleError("failed to transfer file to '%s': %s"
% (out_path, to_native(stderr)))
data = base64.b64decode(stdout.strip())
out_file.write(data)
if len(data) < buffer_size:
break
offset += len(data)
rc, stdout, stderr = self._exec_psrp_script("$fs.Close()", force_stop=True)
if rc != 0:
display.warning("failed to close remote file stream of file "
"'%s': %s" % (in_path, to_native(stderr)))
def close(self):
if self.runspace and self.runspace.state == RunspacePoolState.OPENED:
display.vvvvv("PSRP CLOSE RUNSPACE: %s" % (self.runspace.id),
host=self._psrp_host)
self.runspace.close()
self.runspace = None
self._connected = False
def _build_kwargs(self):
self._psrp_host = self.get_option('remote_addr')
self._psrp_user = self.get_option('remote_user')
self._psrp_pass = self.get_option('remote_password')
protocol = self.get_option('protocol')
port = self.get_option('port')
if protocol is None and port is None:
protocol = 'https'
port = 5986
elif protocol is None:
protocol = 'https' if int(port) != 5985 else 'http'
elif port is None:
port = 5986 if protocol == 'https' else 5985
self._psrp_protocol = protocol
self._psrp_port = int(port)
self._psrp_path = self.get_option('path')
self._psrp_auth = self.get_option('auth')
# cert validation can either be a bool or a path to the cert
cert_validation = self.get_option('cert_validation')
cert_trust_path = self.get_option('ca_cert')
if cert_validation == 'ignore':
self._psrp_cert_validation = False
elif cert_trust_path is not None:
self._psrp_cert_validation = cert_trust_path
else:
self._psrp_cert_validation = True
self._psrp_connection_timeout = self.get_option('connection_timeout') # Can be None
self._psrp_read_timeout = self.get_option('read_timeout') # Can be None
self._psrp_message_encryption = self.get_option('message_encryption')
self._psrp_proxy = self.get_option('proxy')
self._psrp_ignore_proxy = boolean(self.get_option('ignore_proxy'))
self._psrp_operation_timeout = int(self.get_option('operation_timeout'))
self._psrp_max_envelope_size = int(self.get_option('max_envelope_size'))
self._psrp_configuration_name = self.get_option('configuration_name')
self._psrp_reconnection_retries = int(self.get_option('reconnection_retries'))
self._psrp_reconnection_backoff = float(self.get_option('reconnection_backoff'))
self._psrp_certificate_key_pem = self.get_option('certificate_key_pem')
self._psrp_certificate_pem = self.get_option('certificate_pem')
self._psrp_credssp_auth_mechanism = self.get_option('credssp_auth_mechanism')
self._psrp_credssp_disable_tlsv1_2 = self.get_option('credssp_disable_tlsv1_2')
self._psrp_credssp_minimum_version = self.get_option('credssp_minimum_version')
self._psrp_negotiate_send_cbt = self.get_option('negotiate_send_cbt')
self._psrp_negotiate_delegate = self.get_option('negotiate_delegate')
self._psrp_negotiate_hostname_override = self.get_option('negotiate_hostname_override')
self._psrp_negotiate_service = self.get_option('negotiate_service')
supported_args = []
for auth_kwarg in AUTH_KWARGS.values():
supported_args.extend(auth_kwarg)
extra_args = set([v.replace('ansible_psrp_', '') for v in
self.get_option('_extras')])
unsupported_args = extra_args.difference(supported_args)
for arg in unsupported_args:
display.warning("ansible_psrp_%s is unsupported by the current "
"psrp version installed" % arg)
self._psrp_conn_kwargs = dict(
server=self._psrp_host, port=self._psrp_port,
username=self._psrp_user, password=self._psrp_pass,
ssl=self._psrp_protocol == 'https', path=self._psrp_path,
auth=self._psrp_auth, cert_validation=self._psrp_cert_validation,
connection_timeout=self._psrp_connection_timeout,
encryption=self._psrp_message_encryption, proxy=self._psrp_proxy,
no_proxy=self._psrp_ignore_proxy,
max_envelope_size=self._psrp_max_envelope_size,
operation_timeout=self._psrp_operation_timeout,
certificate_key_pem=self._psrp_certificate_key_pem,
certificate_pem=self._psrp_certificate_pem,
credssp_auth_mechanism=self._psrp_credssp_auth_mechanism,
credssp_disable_tlsv1_2=self._psrp_credssp_disable_tlsv1_2,
credssp_minimum_version=self._psrp_credssp_minimum_version,
negotiate_send_cbt=self._psrp_negotiate_send_cbt,
negotiate_delegate=self._psrp_negotiate_delegate,
negotiate_hostname_override=self._psrp_negotiate_hostname_override,
negotiate_service=self._psrp_negotiate_service,
)
# Check if PSRP version supports newer read_timeout argument (needs pypsrp 0.3.0+)
if hasattr(pypsrp, 'FEATURES') and 'wsman_read_timeout' in pypsrp.FEATURES:
self._psrp_conn_kwargs['read_timeout'] = self._psrp_read_timeout
elif self._psrp_read_timeout is not None:
display.warning("ansible_psrp_read_timeout is unsupported by the current psrp version installed, "
"using ansible_psrp_connection_timeout value for read_timeout instead.")
# Check if PSRP version supports newer reconnection_retries argument (needs pypsrp 0.3.0+)
if hasattr(pypsrp, 'FEATURES') and 'wsman_reconnections' in pypsrp.FEATURES:
self._psrp_conn_kwargs['reconnection_retries'] = self._psrp_reconnection_retries
self._psrp_conn_kwargs['reconnection_backoff'] = self._psrp_reconnection_backoff
else:
if self._psrp_reconnection_retries is not None:
display.warning("ansible_psrp_reconnection_retries is unsupported by the current psrp version installed.")
if self._psrp_reconnection_backoff is not None:
display.warning("ansible_psrp_reconnection_backoff is unsupported by the current psrp version installed.")
# add in the extra args that were set
for arg in extra_args.intersection(supported_args):
option = self.get_option('_extras')['ansible_psrp_%s' % arg]
self._psrp_conn_kwargs[arg] = option
def _exec_psrp_script(self, script, input_data=None, use_local_scope=True, force_stop=False, arguments=None):
ps = PowerShell(self.runspace)
ps.add_script(script, use_local_scope=use_local_scope)
if arguments:
for arg in arguments:
ps.add_argument(arg)
ps.invoke(input=input_data)
rc, stdout, stderr = self._parse_pipeline_result(ps)
if force_stop:
# This is usually not needed because we close the Runspace after our exec and we skip the call to close the
# pipeline manually to save on some time. Set to True when running multiple exec calls in the same runspace.
# Current pypsrp versions raise an exception if the current state was not RUNNING. We manually set it so we
# can call stop without any issues.
ps.state = PSInvocationState.RUNNING
ps.stop()
return rc, stdout, stderr
def _parse_pipeline_result(self, pipeline):
"""
PSRP does not surface output in the same way as other protocols, so we
need some extra logic to convert the pipeline streams and host
output into the format that Ansible understands.
:param pipeline: The finished PowerShell pipeline that invoked our
commands
:return: rc, stdout, stderr based on the pipeline output
"""
# we try to get the rc from our host implementation; this is set if
# exit or $host.SetShouldExit() is called in our pipeline. If not, we
# set it to 0 if the pipeline had no errors and 1 if it did
rc = self.host.rc or (1 if pipeline.had_errors else 0)
# TODO: figure out a better way of merging this with the host output
stdout_list = []
for output in pipeline.output:
# Not all pipeline outputs are a string or contain a __str__ value;
# when that is the case we create our own output based on the
# properties of the complex object.
if isinstance(output, GenericComplexObject) and output.to_string is None:
obj_lines = output.property_sets
for key, value in output.adapted_properties.items():
obj_lines.append(u"%s: %s" % (key, value))
for key, value in output.extended_properties.items():
obj_lines.append(u"%s: %s" % (key, value))
output_msg = u"\n".join(obj_lines)
else:
output_msg = to_text(output, nonstring='simplerepr')
stdout_list.append(output_msg)
if len(self.host.ui.stdout) > 0:
stdout_list += self.host.ui.stdout
stdout = u"\r\n".join(stdout_list)
stderr_list = []
for error in pipeline.streams.error:
# the error record is not as fully fleshed out as what we usually get
# in PS, so we manually create it here
command_name = "%s : " % error.command_name if error.command_name else ''
position = "%s\r\n" % error.invocation_position_message if error.invocation_position_message else ''
error_msg = "%s%s\r\n%s" \
" + CategoryInfo : %s\r\n" \
" + FullyQualifiedErrorId : %s" \
% (command_name, str(error), position,
error.message, error.fq_error)
stacktrace = error.script_stacktrace
if self._play_context.verbosity >= 3 and stacktrace is not None:
error_msg += "\r\nStackTrace:\r\n%s" % stacktrace
stderr_list.append(error_msg)
if len(self.host.ui.stderr) > 0:
stderr_list += self.host.ui.stderr
stderr = u"\r\n".join([to_text(o) for o in stderr_list])
display.vvvvv("PSRP RC: %d" % rc, host=self._psrp_host)
display.vvvvv("PSRP STDOUT: %s" % stdout, host=self._psrp_host)
display.vvvvv("PSRP STDERR: %s" % stderr, host=self._psrp_host)
# reset the host output back to defaults; needed if running
# multiple pipelines on the same RunspacePool
self.host.rc = 0
self.host.ui.stdout = []
self.host.ui.stderr = []
return rc, to_bytes(stdout, encoding='utf-8'), to_bytes(stderr, encoding='utf-8')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,672 |
Running commands on localhost hangs with sudo and pipelining since 2.10.6
|
##### SUMMARY
After upgrading ansible-base from 2.10.5 to 2.10.6, I can no longer run sudo commands on localhost. I noticed that disabling SSH pipelining allows sudo commands to run again. The issue also affects the latest git version.
git bisect points to 3ef061bdc4610bbf213f70bc70976fdc3005e2cc, which is from #73281 (2.10), #73023 (devel)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
connection
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ~/.local/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ~/.local/bin/ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ANSIBLE_PIPELINING(/home/yen/tmp/ansible-bug/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH_DIR(env: ANSIBLE_SSH_CONTROL_PATH_DIR) = /run/user/1000/ansible/cp
```
##### OS / ENVIRONMENT
Controller & target: Arch Linux, sudo 1.9.5.p2
Networking: should be irrelevant
##### STEPS TO REPRODUCE
```
$ cat ansible.cfg
[ssh_connection]
pipelining = True
$ cat inventory
localhost ansible_connection=local
$ ~/.local/bin/ansible -i inventory localhost -m ping --become --ask-become-pass -vvvvvv
```
##### EXPECTED RESULTS
I got a pong response
##### ACTUAL RESULTS
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
Using /home/yen/tmp/ansible-bug/ansible.cfg as config file
BECOME password:
setting up inventory plugins
host_list declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
script declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
auto declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/yen/tmp/ansible-bug/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /home/yen/tmp/ansible/lib/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
<localhost> Attempting python interpreter discovery
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: yen
<localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
<localhost> Python interpreter discovery fallback (unable to get Linux distribution/version info)
Using module file /home/yen/tmp/ansible/lib/ansible/modules/ping.py
Pipelining is enabled.
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=gocqypxznauqdyyqcjlbajubbcpgfkxz] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gocqypxznauqdyyqcjlbajubbcpgfkxz ; /usr/bin/python'"'"' && sleep 0'
```
And then hangs forever.
|
https://github.com/ansible/ansible/issues/73672
|
https://github.com/ansible/ansible/pull/73688
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
|
96905120698e3118d8bafaee5ebe8f83d2bbd607
| 2021-02-21T09:44:55Z |
python
| 2021-02-25T20:08:11Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via ssh client binary
description:
    - This connection plugin allows ansible to communicate with the target machines via normal ssh command line.
- Ansible does not expose a channel to allow communication between the user and the ssh process to accept
a password manually to decrypt an ssh key when using this connection plugin (which is the default). The
use of ``ssh-agent`` is highly recommended.
author: ansible (@core)
version_added: historical
options:
host:
description: Hostname/ip to connect to.
default: inventory_hostname
vars:
- name: ansible_host
- name: ansible_ssh_host
host_key_checking:
description: Determines if ssh should check host keys
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description: Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
default: ''
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all ssh cli tools
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all ssh CLI tools
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
ssh_executable:
default: ssh
description:
- This defines the location of the ssh binary. It defaults to ``ssh`` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to ``sftp`` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to `scp` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
description: Extra arguments exclusive to the ``scp`` CLI
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
sftp_extra_args:
description: Extra arguments exclusive to the ``sftp`` CLI
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
ssh_extra_args:
description: Extra arguments exclusive to the 'ssh' CLI
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
retries:
# constant: ANSIBLE_SSH_RETRIES
description: Number of attempts to connect.
default: 3
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
default: 22
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the ssh client binary choose the user as it normally would.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
pipelining:
default: ANSIBLE_PIPELINING
description:
- Pipelining reduces the number of SSH operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- However this conflicts with privilege escalation (become).
For example, when using sudo operations you must first disable 'requiretty' in the sudoers file for the target hosts,
which is why this feature is disabled by default.
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: defaults
key: pipelining
- section: ssh_connection
key: pipelining
type: boolean
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
control_path:
description:
- This is the location to save ssh's ControlPath sockets; it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
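- A typical explicit value is C(%(directory)s/%%h-%%p-%%r) (an illustrative setting, with the ini-style doubled percent signs), combining the control dir with ssh's own host/port/user substitutions.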
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: 'yes'
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
scp_if_ssh:
default: smart
description:
- "Preferred method to use when transfering files over ssh"
- When set to smart, Ansible will try them until one succeeds or they all fail
- If set to True, it will force 'scp', if False it will use 'sftp'
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: 'yes'
description: add -tt to ssh commands to force tty allocation
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
'''
import errno
import fcntl
import hashlib
import os
import pty
import re
import subprocess
import time
from functools import wraps
from ansible import constants as C
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # Php always returns error 255
)
SSHPASS_AVAILABLE = None
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display):
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
# sshpass return codes are 1-6. We handled 5 above, so this catches the other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
if signature in return_tuple[1]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(C.ANSIBLE_SSH_RETRIES) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = args[0]
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = args[0]
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = C.ANSIBLE_SSH_CONTROL_PATH
self.control_path_dir = C.ANSIBLE_SSH_CONTROL_PATH_DIR
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
@staticmethod
def _create_control_path(host, port, user, connection=None, pid=None):
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
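# For example, _create_control_path('example.com', 22, 'root') returns
# '%(directory)s/' plus the first 10 hex chars of
# sha1('example.com-22-root'); the caller later substitutes the real
# control dir for '%(directory)s' (see _build_command below).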
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
# distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
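# e.g. for [b'ssh', b'-o', b'ControlMaster=auto', b'-o',
# b'ControlPersist=60s'] this returns (True, False): ControlPersist was
# found but no explicit ControlPath was set.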
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
more than once so it must be persistent (i.e. a list is okay but a
StringIO would not be)
:arg explanation: A text string explaining why the arguments
were added. It will be displayed with a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self._play_context.remote_addr)
b_command += b_args
def _build_command(self, binary, subsystem, *other_args):
'''
Takes an executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command
wrapped in local ssh shell commands and ready for execution.
:arg binary: actual executable to use to execute command.
:arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names.
:arg other_args: any additional arguments to pass through to the ssh binary
'''
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
if conn_password:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords, you must install the sshpass program")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
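# The resulting prefix looks like 'sshpass -d<fd> [-P <prompt>]', where
# <fd> is the read end of self.sshpass_pipe; the password itself is
# written to the pipe's other end later, in _bare_run().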
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if subsystem == 'sftp' and C.DEFAULT_SFTP_BATCH_MODE:
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if self._play_context.verbosity > 3:
b_command.append(b'-vvv')
#
# Next, we add [ssh_connection]ssh_args from ansible.cfg.
#
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments controlled by configuration file settings
# (e.g. host_key_checking) or inventory variables (ansible_ssh_port) or
# a combination thereof.
if not C.HOST_KEY_CHECKING:
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
if self._play_context.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self._play_context.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self._play_context.private_key_file
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
user = self._play_context.remote_user
if user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self._play_context.remote_user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(self._play_context.timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = getattr(self._play_context, opt, None)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"PlayContext set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b"ControlPath=" + to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
def _send_initial_data(self, fh, in_data, ssh_process):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
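# For example, while in the 'awaiting_prompt' state a line such as
# '[sudo] password:' (whatever prompt the active become plugin recognises)
# sets self._flags['become_prompt'] and is removed from the returned output.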
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
# We don't use _shell.quote as this is run on the controller and is independent of the shell plugin chosen
display_cmd = u' '.join(shlex_quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
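# Rough progression: awaiting_prompt -(password sent)-> awaiting_escalation
# -(become success seen)-> ready_to_send -(pipelined data written)->
# awaiting_exit. Connections without privilege escalation start directly
# at 'ready_to_send', as computed below.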
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self._play_context.timeout
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
# TODO: bcoca would like to use SelectSelector() when the number of open
# filehandles is low, then switch to more efficient ones when it is higher.
# select is faster when the filehandle count is low.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
# We should not see further writes to the stdout/stderr file
# descriptors after the process has closed, so lower the select
# timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if C.HOST_KEY_CHECKING:
if cmd[0] == b"sshpass" and p.returncode == 6:
raise AnsibleError('Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self._play_context.ssh_transfer_method
if ssh_transfer_method is not None:
if not (ssh_transfer_method in ('smart', 'sftp', 'scp', 'piped')):
raise AnsibleOptionsError('transfer_method needs to be one of [smart|sftp|scp|piped]')
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
scp_if_ssh = C.DEFAULT_SCP_IF_SSH
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
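# e.g. with transfer_method unset and scp_if_ssh at its default of
# 'smart', a POSIX target tries sftp first, then scp, then the dd-based
# 'piped' fallback; Windows targets skip 'piped' (no dd), per above.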
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex_quote(in_path), shlex_quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path):
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w{1}:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self._play_context.remote_addr)
if getattr(self._shell, "_IS_WINDOWS", False):
# Become method 'runas' is done in the wrapper that is executed,
# need to disable sudoable so the bare_run is not waiting for a
# prompt that will not occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable') or self._play_context.ssh_executable
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
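# Roughly the manual equivalent (illustrative; the real invocation carries the full
# option set assembled by _build_command):
#   ssh -O stop -o ControlPath=<control-path> <host>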
cmd = self._build_command(self.get_option('ssh_executable') or self._play_context.ssh_executable, 'ssh', '-O', 'stop', self.host)
controlpersist, controlpath = self._persistence_controls(cmd)
cp_arg = [a for a in cmd if a.startswith(b"ControlPath=")]
# only run the reset if the ControlPath already exists or if it isn't
# configured and ControlPersist is set
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
if run_reset:
display.vvv(u'sending stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
def close(self):
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,672 |
Running commands on localhost hangs with sudo and pipelining since 2.10.6
|
##### SUMMARY
After upgrading ansible-base from 2.10.5 to 2.10.6, I can no longer run sudo commands on localhost. I noticed that disabling SSH pipelining allows sudo commands to run again. The issue also affects the latest git version.
git bisect points to 3ef061bdc4610bbf213f70bc70976fdc3005e2cc, which is from #73281 (2.10), #73023 (devel)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
connection
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ~/.local/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ~/.local/bin/ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ANSIBLE_PIPELINING(/home/yen/tmp/ansible-bug/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH_DIR(env: ANSIBLE_SSH_CONTROL_PATH_DIR) = /run/user/1000/ansible/cp
```
##### OS / ENVIRONMENT
Controller & target: Arch Linux, sudo 1.9.5.p2
Networking: should be irrelevant
##### STEPS TO REPRODUCE
```
$ cat ansible.cfg
[ssh_connection]
pipelining = True
$ cat inventory
localhost ansible_connection=local
$ ~/.local/bin/ansible -i inventory localhost -m ping --become --ask-become-pass -vvvvvv
```
##### EXPECTED RESULTS
I got a pong response
##### ACTUAL RESULTS
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
Using /home/yen/tmp/ansible-bug/ansible.cfg as config file
BECOME password:
setting up inventory plugins
host_list declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
script declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
auto declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/yen/tmp/ansible-bug/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /home/yen/tmp/ansible/lib/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
<localhost> Attempting python interpreter discovery
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: yen
<localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
<localhost> Python interpreter discovery fallback (unable to get Linux distribution/version info)
Using module file /home/yen/tmp/ansible/lib/ansible/modules/ping.py
Pipelining is enabled.
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=gocqypxznauqdyyqcjlbajubbcpgfkxz] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gocqypxznauqdyyqcjlbajubbcpgfkxz ; /usr/bin/python'"'"' && sleep 0'
```
And then hangs forever.
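A hedged workaround sketch (my suggestion, not from the report; it assumes the bisected pipelining change is the trigger): turning pipelining off for the affected run avoids the hang, at the cost of the usual temp-file module transfer.
```
# ansible.cfg -- assumed workaround, not a fix for the regression itself
[ssh_connection]
pipelining = False
```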
|
https://github.com/ansible/ansible/issues/73672
|
https://github.com/ansible/ansible/pull/73688
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
|
96905120698e3118d8bafaee5ebe8f83d2bbd607
| 2021-02-21T09:44:55Z |
python
| 2021-02-25T20:08:11Z |
lib/ansible/plugins/connection/winrm.py
|
# (c) 2014, Chris Church <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
author: Ansible Core Team
name: winrm
short_description: Run tasks over Microsoft's WinRM
description:
    - Run commands or put/fetch files on a target via WinRM.
    - This plugin allows extra arguments to be passed that are supported by the protocol but not explicitly defined here.
      They should take the form of variables declared with the following pattern C(ansible_winrm_<option>).
version_added: "2.0"
requirements:
- pywinrm (python library)
options:
# figure out more elegant 'delegation'
remote_addr:
description:
- Address of the Windows machine
default: inventory_hostname
vars:
- name: ansible_host
- name: ansible_winrm_host
type: str
remote_user:
description:
- The user to log in as to the Windows machine
vars:
- name: ansible_user
- name: ansible_winrm_user
type: str
remote_password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_winrm_pass
- name: ansible_winrm_password
type: str
aliases:
- password # Needed for --ask-pass to come through on delegation
port:
description:
- Port for WinRM to connect to on the remote target.
- The default is the HTTPS port (5986); if using HTTP it should be 5985.
vars:
- name: ansible_port
- name: ansible_winrm_port
default: 5986
type: integer
scheme:
description:
- URI scheme to use
- If not set, this will default to C(https), or to C(http) if I(port) is C(5985).
choices: [http, https]
vars:
- name: ansible_winrm_scheme
type: str
path:
description: URI path to connect to
default: '/wsman'
vars:
- name: ansible_winrm_path
type: str
transport:
description:
- List of winrm transports to attempt to use (ssl, plaintext, kerberos, etc.)
- If None (the default), the plugin will try to automatically guess the correct list.
- The choices available depend on your version of pywinrm.
type: list
vars:
- name: ansible_winrm_transport
kerberos_command:
description: Kerberos command to use to request an authentication ticket
default: kinit
vars:
- name: ansible_winrm_kinit_cmd
type: str
kinit_args:
description:
- Extra arguments to pass to C(kinit) when getting the Kerberos authentication ticket.
- By default no extra arguments are passed into C(kinit) unless I(ansible_winrm_kerberos_delegation) is also
set. In that case C(-f) is added to the C(kinit) args so a forwardable ticket is retrieved.
- If set, the args will overwrite any existing defaults for C(kinit), including C(-f) for a delegated ticket.
type: str
vars:
- name: ansible_winrm_kinit_args
version_added: '2.11'
kerberos_mode:
description:
- Kerberos usage mode.
- The managed option means Ansible will obtain a Kerberos ticket.
- The manual option means a ticket must already have been obtained by the user.
- If having issues with Ansible freezing when trying to obtain the
Kerberos ticket, you can either set this to C(manual) and obtain
it outside Ansible or install C(pexpect) through pip and try
again.
choices: [managed, manual]
vars:
- name: ansible_winrm_kinit_mode
type: str
connection_timeout:
description:
- Sets the operation and read timeout settings for the WinRM
connection.
- Corresponds to the C(operation_timeout_sec) and
C(read_timeout_sec) args in pywinrm so avoid setting these vars
with this one.
- The default value is whatever is set in the installed version of
pywinrm.
vars:
- name: ansible_winrm_connection_timeout
type: int
"""
import base64
import logging
import os
import re
import traceback
import json
import tempfile
import shlex
import subprocess
HAVE_KERBEROS = False
try:
import kerberos
HAVE_KERBEROS = True
except ImportError:
pass
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.errors import AnsibleFileNotFound
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six.moves.urllib.parse import urlunsplit
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six import binary_type, PY3
from ansible.plugins.connection import ConnectionBase
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.hashing import secure_hash
from ansible.utils.display import Display
# getargspec is deprecated in favour of getfullargspec in Python 3 but
# getfullargspec is not available in Python 2
if PY3:
from inspect import getfullargspec as getargspec
else:
from inspect import getargspec
try:
import winrm
from winrm import Response
from winrm.protocol import Protocol
import requests.exceptions
HAS_WINRM = True
except ImportError as e:
HAS_WINRM = False
WINRM_IMPORT_ERR = e
try:
import xmltodict
HAS_XMLTODICT = True
except ImportError as e:
HAS_XMLTODICT = False
XMLTODICT_IMPORT_ERR = e
HAS_PEXPECT = False
try:
import pexpect
# echo was added in pexpect 3.3+ which is newer than the RHEL package
# we can only use pexpect for kerb auth if echo is a valid kwarg
# https://github.com/ansible/ansible/issues/43462
if hasattr(pexpect, 'spawn'):
argspec = getargspec(pexpect.spawn.__init__)
if 'echo' in argspec.args:
HAS_PEXPECT = True
except ImportError as e:
pass
# used to try and parse the hostname and detect if IPv6 is being used
try:
import ipaddress
HAS_IPADDRESS = True
except ImportError:
HAS_IPADDRESS = False
display = Display()
class Connection(ConnectionBase):
'''WinRM connections over HTTP/HTTPS.'''
transport = 'winrm'
module_implementation_preferences = ('.ps1', '.exe', '')
allow_executable = False
has_pipelining = True
allow_extras = True
def __init__(self, *args, **kwargs):
self.always_pipeline_modules = True
self.has_native_async = True
self.protocol = None
self.shell_id = None
self.delegate = None
self._shell_type = 'powershell'
super(Connection, self).__init__(*args, **kwargs)
if not C.DEFAULT_DEBUG:
logging.getLogger('requests_credssp').setLevel(logging.INFO)
logging.getLogger('requests_kerberos').setLevel(logging.INFO)
logging.getLogger('urllib3').setLevel(logging.INFO)
def _build_winrm_kwargs(self):
# this used to be in set_options, as win_reboot needs to be able to
# override the conn timeout, we need to be able to build the args
# after setting individual options. This is called by _connect before
# starting the WinRM connection
self._winrm_host = self.get_option('remote_addr')
self._winrm_user = self.get_option('remote_user')
self._winrm_pass = self.get_option('remote_password')
self._winrm_port = self.get_option('port')
self._winrm_scheme = self.get_option('scheme')
# old behaviour, scheme should default to http if not set and the port
# is 5985 otherwise https
if self._winrm_scheme is None:
self._winrm_scheme = 'http' if self._winrm_port == 5985 else 'https'
self._winrm_path = self.get_option('path')
self._kinit_cmd = self.get_option('kerberos_command')
self._winrm_transport = self.get_option('transport')
self._winrm_connection_timeout = self.get_option('connection_timeout')
if hasattr(winrm, 'FEATURE_SUPPORTED_AUTHTYPES'):
self._winrm_supported_authtypes = set(winrm.FEATURE_SUPPORTED_AUTHTYPES)
else:
# for legacy versions of pywinrm, use the values we know are supported
self._winrm_supported_authtypes = set(['plaintext', 'ssl', 'kerberos'])
# calculate transport if needed
if self._winrm_transport is None or self._winrm_transport[0] is None:
# TODO: figure out what we want to do with auto-transport selection in the face of NTLM/Kerb/CredSSP/Cert/Basic
transport_selector = ['ssl'] if self._winrm_scheme == 'https' else ['plaintext']
if HAVE_KERBEROS and ((self._winrm_user and '@' in self._winrm_user)):
self._winrm_transport = ['kerberos'] + transport_selector
else:
self._winrm_transport = transport_selector
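# e.g. scheme=https plus a principal-style user such as '[email protected]' (with the
# python kerberos library importable) yields ['kerberos', 'ssl']; kerberos is tried first.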
unsupported_transports = set(self._winrm_transport).difference(self._winrm_supported_authtypes)
if unsupported_transports:
raise AnsibleError('The installed version of WinRM does not support transport(s) %s' %
to_native(list(unsupported_transports), nonstring='simplerepr'))
# if kerberos is among our transports and there's a password specified, we're managing the tickets
kinit_mode = self.get_option('kerberos_mode')
if kinit_mode is None:
# HACK: ideally, remove multi-transport stuff
self._kerb_managed = "kerberos" in self._winrm_transport and (self._winrm_pass is not None and self._winrm_pass != "")
elif kinit_mode == "managed":
self._kerb_managed = True
elif kinit_mode == "manual":
self._kerb_managed = False
# arg names we're going passing directly
internal_kwarg_mask = set(['self', 'endpoint', 'transport', 'username', 'password', 'scheme', 'path', 'kinit_mode', 'kinit_cmd'])
self._winrm_kwargs = dict(username=self._winrm_user, password=self._winrm_pass)
argspec = getargspec(Protocol.__init__)
supported_winrm_args = set(argspec.args)
supported_winrm_args.update(internal_kwarg_mask)
passed_winrm_args = set([v.replace('ansible_winrm_', '') for v in self.get_option('_extras')])
unsupported_args = passed_winrm_args.difference(supported_winrm_args)
# warn for kwargs unsupported by the installed version of pywinrm
for arg in unsupported_args:
display.warning("ansible_winrm_{0} unsupported by pywinrm (is an up-to-date version of pywinrm installed?)".format(arg))
# pass through matching extras, excluding the list we want to treat specially
for arg in passed_winrm_args.difference(internal_kwarg_mask).intersection(supported_winrm_args):
self._winrm_kwargs[arg] = self.get_option('_extras')['ansible_winrm_%s' % arg]
# Until pykerberos has enough goodies to implement a rudimentary kinit/klist, simplest way is to let each connection
# auth itself with a private CCACHE.
def _kerb_auth(self, principal, password):
if password is None:
password = ""
self._kerb_ccache = tempfile.NamedTemporaryFile()
display.vvvvv("creating Kerberos CC at %s" % self._kerb_ccache.name)
krb5ccname = "FILE:%s" % self._kerb_ccache.name
os.environ["KRB5CCNAME"] = krb5ccname
krb5env = dict(KRB5CCNAME=krb5ccname)
# Stores various flags to call with kinit, these could be explicit args set by 'ansible_winrm_kinit_args' OR
# '-f' if kerberos delegation is requested (ansible_winrm_kerberos_delegation).
kinit_cmdline = [self._kinit_cmd]
kinit_args = self.get_option('kinit_args')
if kinit_args:
kinit_args = [to_text(a) for a in shlex.split(kinit_args) if a.strip()]
kinit_cmdline.extend(kinit_args)
elif boolean(self.get_option('_extras').get('ansible_winrm_kerberos_delegation', False)):
kinit_cmdline.append('-f')
kinit_cmdline.append(principal)
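# Illustrative result (the principal is an assumption): with delegation requested and
# no explicit kinit_args, the final command line is ['kinit', '-f', '[email protected]'].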
# pexpect runs the process in its own pty so it can correctly send
# the password as input even on MacOS which blocks subprocess from
# doing so. Unfortunately it is not available on the built in Python
# so we can only use it if someone has installed it
if HAS_PEXPECT:
proc_mechanism = "pexpect"
command = kinit_cmdline.pop(0)
password = to_text(password, encoding='utf-8',
errors='surrogate_or_strict')
display.vvvv("calling kinit with pexpect for principal %s"
% principal)
try:
child = pexpect.spawn(command, kinit_cmdline, timeout=60,
env=krb5env, echo=False)
except pexpect.ExceptionPexpect as err:
err_msg = "Kerberos auth failure when calling kinit cmd " \
"'%s': %s" % (command, to_native(err))
raise AnsibleConnectionFailure(err_msg)
try:
child.expect(".*:")
child.sendline(password)
except OSError as err:
# child exited before the pass was sent, Ansible will raise
# error based on the rc below, just display the error here
display.vvvv("kinit with pexpect raised OSError: %s"
% to_native(err))
# technically this is the stdout + stderr but to match the
# subprocess error checking behaviour, we will call it stderr
stderr = child.read()
child.wait()
rc = child.exitstatus
else:
proc_mechanism = "subprocess"
password = to_bytes(password, encoding='utf-8',
errors='surrogate_or_strict')
display.vvvv("calling kinit with subprocess for principal %s"
% principal)
try:
p = subprocess.Popen(kinit_cmdline, stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=krb5env)
except OSError as err:
err_msg = "Kerberos auth failure when calling kinit cmd " \
"'%s': %s" % (self._kinit_cmd, to_native(err))
raise AnsibleConnectionFailure(err_msg)
stdout, stderr = p.communicate(password + b'\n')
rc = p.returncode
if rc != 0:
# one last attempt at making sure the password does not exist
# in the output
exp_msg = to_native(stderr.strip())
exp_msg = exp_msg.replace(to_native(password), "<redacted>")
err_msg = "Kerberos auth failure for principal %s with %s: %s" \
% (principal, proc_mechanism, exp_msg)
raise AnsibleConnectionFailure(err_msg)
display.vvvvv("kinit succeeded for principal %s" % principal)
def _winrm_connect(self):
'''
Establish a WinRM connection over HTTP/HTTPS.
'''
display.vvv("ESTABLISH WINRM CONNECTION FOR USER: %s on PORT %s TO %s" %
(self._winrm_user, self._winrm_port, self._winrm_host), host=self._winrm_host)
winrm_host = self._winrm_host
if HAS_IPADDRESS:
display.debug("checking if winrm_host %s is an IPv6 address" % winrm_host)
try:
ipaddress.IPv6Address(winrm_host)
except ipaddress.AddressValueError:
pass
else:
winrm_host = "[%s]" % winrm_host
netloc = '%s:%d' % (winrm_host, self._winrm_port)
endpoint = urlunsplit((self._winrm_scheme, netloc, self._winrm_path, '', ''))
errors = []
for transport in self._winrm_transport:
if transport == 'kerberos':
if not HAVE_KERBEROS:
errors.append('kerberos: the python kerberos library is not installed')
continue
if self._kerb_managed:
self._kerb_auth(self._winrm_user, self._winrm_pass)
display.vvvvv('WINRM CONNECT: transport=%s endpoint=%s' % (transport, endpoint), host=self._winrm_host)
try:
winrm_kwargs = self._winrm_kwargs.copy()
if self._winrm_connection_timeout:
winrm_kwargs['operation_timeout_sec'] = self._winrm_connection_timeout
winrm_kwargs['read_timeout_sec'] = self._winrm_connection_timeout + 1
protocol = Protocol(endpoint, transport=transport, **winrm_kwargs)
# open the shell from connect so we know we're able to talk to the server
if not self.shell_id:
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
display.vvvvv('WINRM OPEN SHELL: %s' % self.shell_id, host=self._winrm_host)
return protocol
except Exception as e:
err_msg = to_text(e).strip()
if re.search(to_text(r'Operation\s+?timed\s+?out'), err_msg, re.I):
raise AnsibleError('the connection attempt timed out')
m = re.search(to_text(r'Code\s+?(\d{3})'), err_msg)
if m:
code = int(m.groups()[0])
if code == 401:
err_msg = 'the specified credentials were rejected by the server'
elif code == 411:
return protocol
errors.append(u'%s: %s' % (transport, err_msg))
display.vvvvv(u'WINRM CONNECTION ERROR: %s\n%s' % (err_msg, to_text(traceback.format_exc())), host=self._winrm_host)
if errors:
raise AnsibleConnectionFailure(', '.join(map(to_native, errors)))
else:
raise AnsibleError('No transport found for WinRM connection')
def _winrm_send_input(self, protocol, shell_id, command_id, stdin, eof=False):
rq = {'env:Envelope': protocol._get_soap_header(
resource_uri='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/cmd',
action='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/Send',
shell_id=shell_id)}
stream = rq['env:Envelope'].setdefault('env:Body', {}).setdefault('rsp:Send', {})\
.setdefault('rsp:Stream', {})
stream['@Name'] = 'stdin'
stream['@CommandId'] = command_id
stream['#text'] = base64.b64encode(to_bytes(stdin))
if eof:
stream['@End'] = 'true'
protocol.send_message(xmltodict.unparse(rq))
def _winrm_exec(self, command, args=(), from_exec=False, stdin_iterator=None):
if not self.protocol:
self.protocol = self._winrm_connect()
self._connected = True
if from_exec:
display.vvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host)
else:
display.vvvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host)
command_id = None
try:
stdin_push_failed = False
command_id = self.protocol.run_command(self.shell_id, to_bytes(command), map(to_bytes, args), console_mode_stdin=(stdin_iterator is None))
try:
if stdin_iterator:
for (data, is_last) in stdin_iterator:
self._winrm_send_input(self.protocol, self.shell_id, command_id, data, eof=is_last)
except Exception as ex:
display.warning("ERROR DURING WINRM SEND INPUT - attempting to recover: %s %s"
% (type(ex).__name__, to_text(ex)))
display.debug(traceback.format_exc())
stdin_push_failed = True
# NB: this can hang if the receiver is still running (eg, network failed a Send request but the server's still happy).
# FUTURE: Consider adding pywinrm status check/abort operations to see if the target is still running after a failure.
resptuple = self.protocol.get_command_output(self.shell_id, command_id)
# ensure stdout/stderr are text for py3
# FUTURE: this should probably be done internally by pywinrm
response = Response(tuple(to_text(v) if isinstance(v, binary_type) else v for v in resptuple))
# TODO: check result from response and set stdin_push_failed if we have nonzero
if from_exec:
display.vvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host)
else:
display.vvvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host)
display.vvvvvv('WINRM STDOUT %s' % to_text(response.std_out), host=self._winrm_host)
display.vvvvvv('WINRM STDERR %s' % to_text(response.std_err), host=self._winrm_host)
if stdin_push_failed:
# There are cases where the stdin input failed but the WinRM service still processed it. We attempt to
# see if stdout contains a valid json return value so we can ignore this error
try:
filtered_output, dummy = _filter_non_json_lines(response.std_out)
json.loads(filtered_output)
except ValueError:
# stdout does not contain a return response, stdin input was a fatal error
stderr = to_bytes(response.std_err, encoding='utf-8')
if stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
raise AnsibleError('winrm send_input failed; \nstdout: %s\nstderr %s'
% (to_native(response.std_out), to_native(stderr)))
return response
except requests.exceptions.Timeout as exc:
raise AnsibleConnectionFailure('winrm connection error: %s' % to_native(exc))
finally:
if command_id:
self.protocol.cleanup_command(self.shell_id, command_id)
def _connect(self):
if not HAS_WINRM:
raise AnsibleError("winrm or requests is not installed: %s" % to_native(WINRM_IMPORT_ERR))
elif not HAS_XMLTODICT:
raise AnsibleError("xmltodict is not installed: %s" % to_native(XMLTODICT_IMPORT_ERR))
super(Connection, self)._connect()
if not self.protocol:
self._build_winrm_kwargs() # build the kwargs from the options set
self.protocol = self._winrm_connect()
self._connected = True
return self
def reset(self):
if not self._connected:
return
self.protocol = None
self.shell_id = None
self._connect()
def _wrapper_payload_stream(self, payload, buffer_size=200000):
payload_bytes = to_bytes(payload)
byte_count = len(payload_bytes)
for i in range(0, byte_count, buffer_size):
yield payload_bytes[i:i + buffer_size], i + buffer_size >= byte_count
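# Sketch of the generator's output (sizes are assumptions):
#   list(self._wrapper_payload_stream(b'a' * 300000, buffer_size=200000))
#   -> [(<first 200000 bytes>, False), (<last 100000 bytes>, True)]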
def exec_command(self, cmd, in_data=None, sudoable=True):
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
cmd_parts = self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False)
# TODO: display something meaningful here
display.vvv("EXEC (via pipeline wrapper)")
stdin_iterator = None
if in_data:
stdin_iterator = self._wrapper_payload_stream(in_data)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], from_exec=True, stdin_iterator=stdin_iterator)
result.std_out = to_bytes(result.std_out)
result.std_err = to_bytes(result.std_err)
# parse just stderr from CLIXML output
if result.std_err.startswith(b"#< CLIXML"):
try:
result.std_err = _parse_clixml(result.std_err)
except Exception:
# unsure if we're guaranteed a valid xml doc- use raw output in case of error
pass
return (result.status_code, result.std_out, result.std_err)
# FUTURE: determine buffer size at runtime via remote winrm config?
def _put_file_stdin_iterator(self, in_path, out_path, buffer_size=250000):
in_size = os.path.getsize(to_bytes(in_path, errors='surrogate_or_strict'))
offset = 0
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file:
for out_data in iter((lambda: in_file.read(buffer_size)), b''):
offset += len(out_data)
self._display.vvvvv('WINRM PUT "%s" to "%s" (offset=%d size=%d)' % (in_path, out_path, offset, len(out_data)), host=self._winrm_host)
# yes, we're double-encoding over the wire in this case- we want to ensure that the data shipped to the end PS pipeline is still b64-encoded
b64_data = base64.b64encode(out_data) + b'\r\n'
# cough up the data, as well as an indicator if this is the last chunk so winrm_send knows to set the End signal
yield b64_data, (in_file.tell() == in_size)
if offset == 0: # empty file, return an empty buffer + eof to close it
yield "", True
def put_file(self, in_path, out_path):
super(Connection, self).put_file(in_path, out_path)
out_path = self._shell._unquote(out_path)
display.vvv('PUT "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound('file or module does not exist: "%s"' % to_native(in_path))
script_template = u'''
begin {{
$path = '{0}'
$DebugPreference = "Continue"
$ErrorActionPreference = "Stop"
Set-StrictMode -Version 2
$fd = [System.IO.File]::Create($path)
$sha1 = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create()
$bytes = @() #initialize for empty file case
}}
process {{
$bytes = [System.Convert]::FromBase64String($input)
$sha1.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) | Out-Null
$fd.Write($bytes, 0, $bytes.Length)
}}
end {{
$sha1.TransformFinalBlock($bytes, 0, 0) | Out-Null
$hash = [System.BitConverter]::ToString($sha1.Hash).Replace("-", "").ToLowerInvariant()
$fd.Close()
Write-Output "{{""sha1"":""$hash""}}"
}}
'''
script = script_template.format(self._shell._escape(out_path))
cmd_parts = self._shell._encode_script(script, as_list=True, strict_mode=False, preserve_rc=False)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], stdin_iterator=self._put_file_stdin_iterator(in_path, out_path))
# TODO: improve error handling
if result.status_code != 0:
raise AnsibleError(to_native(result.std_err))
try:
put_output = json.loads(result.std_out)
except ValueError:
# stdout does not contain a valid response
stderr = to_bytes(result.std_err, encoding='utf-8')
if stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
raise AnsibleError('winrm put_file failed; \nstdout: %s\nstderr %s' % (to_native(result.std_out), to_native(stderr)))
remote_sha1 = put_output.get("sha1")
if not remote_sha1:
raise AnsibleError("Remote sha1 was not returned")
local_sha1 = secure_hash(in_path)
if not remote_sha1 == local_sha1:
raise AnsibleError("Remote sha1 hash {0} does not match local hash {1}".format(to_native(remote_sha1), to_native(local_sha1)))
def fetch_file(self, in_path, out_path):
super(Connection, self).fetch_file(in_path, out_path)
in_path = self._shell._unquote(in_path)
out_path = out_path.replace('\\', '/')
# consistent with other connection plugins, we assume the caller has created the target dir
display.vvv('FETCH "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host)
buffer_size = 2**19 # 0.5MB chunks
out_file = None
try:
offset = 0
while True:
try:
script = '''
$path = '%(path)s'
If (Test-Path -Path $path -PathType Leaf)
{
$buffer_size = %(buffer_size)d
$offset = %(offset)d
$stream = New-Object -TypeName IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, [IO.FileShare]::ReadWrite)
$stream.Seek($offset, [System.IO.SeekOrigin]::Begin) > $null
$buffer = New-Object -TypeName byte[] $buffer_size
$bytes_read = $stream.Read($buffer, 0, $buffer_size)
if ($bytes_read -gt 0) {
$bytes = $buffer[0..($bytes_read - 1)]
[System.Convert]::ToBase64String($bytes)
}
$stream.Close() > $null
}
ElseIf (Test-Path -Path $path -PathType Container)
{
Write-Host "[DIR]";
}
Else
{
Write-Error "$path does not exist";
Exit 1;
}
''' % dict(buffer_size=buffer_size, path=self._shell._escape(in_path), offset=offset)
display.vvvvv('WINRM FETCH "%s" to "%s" (offset=%d)' % (in_path, out_path, offset), host=self._winrm_host)
cmd_parts = self._shell._encode_script(script, as_list=True, preserve_rc=False)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:])
if result.status_code != 0:
raise IOError(to_native(result.std_err))
if result.std_out.strip() == '[DIR]':
data = None
else:
data = base64.b64decode(result.std_out.strip())
if data is None:
break
else:
if not out_file:
# If out_path is a directory and we're expecting a file, bail out now.
if os.path.isdir(to_bytes(out_path, errors='surrogate_or_strict')):
break
out_file = open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb')
out_file.write(data)
if len(data) < buffer_size:
break
offset += len(data)
except Exception:
traceback.print_exc()
raise AnsibleError('failed to transfer file to "%s"' % to_native(out_path))
finally:
if out_file:
out_file.close()
def close(self):
if self.protocol and self.shell_id:
display.vvvvv('WINRM CLOSE SHELL: %s' % self.shell_id, host=self._winrm_host)
self.protocol.close_shell(self.shell_id)
self.shell_id = None
self.protocol = None
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,672 |
Running commands on localhost hangs with sudo and pipelining since 2.10.6
|
##### SUMMARY
After upgrading ansible-base from 2.10.5 to 2.10.6, I can no longer run sudo commands on localhost. I noticed that disabling SSH pipelining allows sudo commands to run again. The issue also affects the latest git version.
git bisect points to 3ef061bdc4610bbf213f70bc70976fdc3005e2cc, which is from #73281 (2.10), #73023 (devel)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
connection
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ~/.local/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
```
##### CONFIGURATION
```paste below
$ ~/.local/bin/ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ANSIBLE_PIPELINING(/home/yen/tmp/ansible-bug/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH_DIR(env: ANSIBLE_SSH_CONTROL_PATH_DIR) = /run/user/1000/ansible/cp
```
##### OS / ENVIRONMENT
Controller & target: Arch Linux, sudo 1.9.5.p2
Networking: should be irrelevant
##### STEPS TO REPRODUCE
```
$ cat ansible.cfg
[ssh_connection]
pipelining = True
$ cat inventory
localhost ansible_connection=local
$ ~/.local/bin/ansible -i inventory localhost -m ping --become --ask-become-pass -vvvvvv
```
##### EXPECTED RESULTS
I got a pong response
##### ACTUAL RESULTS
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.11.0.dev0 (devel 5078a0baa2) last updated 2021/02/21 17:08:03 (GMT +800)
config file = /home/yen/tmp/ansible-bug/ansible.cfg
configured module search path = ['/home/yen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/yen/tmp/ansible/lib/ansible
ansible collection location = /home/yen/.ansible/collections:/usr/share/ansible/collections
executable location = /home/yen/.local/bin/ansible
python version = 3.9.1 (default, Feb 6 2021, 06:49:13) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
Using /home/yen/tmp/ansible-bug/ansible.cfg as config file
BECOME password:
setting up inventory plugins
host_list declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
script declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
auto declined parsing /home/yen/tmp/ansible-bug/inventory as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/yen/tmp/ansible-bug/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /home/yen/tmp/ansible/lib/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
<localhost> Attempting python interpreter discovery
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: yen
<localhost> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
<localhost> Python interpreter discovery fallback (unable to get Linux distribution/version info)
Using module file /home/yen/tmp/ansible/lib/ansible/modules/ping.py
Pipelining is enabled.
<localhost> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=gocqypxznauqdyyqcjlbajubbcpgfkxz] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gocqypxznauqdyyqcjlbajubbcpgfkxz ; /usr/bin/python'"'"' && sleep 0'
```
And then hangs forever.
|
https://github.com/ansible/ansible/issues/73672
|
https://github.com/ansible/ansible/pull/73688
|
8628c12f30693e520b6c7bcb816bbcbbbe0cd5bb
|
96905120698e3118d8bafaee5ebe8f83d2bbd607
| 2021-02-21T09:44:55Z |
python
| 2021-02-25T20:08:11Z |
lib/ansible/plugins/doc_fragments/connection_pipelining.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,448 |
The module `file` changes the relative `path` of a symlink to absolute
|
### SUMMARY
The `file` module changes the `path` of a relative symlink to an absolute path.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
file
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /Users/robertguy/.ansible.cfg
configured module search path = [u'/Users/robertguy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
```
# (empty)
```
##### OS / ENVIRONMENT
controller: mac
target: centos:8
##### STEPS TO REPRODUCE
Prepare a container:
```
# Start a container:
docker run -ti centos:8 /bin/bash
# Verify /sbin:
ls -ld /sbin
lrwxrwxrwx 1 root root 8 May 11 2019 /sbin -> usr/sbin
```
Run ansible on the container:
```
ansible -i $(docker ps -ql), -c docker all -m file -a "path=/sbin mode=o-w" -vvv
...
"changed": true,
"dest": "/sbin",
"diff": {
"after": {
"path": "/sbin",
"src": "/usr/sbin"
},
"before": {
"path": "/sbin",
"src": "usr/sbin"
}
},
...
# Verify /sbin again:
ls -ld /sbin
lrwxrwxrwx 1 root root 9 Dec 3 08:13 /sbin -> /usr/sbin
```
##### EXPECTED RESULTS
I was hoping that `file` would follow the symlink '/sbin' and change the target only. In the example above you can see that the link itself is modified.
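A short Python sketch (my illustration, assuming the container layout above) of why the rewrite happens: resolving the link and re-creating it from the resolved value turns the stored relative target into an absolute one.
```python
import os

print(os.readlink('/sbin'))       # 'usr/sbin'  -> what the link actually stores
print(os.path.realpath('/sbin'))  # '/usr/sbin' -> the fully resolved target
# Re-creating the link from the realpath() output silently makes it absolute.
```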
##### ACTUAL RESULTS
```
ansible -i $(docker ps -ql), -c docker all -m file -a "path=/sbin mode=o-w follow=yes" -vvv
ansible 2.9.1
config file = /Users/robertguy/.ansible.cfg
configured module search path = [u'/Users/robertguy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
Using /Users/robertguy/.ansible.cfg as config file
Parsed de83dc6fa5ee, inventory source with host_list plugin
META: ran handlers
<de83dc6fa5ee> ESTABLISH DOCKER CONNECTION FOR USER: root
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'echo ~ && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u'/bin/sh -c \'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252 `" && echo ansible-tmp-1575362596.14-246033153918252="` echo /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252 `" ) && sleep 0\'']
<de83dc6fa5ee> Attempting python interpreter discovery
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u'/bin/sh -c \'echo PLATFORM; uname; echo FOUND; command -v \'"\'"\'/usr/bin/python\'"\'"\'; command -v \'"\'"\'python3.7\'"\'"\'; command -v \'"\'"\'python3.6\'"\'"\'; command -v \'"\'"\'python3.5\'"\'"\'; command -v \'"\'"\'python2.7\'"\'"\'; command -v \'"\'"\'python2.6\'"\'"\'; command -v \'"\'"\'/usr/libexec/platform-python\'"\'"\'; command -v \'"\'"\'/usr/bin/python3\'"\'"\'; command -v \'"\'"\'python\'"\'"\'; echo ENDFOUND && sleep 0\'']
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c '/usr/libexec/platform-python && sleep 0'"]
Using module file /Library/Python/2.7/site-packages/ansible/modules/files/file.py
<de83dc6fa5ee> PUT /Users/robertguy/.ansible/tmp/ansible-local-18934l8TrvG/tmp_uA5WD TO /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/ /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/ > /dev/null 2>&1 && sleep 0'"]
de83dc6fa5ee | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": true,
"dest": "/sbin",
"diff": {
"after": {
"path": "/sbin",
"src": "/usr/sbin"
},
"before": {
"path": "/sbin",
"src": "usr/sbin"
}
},
"gid": 0,
"group": "root",
"invocation": {
"module_args": {
"_diff_peek": null,
"_original_basename": null,
"access_time": null,
"access_time_format": "%Y%m%d%H%M.%S",
"attributes": null,
"backup": null,
"content": null,
"delimiter": null,
"directory_mode": null,
"follow": true,
"force": false,
"group": null,
"mode": "o-w",
"modification_time": null,
"modification_time_format": "%Y%m%d%H%M.%S",
"owner": null,
"path": "/sbin",
"recurse": false,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "link",
"unsafe_writes": null
}
},
"mode": "0777",
"owner": "root",
"size": 9,
"src": "/usr/sbin",
"state": "link",
"uid": 0
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/65448
|
https://github.com/ansible/ansible/pull/73700
|
176beddb3f8d4c4774da1286c712936a0e859e10
|
e804fccf1c3fc94f35fac42ec8980eea0b431aa6
| 2019-12-03T08:46:59Z |
python
| 2021-03-01T14:14:03Z |
changelogs/fragments/73700-let-file-module-not-change-link-to-absolute-on-touch.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,448 |
The module `file` changes the relative `path` of a symlink to absolute
|
### SUMMARY
The `file` module changes the `path` of a relative symlink to an absolute path.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
file
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /Users/robertguy/.ansible.cfg
configured module search path = [u'/Users/robertguy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
```
# (empty)
```
##### OS / ENVIRONMENT
controller: mac
target: centos:8
##### STEPS TO REPRODUCE
Prepare a container:
```
# Start a container:
docker run -ti centos:8 /bin/bash
# Verify /sbin:
ls -ld /sbin
lrwxrwxrwx 1 root root 8 May 11 2019 /sbin -> usr/sbin
```
Run ansible on the container:
```
ansible -i $(docker ps -ql), -c docker all -m file -a "path=/sbin mode=o-w" -vvv
...
"changed": true,
"dest": "/sbin",
"diff": {
"after": {
"path": "/sbin",
"src": "/usr/sbin"
},
"before": {
"path": "/sbin",
"src": "usr/sbin"
}
},
...
# Verify /sbin again:
ls -ld /sbin
lrwxrwxrwx 1 root root 9 Dec 3 08:13 /sbin -> /usr/sbin
```
##### EXPECTED RESULTS
I was hoping that `file` would follow the symlink '/sbin' and change the target only. In the example above you can see that the link itself is modified.
##### ACTUAL RESULTS
```
ansible -i $(docker ps -ql), -c docker all -m file -a "path=/sbin mode=o-w follow=yes" -vvv
ansible 2.9.1
config file = /Users/robertguy/.ansible.cfg
configured module search path = [u'/Users/robertguy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
Using /Users/robertguy/.ansible.cfg as config file
Parsed de83dc6fa5ee, inventory source with host_list plugin
META: ran handlers
<de83dc6fa5ee> ESTABLISH DOCKER CONNECTION FOR USER: root
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'echo ~ && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u'/bin/sh -c \'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252 `" && echo ansible-tmp-1575362596.14-246033153918252="` echo /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252 `" ) && sleep 0\'']
<de83dc6fa5ee> Attempting python interpreter discovery
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u'/bin/sh -c \'echo PLATFORM; uname; echo FOUND; command -v \'"\'"\'/usr/bin/python\'"\'"\'; command -v \'"\'"\'python3.7\'"\'"\'; command -v \'"\'"\'python3.6\'"\'"\'; command -v \'"\'"\'python3.5\'"\'"\'; command -v \'"\'"\'python2.7\'"\'"\'; command -v \'"\'"\'python2.6\'"\'"\'; command -v \'"\'"\'/usr/libexec/platform-python\'"\'"\'; command -v \'"\'"\'/usr/bin/python3\'"\'"\'; command -v \'"\'"\'python\'"\'"\'; echo ENDFOUND && sleep 0\'']
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c '/usr/libexec/platform-python && sleep 0'"]
Using module file /Library/Python/2.7/site-packages/ansible/modules/files/file.py
<de83dc6fa5ee> PUT /Users/robertguy/.ansible/tmp/ansible-local-18934l8TrvG/tmp_uA5WD TO /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/ /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/ > /dev/null 2>&1 && sleep 0'"]
de83dc6fa5ee | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": true,
"dest": "/sbin",
"diff": {
"after": {
"path": "/sbin",
"src": "/usr/sbin"
},
"before": {
"path": "/sbin",
"src": "usr/sbin"
}
},
"gid": 0,
"group": "root",
"invocation": {
"module_args": {
"_diff_peek": null,
"_original_basename": null,
"access_time": null,
"access_time_format": "%Y%m%d%H%M.%S",
"attributes": null,
"backup": null,
"content": null,
"delimiter": null,
"directory_mode": null,
"follow": true,
"force": false,
"group": null,
"mode": "o-w",
"modification_time": null,
"modification_time_format": "%Y%m%d%H%M.%S",
"owner": null,
"path": "/sbin",
"recurse": false,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "link",
"unsafe_writes": null
}
},
"mode": "0777",
"owner": "root",
"size": 9,
"src": "/usr/sbin",
"state": "link",
"uid": 0
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/65448
|
https://github.com/ansible/ansible/pull/73700
|
176beddb3f8d4c4774da1286c712936a0e859e10
|
e804fccf1c3fc94f35fac42ec8980eea0b431aa6
| 2019-12-03T08:46:59Z |
python
| 2021-03-01T14:14:03Z |
lib/ansible/modules/file.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: file
version_added: historical
short_description: Manage files and file properties
extends_documentation_fragment: files
description:
- Set attributes of files, symlinks or directories.
- Alternatively, remove files, symlinks or directories.
- Many other modules support the same options as the C(file) module - including M(ansible.builtin.copy),
M(ansible.builtin.template), and M(ansible.builtin.assemble).
- For Windows targets, use the M(ansible.windows.win_file) module instead.
options:
path:
description:
- Path to the file being managed.
type: path
required: yes
aliases: [ dest, name ]
state:
description:
- If C(absent), directories will be recursively deleted, and files or symlinks will
be unlinked. In the case of a directory, if C(diff) is declared, you will see the files and folders deleted listed
under C(path_contents). Note that C(absent) will not cause C(file) to fail if the C(path) does
not exist as the state did not change.
- If C(directory), all intermediate subdirectories will be created if they
do not exist. Since Ansible 1.7 they will be created with the supplied permissions.
- If C(file), without any other options this works mostly as a 'stat' and will return the current state of C(path).
Even with other options (i.e. C(mode)), the file will be modified but will NOT be created if it does not exist;
see the C(touch) value or the M(ansible.builtin.copy) or M(ansible.builtin.template) module if you want that behavior.
- If C(hard), the hard link will be created or changed.
- If C(link), the symbolic link will be created or changed.
- If C(touch) (new in 1.4), an empty file will be created if the C(path) does not
exist, while an existing file or directory will receive updated file access and
modification times (similar to the way C(touch) works from the command line).
type: str
default: file
choices: [ absent, directory, file, hard, link, touch ]
src:
description:
- Path of the file to link to.
- This applies only to C(state=link) and C(state=hard).
- For C(state=link), this will also accept a non-existing path.
- Relative paths are relative to the file being created (C(path)) which is how
the Unix command C(ln -s SRC DEST) treats relative paths.
type: path
recurse:
description:
- Recursively set the specified file attributes on directory contents.
- This applies only when C(state) is set to C(directory).
type: bool
default: no
version_added: '1.1'
force:
description:
- >
Force the creation of the symlinks in two cases: the source file does
not exist (but will appear later); the destination exists and is a file (so, we need to unlink the
C(path) file and create symlink to the C(src) file in place of it).
type: bool
default: no
follow:
description:
- This flag indicates that filesystem links, if they exist, should be followed.
- Previous to Ansible 2.5, this was C(no) by default.
type: bool
default: yes
version_added: '1.8'
modification_time:
description:
- This parameter indicates the time the file's modification time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is None meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is default for C(state=touch).
type: str
version_added: "2.7"
modification_time_format:
description:
- When used with C(modification_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
access_time:
description:
- This parameter indicates the time the file's access time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is C(None) meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is default for C(state=touch).
type: str
version_added: '2.7'
access_time_format:
description:
- When used with C(access_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
seealso:
- module: ansible.builtin.assemble
- module: ansible.builtin.copy
- module: ansible.builtin.stat
- module: ansible.builtin.template
- module: ansible.windows.win_file
notes:
- Supports C(check_mode).
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Change file ownership, group and permissions
ansible.builtin.file:
path: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Give insecure permissions to an existing file
ansible.builtin.file:
path: /work
owner: root
group: root
mode: '1777'
- name: Create a symbolic link
ansible.builtin.file:
src: /file/to/link/to
dest: /path/to/symlink
owner: foo
group: foo
state: link
- name: Create two hard links
ansible.builtin.file:
src: '/tmp/{{ item.src }}'
dest: '{{ item.dest }}'
state: hard
loop:
- { src: x, dest: y }
- { src: z, dest: k }
- name: Touch a file, using symbolic modes to set the permissions (equivalent to 0644)
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u=rw,g=r,o=r
- name: Touch the same file, but add/remove some permissions
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
- name: Touch the same file again, but do not change times; this makes the task idempotent
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
modification_time: preserve
access_time: preserve
- name: Create a directory if it does not exist
ansible.builtin.file:
path: /etc/some_directory
state: directory
mode: '0755'
- name: Update modification and access time of given file
ansible.builtin.file:
path: /etc/some_file
state: file
modification_time: now
access_time: now
- name: Set access time based on seconds from epoch value
ansible.builtin.file:
path: /etc/another_file
state: file
access_time: '{{ "%Y%m%d%H%M.%S" | strftime(stat_var.stat.atime) }}'
- name: Recursively change ownership of a directory
ansible.builtin.file:
path: /etc/foo
state: directory
recurse: yes
owner: foo
group: foo
- name: Remove file (delete file)
ansible.builtin.file:
path: /etc/foo.txt
state: absent
- name: Recursively remove directory
ansible.builtin.file:
path: /etc/foo
state: absent
'''
RETURN = r'''
dest:
description: Destination file/path, equal to the value passed to I(path).
returned: state=touch, state=hard, state=link
type: str
sample: /path/to/file.txt
path:
description: Destination file/path, equal to the value passed to I(path).
returned: state=absent, state=directory, state=file
type: str
sample: /path/to/file.txt
'''
import errno
import os
import shutil
import sys
import time
from pwd import getpwnam, getpwuid
from grp import getgrnam, getgrgid
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
# There will only be a single AnsibleModule object per module
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
def __repr__(self):
return 'AnsibleModuleError(results={0})'.format(self.results)
class ParameterError(AnsibleModuleError):
pass
class Sentinel(object):
def __new__(cls, *args, **kwargs):
return cls
def _ansible_excepthook(exc_type, exc_value, tb):
# Using an exception allows us to catch it if the calling code knows it can recover
if issubclass(exc_type, AnsibleModuleError):
module.fail_json(**exc_value.results)
else:
sys.__excepthook__(exc_type, exc_value, tb)
def additional_parameter_handling(params):
"""Additional parameter validation and reformatting"""
# When path is a directory, rewrite the pathname to be the file inside of the directory
# TODO: Why do we exclude link? Why don't we exclude directory? Should we exclude touch?
# I think this is where we want to be in the future:
# when isdir(path):
# if state == absent: Remove the directory
# if state == touch: Touch the directory
# if state == directory: Assert the directory is the same as the one specified
# if state == file: place inside of the directory (use _original_basename)
# if state == link: place inside of the directory (use _original_basename. Fallback to src?)
# if state == hard: place inside of the directory (use _original_basename. Fallback to src?)
if (params['state'] not in ("link", "absent") and os.path.isdir(to_bytes(params['path'], errors='surrogate_or_strict'))):
basename = None
if params['_original_basename']:
basename = params['_original_basename']
elif params['src']:
basename = os.path.basename(params['src'])
if basename:
params['path'] = os.path.join(params['path'], basename)
# state should default to file, but since that creates many conflicts,
# default state to 'current' when it exists.
prev_state = get_state(to_bytes(params['path'], errors='surrogate_or_strict'))
if params['state'] is None:
if prev_state != 'absent':
params['state'] = prev_state
elif params['recurse']:
params['state'] = 'directory'
else:
params['state'] = 'file'
# make sure the target path is a directory when we're doing a recursive operation
if params['recurse'] and params['state'] != 'directory':
raise ParameterError(results={"msg": "recurse option requires state to be 'directory'",
"path": params["path"]})
# Fail if 'src' but no 'state' is specified
if params['src'] and params['state'] not in ('link', 'hard'):
raise ParameterError(results={'msg': "src option requires state to be 'link' or 'hard'",
'path': params['path']})
def get_state(path):
''' Find out current state '''
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
if os.path.lexists(b_path):
if os.path.islink(b_path):
return 'link'
elif os.path.isdir(b_path):
return 'directory'
elif os.stat(b_path).st_nlink > 1:
return 'hard'
# could be many other things, but defaulting to file
return 'file'
return 'absent'
except OSError as e:
if e.errno == errno.ENOENT: # It may already have been removed
return 'absent'
else:
raise
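# Illustration (paths assumed, not part of the module): for a link such as
# /sbin -> usr/sbin, get_state('/sbin') returns 'link'; a regular file whose
# st_nlink is greater than 1 returns 'hard'; a missing path yields 'absent'.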
# This should be moved into the common file utilities
def recursive_set_attributes(b_path, follow, file_args, mtime, atime):
changed = False
try:
for b_root, b_dirs, b_files in os.walk(b_path):
for b_fsobj in b_dirs + b_files:
b_fsname = os.path.join(b_root, b_fsobj)
if not os.path.islink(b_fsname):
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
else:
# Change perms on the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
if follow:
b_fsname = os.path.join(b_root, os.readlink(b_fsname))
# The link target could be nonexistent
if os.path.exists(b_fsname):
if os.path.isdir(b_fsname):
# Link is a directory so change perms on the directory's contents
changed |= recursive_set_attributes(b_fsname, follow, file_args, mtime, atime)
# Change perms on the file pointed to by the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
except RuntimeError as e:
# on Python3 "RecursionError" is raised which is derived from "RuntimeError"
# TODO once this function is moved into the common file utilities, this should probably raise more general exception
raise AnsibleModuleError(
results={'msg': "Could not recursively set attributes on %s. Original error was: '%s'" % (to_native(b_path), to_native(e))}
)
return changed
def initial_diff(path, state, prev_state):
diff = {'before': {'path': path},
'after': {'path': path},
}
if prev_state != state:
diff['before']['state'] = prev_state
diff['after']['state'] = state
if state == 'absent' and prev_state == 'directory':
walklist = {
'directories': [],
'files': [],
}
b_path = to_bytes(path, errors='surrogate_or_strict')
for base_path, sub_folders, files in os.walk(b_path):
for folder in sub_folders:
folderpath = os.path.join(base_path, folder)
walklist['directories'].append(folderpath)
for filename in files:
filepath = os.path.join(base_path, filename)
walklist['files'].append(filepath)
diff['before']['path_content'] = walklist
return diff
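# Example shape (illustrative values): initial_diff('/tmp/x', 'absent', 'file')
# returns {'before': {'path': '/tmp/x', 'state': 'file'},
#          'after': {'path': '/tmp/x', 'state': 'absent'}}; for a directory being
# removed, the walked contents are additionally recorded under before.path_content.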
#
# States
#
def get_timestamp_for_time(formatted_time, time_format):
if formatted_time == 'preserve':
return None
elif formatted_time == 'now':
return Sentinel
else:
try:
struct = time.strptime(formatted_time, time_format)
struct_time = time.mktime(struct)
except (ValueError, OverflowError) as e:
raise AnsibleModuleError(results={'msg': 'Error while obtaining timestamp for time %s using format %s: %s'
% (formatted_time, time_format, to_native(e, nonstring='simplerepr'))})
return struct_time
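# Illustration (value assumed): get_timestamp_for_time('202102141230.00', '%Y%m%d%H%M.%S')
# returns the matching local-time epoch as a float, while 'preserve' maps to None and
# 'now' maps to the Sentinel class; both special values are interpreted by
# update_timestamp_for_file() below.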
def update_timestamp_for_file(path, mtime, atime, diff=None):
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
# When mtime and atime are set to 'now', rely on utime(path, None) which does not require ownership of the file
# https://github.com/ansible/ansible/issues/50943
if mtime is Sentinel and atime is Sentinel:
# It's not exact but we can't rely on os.stat(path).st_mtime after setting os.utime(path, None) as it may
# not be updated. Just use the current time for the diff values
mtime = atime = time.time()
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
set_time = None
else:
# If both parameters are None 'preserve', nothing to do
if mtime is None and atime is None:
return False
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
if mtime is None:
mtime = previous_mtime
elif mtime is Sentinel:
mtime = time.time()
if atime is None:
atime = previous_atime
elif atime is Sentinel:
atime = time.time()
# If both timestamps are already ok, nothing to do
if mtime == previous_mtime and atime == previous_atime:
return False
set_time = (atime, mtime)
os.utime(b_path, set_time)
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
if 'after' not in diff:
diff['after'] = {}
if mtime != previous_mtime:
diff['before']['mtime'] = previous_mtime
diff['after']['mtime'] = mtime
if atime != previous_atime:
diff['before']['atime'] = previous_atime
diff['after']['atime'] = atime
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while updating modification or access time: %s'
% to_native(e, nonstring='simplerepr'), 'path': path})
return True
def keep_backward_compatibility_on_timestamps(parameter, state):
if state in ['file', 'hard', 'directory', 'link'] and parameter is None:
return 'preserve'
elif state == 'touch' and parameter is None:
return 'now'
else:
return parameter
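# Examples: (None, 'touch') -> 'now'; (None, 'file') -> 'preserve'; an explicit value
# such as '202101010000.00' is passed through unchanged.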
def execute_diff_peek(path):
"""Take a guess as to whether a file is a binary file"""
b_path = to_bytes(path, errors='surrogate_or_strict')
appears_binary = False
try:
with open(b_path, 'rb') as f:
head = f.read(8192)
except Exception:
# If we can't read the file, we're okay assuming it's text
pass
else:
if b"\x00" in head:
appears_binary = True
return appears_binary
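# The heuristic is the common one: a NUL byte within the first 8 KiB is taken as
# evidence of binary content, and unreadable files are optimistically treated as text.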
def ensure_absent(path):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
result = {}
if prev_state != 'absent':
diff = initial_diff(path, 'absent', prev_state)
if not module.check_mode:
if prev_state == 'directory':
try:
shutil.rmtree(b_path, ignore_errors=False)
except Exception as e:
raise AnsibleModuleError(results={'msg': "rmtree failed: %s" % to_native(e)})
else:
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise AnsibleModuleError(results={'msg': "unlinking failed: %s " % to_native(e),
'path': path})
result.update({'path': path, 'changed': True, 'diff': diff, 'state': 'absent'})
else:
result.update({'path': path, 'changed': False, 'state': 'absent'})
return result
def execute_touch(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
changed = False
result = {'dest': path}
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if not module.check_mode:
if prev_state == 'absent':
# Create an empty file if the filename did not already exist
try:
open(b_path, 'wb').close()
changed = True
except (OSError, IOError) as e:
raise AnsibleModuleError(results={'msg': 'Error, could not touch target: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
# Update the attributes on the file
diff = initial_diff(path, 'touch', prev_state)
file_args = module.load_file_common_arguments(module.params)
try:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except SystemExit as e:
if e.code: # this is the exit code passed to sys.exit, not a constant -- pylint: disable=using-constant-test
# We take this to mean that fail_json() was called from
# somewhere in basic.py
if prev_state == 'absent':
# If we just created the file we can safely remove it
os.remove(b_path)
raise
result['changed'] = changed
result['diff'] = diff
return result
def ensure_file_attributes(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if prev_state != 'file':
if follow and prev_state == 'link':
# follow symlink and operate on original
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
prev_state = get_state(b_path)
file_args['path'] = path
if prev_state not in ('file', 'hard'):
# file is not absent and any other state is a conflict
raise AnsibleModuleError(results={'msg': 'file (%s) is %s, cannot continue' % (path, prev_state),
'path': path, 'state': prev_state})
diff = initial_diff(path, 'file', prev_state)
changed = module.set_fs_attributes_if_different(file_args, False, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_directory(path, follow, recurse, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# For followed symlinks, we need to operate on the target of the link
if follow and prev_state == 'link':
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
file_args['path'] = path
prev_state = get_state(b_path)
changed = False
diff = initial_diff(path, 'directory', prev_state)
if prev_state == 'absent':
# Create directory and assign permissions to it
if module.check_mode:
return {'path': path, 'changed': True, 'diff': diff}
curpath = ''
try:
# Split the path so we can apply filesystem attributes recursively
# from the root (/) directory for absolute paths or the base path
# of a relative path. We can then walk the appropriate directory
# path to apply attributes.
# Something like mkdir -p with mode applied to all of the newly created directories
for dirname in path.strip('/').split('/'):
curpath = '/'.join([curpath, dirname])
# Remove leading slash if we're creating a relative path
if not os.path.isabs(path):
curpath = curpath.lstrip('/')
b_curpath = to_bytes(curpath, errors='surrogate_or_strict')
if not os.path.exists(b_curpath):
try:
os.mkdir(b_curpath)
changed = True
except OSError as ex:
# Possibly something else created the dir since the os.path.exists
# check above. As long as it's a dir, we don't need to error out.
if not (ex.errno == errno.EEXIST and os.path.isdir(b_curpath)):
raise
tmp_file_args = file_args.copy()
tmp_file_args['path'] = curpath
changed = module.set_fs_attributes_if_different(tmp_file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except Exception as e:
raise AnsibleModuleError(results={'msg': 'There was an issue creating %s as requested:'
' %s' % (curpath, to_native(e)),
'path': path})
return {'path': path, 'changed': changed, 'diff': diff}
elif prev_state != 'directory':
# We already know prev_state is not 'absent', therefore it exists in some form.
raise AnsibleModuleError(results={'msg': '%s already exists as a %s' % (path, prev_state),
'path': path})
#
# previous state == directory
#
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
if recurse:
changed |= recursive_set_attributes(b_path, follow, file_args, mtime, atime)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_symlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
    # src is either the target of the symlink or informational data passed along by the
    # template/copy modules; even if this module never uses it, it is needed to key off some things
if src is None:
if follow:
# use the current target of the link as the source
src = to_native(os.path.realpath(b_path), errors='strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
if not os.path.islink(b_path) and os.path.isdir(b_path):
relpath = path
else:
b_relpath = os.path.dirname(b_path)
relpath = to_native(b_relpath, errors='strict')
absrc = os.path.join(relpath, src)
b_absrc = to_bytes(absrc, errors='surrogate_or_strict')
if not force and not os.path.exists(b_absrc):
raise AnsibleModuleError(results={'msg': 'src file does not exist, use "force=yes" if you'
' really want to create the link: %s' % absrc,
'path': path, 'src': src})
if prev_state == 'directory':
if not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
elif os.listdir(b_path):
# refuse to replace a directory that has files in it
raise AnsibleModuleError(results={'msg': 'the directory %s is not empty, refusing to'
' convert it' % path,
'path': path})
elif prev_state in ('file', 'hard') and not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
diff = initial_diff(path, 'link', prev_state)
changed = False
if prev_state in ('hard', 'file', 'directory', 'absent'):
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
os.rmdir(b_path)
os.symlink(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.symlink(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
# Now that we might have created the symlink, get the arguments.
# We need to do it now so we can properly follow the symlink if needed
# because load_file_common_arguments sets 'path' according
# the value of follow and the symlink existence.
file_args = module.load_file_common_arguments(module.params)
# Whenever we create a link to a nonexistent target we know that the nonexistent target
# cannot have any permissions set on it. Skip setting those and emit a warning (the user
# can set follow=False to remove the warning)
if follow and os.path.islink(b_path) and not os.path.exists(file_args['path']):
module.warn('Cannot set fs attributes on a non-existent symlink target. follow should be'
' set to False to avoid this.')
else:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
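# Note on the atomic replace used by ensure_symlink() and ensure_hardlink(): an existing
# destination is swapped by creating a uniquely named temporary link next to it and
# rename()-ing it into place, so the destination never transiently disappears
# (rename is atomic within the same filesystem).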
def ensure_hardlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# src is the source of a hardlink. We require it if we are creating a new hardlink.
# We require path in the argument_spec so we know it is present at this point.
if src is None:
raise AnsibleModuleError(results={'msg': 'src is required for creating new hardlinks'})
if not os.path.exists(b_src):
raise AnsibleModuleError(results={'msg': 'src does not exist', 'dest': path, 'src': src})
diff = initial_diff(path, 'hard', prev_state)
changed = False
if prev_state == 'absent':
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
elif prev_state == 'hard':
if not os.stat(b_path).st_ino == os.stat(b_src).st_ino:
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, different hard link exists at destination',
'dest': path, 'src': src})
elif prev_state == 'file':
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, %s exists at destination' % prev_state,
'dest': path, 'src': src})
elif prev_state == 'directory':
changed = True
if os.path.exists(b_path):
if os.stat(b_path).st_ino == os.stat(b_src).st_ino:
return {'path': path, 'changed': False}
elif not force:
raise AnsibleModuleError(results={'msg': 'Cannot link: different hard link exists at destination',
'dest': path, 'src': src})
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
if os.path.exists(b_path):
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise
os.link(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.link(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def check_owner_exists(module, owner):
try:
uid = int(owner)
try:
getpwuid(uid).pw_name
except KeyError:
            module.warn('failed to look up user with uid %s; create the user before this point in a real (non-check) run' % uid)
except ValueError:
try:
getpwnam(owner).pw_uid
except KeyError:
            module.warn('failed to look up user %s; create the user before this point in a real (non-check) run' % owner)
def check_group_exists(module, group):
try:
gid = int(group)
try:
getgrgid(gid).gr_name
except KeyError:
            module.warn('failed to look up group with gid %s; create the group before this point in a real (non-check) run' % gid)
except ValueError:
try:
getgrnam(group).gr_gid
except KeyError:
            module.warn('failed to look up group %s; create the group before this point in a real (non-check) run' % group)
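# These two helpers run only in check mode (see main()); they accept either a name or
# a numeric id and warn rather than fail, so a user/group created earlier in the same
# play does not break --check runs.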
def main():
global module
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', choices=['absent', 'directory', 'file', 'hard', 'link', 'touch']),
path=dict(type='path', required=True, aliases=['dest', 'name']),
_original_basename=dict(type='str'), # Internal use only, for recursive ops
recurse=dict(type='bool', default=False),
force=dict(type='bool', default=False), # Note: Should not be in file_common_args in future
follow=dict(type='bool', default=True), # Note: Different default than file_common_args
_diff_peek=dict(type='bool'), # Internal use only, for internal checks in the action plugins
src=dict(type='path'), # Note: Should not be in file_common_args in future
modification_time=dict(type='str'),
modification_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
access_time=dict(type='str'),
access_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
),
add_file_common_args=True,
supports_check_mode=True,
)
# When we rewrite basic.py, we will do something similar to this on instantiating an AnsibleModule
sys.excepthook = _ansible_excepthook
additional_parameter_handling(module.params)
params = module.params
state = params['state']
recurse = params['recurse']
force = params['force']
follow = params['follow']
path = params['path']
src = params['src']
if module.check_mode and state != 'absent':
file_args = module.load_file_common_arguments(module.params)
if file_args['owner']:
check_owner_exists(module, file_args['owner'])
if file_args['group']:
check_group_exists(module, file_args['group'])
timestamps = {}
timestamps['modification_time'] = keep_backward_compatibility_on_timestamps(params['modification_time'], state)
timestamps['modification_time_format'] = params['modification_time_format']
timestamps['access_time'] = keep_backward_compatibility_on_timestamps(params['access_time'], state)
timestamps['access_time_format'] = params['access_time_format']
# short-circuit for diff_peek
if params['_diff_peek'] is not None:
appears_binary = execute_diff_peek(to_bytes(path, errors='surrogate_or_strict'))
module.exit_json(path=path, changed=False, appears_binary=appears_binary)
if state == 'file':
result = ensure_file_attributes(path, follow, timestamps)
elif state == 'directory':
result = ensure_directory(path, follow, recurse, timestamps)
elif state == 'link':
result = ensure_symlink(path, src, follow, force, timestamps)
elif state == 'hard':
result = ensure_hardlink(path, src, follow, force, timestamps)
elif state == 'touch':
result = execute_touch(path, follow, timestamps)
elif state == 'absent':
result = ensure_absent(path)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,448 |
The module `file` changes the relative `path` of a symlink to absolute
|
### SUMMARY
The `file` module rewrites the target of a relative symlink to an absolute path.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
file
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /Users/robertguy/.ansible.cfg
configured module search path = [u'/Users/robertguy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
```
# (empty)
```
##### OS / ENVIRONMENT
controller: mac
target: centos:8
##### STEPS TO REPRODUCE
Prepare a container:
```
# Start a container:
docker run -ti centos:8 /bin/bash
# Verify /sbin:
ls -ld /sbin
lrwxrwxrwx 1 root root 8 May 11 2019 /sbin -> usr/sbin
```
Run ansible on the container:
```
ansible -i $(docker ps -ql), -c docker all -m file -a "path=/sbin mode=o-w" -vvv
...
"changed": true,
"dest": "/sbin",
"diff": {
"after": {
"path": "/sbin",
"src": "/usr/sbin"
},
"before": {
"path": "/sbin",
"src": "usr/sbin"
}
},
...
# Verify /sbin again:
ls -ld /sbin
lrwxrwxrwx 1 root root 9 Dec 3 08:13 /sbin -> /usr/sbin
```
##### EXPECTED RESULTS
I was hoping that `file` would follow the symlink '/sbin' and change the target only. In the example above you can see that the link itself is modified.
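The mechanism is visible in the module source: with `src` unset and `follow=yes`, `ensure_symlink()` falls back to `os.path.realpath()`, which always yields an absolute path. A minimal standalone sketch of that fallback (illustrative temp paths, not the module code itself):
```python
import os
import tempfile

d = tempfile.mkdtemp()
os.symlink('usr/sbin', os.path.join(d, 'sbin'))   # relative target, as in centos:8
print(os.path.realpath(os.path.join(d, 'sbin')))  # absolute path the module adopts as src
```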
##### ACTUAL RESULTS
```
ansible -i $(docker ps -ql), -c docker all -m file -a "path=/sbin mode=o-w follow=yes" -vvv
ansible 2.9.1
config file = /Users/robertguy/.ansible.cfg
configured module search path = [u'/Users/robertguy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
Using /Users/robertguy/.ansible.cfg as config file
Parsed de83dc6fa5ee, inventory source with host_list plugin
META: ran handlers
<de83dc6fa5ee> ESTABLISH DOCKER CONNECTION FOR USER: root
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'echo ~ && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u'/bin/sh -c \'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252 `" && echo ansible-tmp-1575362596.14-246033153918252="` echo /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252 `" ) && sleep 0\'']
<de83dc6fa5ee> Attempting python interpreter discovery
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u'/bin/sh -c \'echo PLATFORM; uname; echo FOUND; command -v \'"\'"\'/usr/bin/python\'"\'"\'; command -v \'"\'"\'python3.7\'"\'"\'; command -v \'"\'"\'python3.6\'"\'"\'; command -v \'"\'"\'python3.5\'"\'"\'; command -v \'"\'"\'python2.7\'"\'"\'; command -v \'"\'"\'python2.6\'"\'"\'; command -v \'"\'"\'/usr/libexec/platform-python\'"\'"\'; command -v \'"\'"\'/usr/bin/python3\'"\'"\'; command -v \'"\'"\'python\'"\'"\'; echo ENDFOUND && sleep 0\'']
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c '/usr/libexec/platform-python && sleep 0'"]
Using module file /Library/Python/2.7/site-packages/ansible/modules/files/file.py
<de83dc6fa5ee> PUT /Users/robertguy/.ansible/tmp/ansible-local-18934l8TrvG/tmp_uA5WD TO /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/ /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/ > /dev/null 2>&1 && sleep 0'"]
de83dc6fa5ee | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": true,
"dest": "/sbin",
"diff": {
"after": {
"path": "/sbin",
"src": "/usr/sbin"
},
"before": {
"path": "/sbin",
"src": "usr/sbin"
}
},
"gid": 0,
"group": "root",
"invocation": {
"module_args": {
"_diff_peek": null,
"_original_basename": null,
"access_time": null,
"access_time_format": "%Y%m%d%H%M.%S",
"attributes": null,
"backup": null,
"content": null,
"delimiter": null,
"directory_mode": null,
"follow": true,
"force": false,
"group": null,
"mode": "o-w",
"modification_time": null,
"modification_time_format": "%Y%m%d%H%M.%S",
"owner": null,
"path": "/sbin",
"recurse": false,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "link",
"unsafe_writes": null
}
},
"mode": "0777",
"owner": "root",
"size": 9,
"src": "/usr/sbin",
"state": "link",
"uid": 0
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/65448
|
https://github.com/ansible/ansible/pull/73700
|
176beddb3f8d4c4774da1286c712936a0e859e10
|
e804fccf1c3fc94f35fac42ec8980eea0b431aa6
| 2019-12-03T08:46:59Z |
python
| 2021-03-01T14:14:03Z |
test/integration/targets/file/tasks/link_rewrite.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,448 |
The module `file` changes the relative `path` of a symlink to absolute
|
### SUMMARY
The `file` module rewrites the target of a relative symlink to an absolute path.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
file
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /Users/robertguy/.ansible.cfg
configured module search path = [u'/Users/robertguy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
```
# (empty)
```
##### OS / ENVIRONMENT
controller: mac
target: centos:8
##### STEPS TO REPRODUCE
Prepare a container:
```
# Start a container:
docker run -ti centos:8 /bin/bash
# Verify /sbin:
ls -ld /sbin
lrwxrwxrwx 1 root root 8 May 11 2019 /sbin -> usr/sbin
```
Run ansible on the container:
```
ansible -i $(docker ps -ql), -c docker all -m file -a "path=/sbin mode=o-w" -vvv
...
"changed": true,
"dest": "/sbin",
"diff": {
"after": {
"path": "/sbin",
"src": "/usr/sbin"
},
"before": {
"path": "/sbin",
"src": "usr/sbin"
}
},
...
# Verify /sbin again:
ls -ld /sbin
lrwxrwxrwx 1 root root 9 Dec 3 08:13 /sbin -> /usr/sbin
```
##### EXPECTED RESULTS
I was hoping that `file` would follow the symlink '/sbin' and change the target only. In the example above you can see that the link itself is modified.
##### ACTUAL RESULTS
```
ansible -i $(docker ps -ql), -c docker all -m file -a "path=/sbin mode=o-w follow=yes" -vvv
ansible 2.9.1
config file = /Users/robertguy/.ansible.cfg
configured module search path = [u'/Users/robertguy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
Using /Users/robertguy/.ansible.cfg as config file
Parsed de83dc6fa5ee, inventory source with host_list plugin
META: ran handlers
<de83dc6fa5ee> ESTABLISH DOCKER CONNECTION FOR USER: root
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'echo ~ && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u'/bin/sh -c \'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252 `" && echo ansible-tmp-1575362596.14-246033153918252="` echo /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252 `" ) && sleep 0\'']
<de83dc6fa5ee> Attempting python interpreter discovery
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u'/bin/sh -c \'echo PLATFORM; uname; echo FOUND; command -v \'"\'"\'/usr/bin/python\'"\'"\'; command -v \'"\'"\'python3.7\'"\'"\'; command -v \'"\'"\'python3.6\'"\'"\'; command -v \'"\'"\'python3.5\'"\'"\'; command -v \'"\'"\'python2.7\'"\'"\'; command -v \'"\'"\'python2.6\'"\'"\'; command -v \'"\'"\'/usr/libexec/platform-python\'"\'"\'; command -v \'"\'"\'/usr/bin/python3\'"\'"\'; command -v \'"\'"\'python\'"\'"\'; echo ENDFOUND && sleep 0\'']
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c '/usr/libexec/platform-python && sleep 0'"]
Using module file /Library/Python/2.7/site-packages/ansible/modules/files/file.py
<de83dc6fa5ee> PUT /Users/robertguy/.ansible/tmp/ansible-local-18934l8TrvG/tmp_uA5WD TO /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/ /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/AnsiballZ_file.py && sleep 0'"]
<de83dc6fa5ee> EXEC ['/usr/local/bin/docker', 'exec', '-i', u'de83dc6fa5ee', u'/bin/sh', '-c', u"/bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1575362596.14-246033153918252/ > /dev/null 2>&1 && sleep 0'"]
de83dc6fa5ee | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": true,
"dest": "/sbin",
"diff": {
"after": {
"path": "/sbin",
"src": "/usr/sbin"
},
"before": {
"path": "/sbin",
"src": "usr/sbin"
}
},
"gid": 0,
"group": "root",
"invocation": {
"module_args": {
"_diff_peek": null,
"_original_basename": null,
"access_time": null,
"access_time_format": "%Y%m%d%H%M.%S",
"attributes": null,
"backup": null,
"content": null,
"delimiter": null,
"directory_mode": null,
"follow": true,
"force": false,
"group": null,
"mode": "o-w",
"modification_time": null,
"modification_time_format": "%Y%m%d%H%M.%S",
"owner": null,
"path": "/sbin",
"recurse": false,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "link",
"unsafe_writes": null
}
},
"mode": "0777",
"owner": "root",
"size": 9,
"src": "/usr/sbin",
"state": "link",
"uid": 0
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/65448
|
https://github.com/ansible/ansible/pull/73700
|
176beddb3f8d4c4774da1286c712936a0e859e10
|
e804fccf1c3fc94f35fac42ec8980eea0b431aa6
| 2019-12-03T08:46:59Z |
python
| 2021-03-01T14:14:03Z |
test/integration/targets/file/tasks/state_link.yml
|
# file module tests for dealing with symlinks (state=link)
- name: Initialize the test output dir
include: initialize.yml
#
# Basic absolute symlink to a file
#
- name: create soft link to file
file: src={{output_file}} dest={{output_dir}}/soft.txt state=link
register: file1_result
- name: Get stat info for the link
stat:
path: '{{ output_dir }}/soft.txt'
follow: False
register: file1_link_stat
- name: verify that the symlink was created correctly
assert:
that:
- 'file1_result is changed'
- 'file1_link_stat["stat"].islnk'
- 'file1_link_stat["stat"].lnk_target | expanduser == output_file | expanduser'
#
# Change an absolute soft link into a relative soft link
#
- name: change soft link to relative
file: src={{output_file|basename}} dest={{output_dir}}/soft.txt state=link
register: file2_result
- name: Get stat info for the link
stat:
path: '{{ output_dir }}/soft.txt'
follow: False
register: file2_link_stat
- name: verify that the file was marked as changed
assert:
that:
- "file2_result is changed"
- "file2_result.diff.before.src == remote_file_expanded"
- "file2_result.diff.after.src == remote_file_expanded|basename"
- "file2_link_stat['stat'].islnk"
- "file2_link_stat['stat'].lnk_target == remote_file_expanded | basename"
#
# Check that creating the soft link a second time was idempotent
#
- name: soft link idempotency check
file: src={{output_file|basename}} dest={{output_dir}}/soft.txt state=link
register: file3_result
- name: Get stat info for the link
stat:
path: '{{ output_dir }}/soft.txt'
follow: False
register: file3_link_stat
- name: verify that the file was not marked as changed
assert:
that:
- "not file3_result is changed"
- "file3_link_stat['stat'].islnk"
- "file3_link_stat['stat'].lnk_target == remote_file_expanded | basename"
#
# Test symlink to nonexistent files
#
- name: fail to create soft link to non existent file
file:
src: '/nonexistent'
dest: '{{output_dir}}/soft2.txt'
state: 'link'
force: False
register: file4_result
ignore_errors: true
- name: verify that link was not created
assert:
that:
- "file4_result is failed"
- name: force creation soft link to non existent
file:
src: '/nonexistent'
dest: '{{ output_dir}}/soft2.txt'
state: 'link'
force: True
register: file5_result
- name: Get stat info for the link
stat:
path: '{{ output_dir }}/soft2.txt'
follow: False
register: file5_link_stat
- name: verify that link was created
assert:
that:
- "file5_result is changed"
- "file5_link_stat['stat'].islnk"
- "file5_link_stat['stat'].lnk_target == '/nonexistent'"
- name: Prove idempotence of force creation soft link to non existent
file:
src: '/nonexistent'
dest: '{{ output_dir }}/soft2.txt'
state: 'link'
force: True
register: file6a_result
- name: verify that the link to nonexistent is idempotent
assert:
that:
- "file6a_result.changed == false"
# In order for a symlink in a sticky world writable directory to be followed, it must
# either be owned by the follower,
# or the directory and symlink must have the same owner.
- name: symlink in sticky directory
block:
- name: Create remote unprivileged remote user
user:
name: '{{ remote_unprivileged_user }}'
register: user
- name: Create a local temporary directory
tempfile:
state: directory
register: tempdir
- name: Set sticky bit
file:
path: '{{ tempdir.path }}'
mode: o=rwXt
- name: 'Check mode: force creation soft link in sticky directory owned by another user (mode is used)'
file:
src: '{{ user.home }}/nonexistent'
dest: '{{ tempdir.path }}/soft3.txt'
mode: 0640
state: 'link'
owner: '{{ remote_unprivileged_user }}'
force: true
follow: false
check_mode: true
register: missing_dst_no_follow_enable_force_use_mode1
- name: force creation soft link in sticky directory owned by another user (mode is used)
file:
src: '{{ user.home }}/nonexistent'
dest: '{{ tempdir.path }}/soft3.txt'
mode: 0640
state: 'link'
owner: '{{ remote_unprivileged_user }}'
force: true
follow: false
register: missing_dst_no_follow_enable_force_use_mode2
- name: Get stat info for the link
stat:
path: '{{ tempdir.path }}/soft3.txt'
follow: false
register: soft3_result
- name: 'Idempotence: force creation soft link in sticky directory owned by another user (mode is used)'
file:
src: '{{ user.home }}/nonexistent'
dest: '{{ tempdir.path }}/soft3.txt'
mode: 0640
state: 'link'
owner: '{{ remote_unprivileged_user }}'
force: yes
follow: false
register: missing_dst_no_follow_enable_force_use_mode3
always:
- name: Delete remote unprivileged remote user
user:
name: '{{ remote_unprivileged_user }}'
state: absent
- name: Delete unprivileged user home and tempdir
file:
path: "{{ itemΒ }}"
state: absent
loop:
- '{{ tempdir.path }}'
        - '{{ user.home }}'
- name: verify that link was created
assert:
that:
- "missing_dst_no_follow_enable_force_use_mode1 is changed"
- "missing_dst_no_follow_enable_force_use_mode2 is changed"
- "missing_dst_no_follow_enable_force_use_mode3 is not changed"
- "soft3_result['stat'].islnk"
- "soft3_result['stat'].lnk_target == '{{ user.homeΒ }}/nonexistent'"
#
# Test creating a link to a directory https://github.com/ansible/ansible/issues/1369
#
- name: create soft link to directory using absolute path
file:
src: '/'
dest: '{{ output_dir }}/root'
state: 'link'
register: file6_result
- name: Get stat info for the link
stat:
path: '{{ output_dir }}/root'
follow: False
register: file6_link_stat
- name: Get stat info for the pointed to file
stat:
path: '{{ output_dir }}/root'
follow: True
register: file6_links_dest_stat
- name: Get stat info for the file we intend to point to
stat:
path: '/'
follow: False
register: file6_dest_stat
- name: verify that the link was created correctly
assert:
that:
# file command reports it created something
- "file6_result.changed == true"
# file command created a link
- 'file6_link_stat["stat"]["islnk"]'
# Link points to the right path
- 'file6_link_stat["stat"]["lnk_target"] == "/"'
# The link target and the file we intended to link to have the same inode
- 'file6_links_dest_stat["stat"]["inode"] == file6_dest_stat["stat"]["inode"]'
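# Note: lnk_target is reported exactly as stored in the link (here "/"), while the
# follow=True stat resolves it; the inode comparison proves both stats reached the
# same filesystem object.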
#
# Test creating a relative link
#
# Relative link to file
- name: create a test sub-directory to link to
file:
dest: '{{ output_dir }}/sub1'
state: 'directory'
- name: create a file to link to in the test sub-directory
file:
dest: '{{ output_dir }}/sub1/file1'
state: 'touch'
- name: create another test sub-directory to place links within
file:
dest: '{{output_dir}}/sub2'
state: 'directory'
- name: create soft link to relative file
file:
src: '../sub1/file1'
dest: '{{ output_dir }}/sub2/link1'
state: 'link'
register: file7_result
- name: Get stat info for the link
stat:
path: '{{ output_dir }}/sub2/link1'
follow: False
register: file7_link_stat
- name: Get stat info for the pointed to file
stat:
path: '{{ output_dir }}/sub2/link1'
follow: True
register: file7_links_dest_stat
- name: Get stat info for the file we intend to point to
stat:
path: '{{ output_dir }}/sub1/file1'
follow: False
register: file7_dest_stat
- name: verify that the link was created correctly
assert:
that:
# file command reports it created something
- "file7_result.changed == true"
# file command created a link
- 'file7_link_stat["stat"]["islnk"]'
# Link points to the right path
- 'file7_link_stat["stat"]["lnk_target"] == "../sub1/file1"'
# The link target and the file we intended to link to have the same inode
- 'file7_links_dest_stat["stat"]["inode"] == file7_dest_stat["stat"]["inode"]'
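# A relative lnk_target ("../sub1/file1") resolves from the directory containing the
# link (sub2/), matching how `ln -s SRC DEST` treats relative sources.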
# Relative link to directory
- name: create soft link to relative directory
file:
src: sub1
dest: '{{ output_dir }}/sub1-link'
state: 'link'
register: file8_result
- name: Get stat info for the link
stat:
path: '{{ output_dir }}/sub1-link'
follow: False
register: file8_link_stat
- name: Get stat info for the pointed to file
stat:
path: '{{ output_dir }}/sub1-link'
follow: True
register: file8_links_dest_stat
- name: Get stat info for the file we intend to point to
stat:
path: '{{ output_dir }}/sub1'
follow: False
register: file8_dest_stat
- name: verify that the link was created correctly
assert:
that:
# file command reports it created something
- "file8_result.changed == true"
# file command created a link
- 'file8_link_stat["stat"]["islnk"]'
# Link points to the right path
- 'file8_link_stat["stat"]["lnk_target"] == "sub1"'
# The link target and the file we intended to link to have the same inode
- 'file8_links_dest_stat["stat"]["inode"] == file8_dest_stat["stat"]["inode"]'
# test the file module using follow=yes, so that the target of a
# symlink is modified, rather than the link itself
- name: create a test file
copy:
dest: '{{output_dir}}/test_follow'
content: 'this is a test file\n'
mode: 0666
- name: create a symlink to the test file
file:
path: '{{output_dir}}/test_follow_link'
src: './test_follow'
state: 'link'
- name: modify the permissions on the link using follow=yes
file:
path: '{{output_dir}}/test_follow_link'
mode: 0644
follow: yes
register: file9_result
- name: stat the link target
stat:
path: '{{output_dir}}/test_follow'
register: file9_stat
- name: assert that the chmod worked
assert:
that:
- 'file9_result is changed'
- 'file9_stat["stat"]["mode"] == "0644"'
#
# Test modifying the permissions of a link itself
#
- name: attempt to modify the permissions of the link itself
file:
path: '{{output_dir}}/test_follow_link'
src: './test_follow'
state: 'link'
mode: 0600
follow: False
register: file10_result
# Whether the link itself changed is platform dependent! (BSD vs Linux?)
# Just check that the underlying file was not changed
- name: stat the link target
stat:
path: '{{output_dir}}/test_follow'
register: file10_target_stat
- name: assert that the link target was unmodified
assert:
that:
- 'file10_result is changed'
- 'file10_target_stat["stat"]["mode"] == "0644"'
# https://github.com/ansible/ansible/issues/56928
- block:
- name: Create a testing file
file:
path: "{{ output_dir }}/test_follow1"
state: touch
- name: Create a symlink and change mode of the original file, since follow == yes by default
file:
src: "{{ output_dir }}/test_follow1"
dest: "{{ output_dir }}/test_follow1_link"
state: link
mode: 0700
- name: stat the original file
stat:
path: "{{ output_dir }}/test_follow1"
register: stat_out
- name: Check if the mode of the original file was set
assert:
that:
- 'stat_out.stat.mode == "0700"'
always:
- name: Clean up
file:
path: "{{ item }}"
state: absent
loop:
- "{{ output_dir }}/test_follow1"
- "{{ output_dir }}/test_follow1_link"
# END #56928
# Test failure with src and no state parameter
- name: Specify src without state
file:
src: "{{ output_file }}"
dest: "{{ output_dir }}/link.txt"
ignore_errors: yes
register: src_state
- name: Ensure src without state failed
assert:
that:
- src_state is failed
- "'src option requires state to be' in src_state.msg"
# Test creating a symlink when the destination exists and is a file
- name: create a test file
copy:
dest: '{{ output_dir }}/file.txt'
content: 'this is a test file\n'
mode: 0666
- name: Create a symlink with dest already a file
file:
src: '{{ output_file }}'
dest: '{{ output_dir }}/file.txt'
state: link
ignore_errors: true
register: dest_is_existing_file_fail
- name: Stat to make sure the symlink was not created
stat:
path: '{{ output_dir }}/file.txt'
follow: false
register: dest_is_existing_file_fail_stat
- name: Forcefully create a symlink with dest already a file
file:
src: '{{ output_file }}'
dest: '{{ output_dir }}/file.txt'
state: link
force: true
register: dest_is_existing_file_force
- name: Stat to make sure the symlink was created
stat:
path: '{{ output_dir }}/file.txt'
follow: false
register: dest_is_existing_file_force_stat
- assert:
that:
- dest_is_existing_file_fail is failed
- not dest_is_existing_file_fail_stat.stat.islnk
- dest_is_existing_file_force is changed
- dest_is_existing_file_force_stat.stat.exists
- dest_is_existing_file_force_stat.stat.islnk
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,529 |
ansible-console catches `KeyboardInterrupt` silently
|
##### SUMMARY
Leave an exit message when catching `KeyboardInterrupt` in ansible/lib/console
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
`ansible/lib/ansible/cli/console.py`
##### ADDITIONAL INFORMATION
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/cli/console.py#L112
`ansible-console` catches KeyboardInterrupt silently.
This may confuse new users, and a simple 'Exiting REPL!' before that `self.do_exit` would probably be helpful for someone trying the REPL for the first time.
I'll admit I was confused and didn't even realize I was back in my normal shell at first.
Alternatively, if this behavior could be made configurable with something as simple as a class or instance attribute, that would be amazing.
|
https://github.com/ansible/ansible/issues/68529
|
https://github.com/ansible/ansible/pull/73665
|
d0e991e892eb45c9de172fbb3987f9ae32ddfc3f
|
e7e3c12ad27b80686ebae5d43f85442299386565
| 2020-03-28T14:18:53Z |
python
| 2021-03-01T19:04:59Z |
changelogs/fragments/73665-fixes-ansible-console.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,529 |
ansible-console catches `KeyboardInterrupt` silently
|
##### SUMMARY
Leave an exit message when catching `KeyboardInterrupt` in ansible/lib/console
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
`ansible/lib/ansible/cli/console.py`
##### ADDITIONAL INFORMATION
https://github.com/ansible/ansible/blob/0f5a63f1b99e586f86932907c49b1e8877128957/lib/ansible/cli/console.py#L112
`ansible-console` catches KeyboardInterrupt silently.
This may confuse new users, and a simple 'Exiting REPL!' before that `self.do_exit` would probably be helpful for someone trying the REPL for the first time.
I'll admit I was confused and didn't even realize I was back in my normal shell at first.
Alternatively, if this behavior could be made configurable with something as simple as a class or instance attribute, that would be amazing.
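A self-contained sketch of the suggested behavior, using only the stdlib `cmd` module (names and wording are illustrative, not an actual patch):
```python
import cmd

class Repl(cmd.Cmd):
    prompt = 'demo> '

    def do_exit(self, arg):
        """Leave the shell."""
        return True

if __name__ == '__main__':
    repl = Repl()
    try:
        repl.cmdloop()
    except KeyboardInterrupt:
        # say goodbye instead of dropping back to the login shell silently
        print('\nExiting REPL.')
        repl.do_exit('')
```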
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
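And a sketch of the 'configurable' alternative mentioned above; the `exit_msg` attribute name is invented for illustration and is not an existing `ConsoleCLI` attribute:
```python
import cmd

class Console(cmd.Cmd):
    # Hypothetical knob: set to '' (or None) to keep the old silent exit.
    exit_msg = 'Exiting REPL!'

    def cmdloop(self, intro=None):
        try:
            cmd.Cmd.cmdloop(self, intro)
        except KeyboardInterrupt:
            if self.exit_msg:
                print('\n' + self.exit_msg)
```
Setting `Console.exit_msg = ''` would restore the silent behaviour.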
|
https://github.com/ansible/ansible/issues/68529
|
https://github.com/ansible/ansible/pull/73665
|
d0e991e892eb45c9de172fbb3987f9ae32ddfc3f
|
e7e3c12ad27b80686ebae5d43f85442299386565
| 2020-03-28T14:18:53Z |
python
| 2021-03-01T19:04:59Z |
lib/ansible/cli/console.py
|
# Copyright: (c) 2014, Nandor Sivok <[email protected]>
# Copyright: (c) 2016, Redhat Inc
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
########################################################
# ansible-console is an interactive REPL shell for ansible
# with built-in tab completion for all the documented modules
#
# Available commands:
# cd - change host/group (you can use host patterns eg.: app*.dc*:!app01*)
# list - list available hosts in the current path
# forks - change fork
# become - become
# ! - forces shell module instead of the ansible module (!yum update -y)
import atexit
import cmd
import getpass
import readline
import os
import sys
from ansible import constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.splitter import parse_kv
from ansible.playbook.play import Play
from ansible.plugins.loader import module_loader, fragment_loader
from ansible.utils import plugin_docs
from ansible.utils.color import stringc
from ansible.utils.display import Display
display = Display()
class ConsoleCLI(CLI, cmd.Cmd):
''' a REPL that allows for running ad-hoc tasks against a chosen inventory (based on dominis' ansible-shell).'''
modules = []
ARGUMENTS = {'host-pattern': 'A name of a group in the inventory, a shell-like glob '
'selecting hosts in inventory or any combination of the two separated by commas.'}
# use specific to console, but fallback to highlight for backwards compatibility
NORMAL_PROMPT = C.COLOR_CONSOLE_PROMPT or C.COLOR_HIGHLIGHT
def __init__(self, args):
super(ConsoleCLI, self).__init__(args)
self.intro = 'Welcome to the ansible console.\nType help or ? to list commands.\n'
self.groups = []
self.hosts = []
self.pattern = None
self.variable_manager = None
self.loader = None
self.passwords = dict()
self.modules = None
self.cwd = '*'
# Defaults for these are set from the CLI in run()
self.remote_user = None
self.become = None
self.become_user = None
self.become_method = None
self.check_mode = None
self.diff = None
self.forks = None
self.task_timeout = None
cmd.Cmd.__init__(self)
def init_parser(self):
super(ConsoleCLI, self).init_parser(
desc="REPL console for executing Ansible tasks.",
epilog="This is not a live session/connection, each task executes in the background and returns it's results."
)
opt_help.add_runas_options(self.parser)
opt_help.add_inventory_options(self.parser)
opt_help.add_connect_options(self.parser)
opt_help.add_check_options(self.parser)
opt_help.add_vault_options(self.parser)
opt_help.add_fork_options(self.parser)
opt_help.add_module_options(self.parser)
opt_help.add_basedir_options(self.parser)
opt_help.add_runtask_options(self.parser)
opt_help.add_tasknoplay_options(self.parser)
# options unique to shell
self.parser.add_argument('pattern', help='host pattern', metavar='pattern', default='all', nargs='?')
self.parser.add_argument('--step', dest='step', action='store_true',
help="one-step-at-a-time: confirm each task before running")
def post_process_args(self, options):
options = super(ConsoleCLI, self).post_process_args(options)
display.verbosity = options.verbosity
self.validate_conflicts(options, runas_opts=True, fork_opts=True)
return options
def get_names(self):
return dir(self)
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
self.do_exit(self)
def set_prompt(self):
login_user = self.remote_user or getpass.getuser()
self.selected = self.inventory.list_hosts(self.cwd)
prompt = "%s@%s (%d)[f:%s]" % (login_user, self.cwd, len(self.selected), self.forks)
if self.become and self.become_user in [None, 'root']:
prompt += "# "
color = C.COLOR_ERROR
else:
prompt += "$ "
color = self.NORMAL_PROMPT
self.prompt = stringc(prompt, color, wrap_nonvisible_chars=True)
def list_modules(self):
modules = set()
if context.CLIARGS['module_path']:
for path in context.CLIARGS['module_path']:
if path:
module_loader.add_directory(path)
module_paths = module_loader._get_paths()
for path in module_paths:
if path is not None:
modules.update(self._find_modules_in_path(path))
return modules
def _find_modules_in_path(self, path):
if os.path.isdir(path):
for module in os.listdir(path):
if module.startswith('.'):
continue
elif os.path.isdir(module):
self._find_modules_in_path(module)
elif module.startswith('__'):
continue
elif any(module.endswith(x) for x in C.REJECT_EXTS):
continue
elif module in C.IGNORE_FILES:
continue
elif module.startswith('_'):
fullpath = '/'.join([path, module])
if os.path.islink(fullpath): # avoids aliases
continue
module = module.replace('_', '', 1)
module = os.path.splitext(module)[0] # removes the extension
yield module
def default(self, arg, forceshell=False):
""" actually runs modules """
if arg.startswith("#"):
return False
if not self.cwd:
display.error("No host found")
return False
if arg.split()[0] in self.modules:
module = arg.split()[0]
module_args = ' '.join(arg.split()[1:])
else:
module = 'shell'
module_args = arg
if forceshell is True:
module = 'shell'
module_args = arg
result = None
try:
check_raw = module in C._ACTION_ALLOWS_RAW_ARGS
task = dict(action=dict(module=module, args=parse_kv(module_args, check_raw=check_raw)), timeout=self.task_timeout)
play_ds = dict(
name="Ansible Shell",
hosts=self.cwd,
gather_facts='no',
tasks=[task],
remote_user=self.remote_user,
become=self.become,
become_user=self.become_user,
become_method=self.become_method,
check_mode=self.check_mode,
diff=self.diff,
)
play = Play().load(play_ds, variable_manager=self.variable_manager, loader=self.loader)
except Exception as e:
display.error(u"Unable to build command: %s" % to_text(e))
return False
try:
cb = 'minimal' # FIXME: make callbacks configurable
# now create a task queue manager to execute the play
self._tqm = None
try:
self._tqm = TaskQueueManager(
inventory=self.inventory,
variable_manager=self.variable_manager,
loader=self.loader,
passwords=self.passwords,
stdout_callback=cb,
run_additional_callbacks=C.DEFAULT_LOAD_CALLBACK_PLUGINS,
run_tree=False,
forks=self.forks,
)
result = self._tqm.run(play)
finally:
if self._tqm:
self._tqm.cleanup()
if self.loader:
self.loader.cleanup_all_tmp_files()
if result is None:
display.error("No hosts found")
return False
except KeyboardInterrupt:
display.error('User interrupted execution')
return False
except Exception as e:
display.error(to_text(e))
# FIXME: add traceback in very very verbose mode
return False
def emptyline(self):
return
def do_shell(self, arg):
"""
You can run shell commands through the shell module.
eg.:
shell ps uax | grep java | wc -l
shell killall python
shell halt -n
You can use the ! to force the shell module. eg.:
!ps aux | grep java | wc -l
"""
self.default(arg, True)
def do_forks(self, arg):
"""Set the number of forks"""
if not arg:
display.display('Usage: forks <number>')
return
forks = int(arg)
if forks <= 0:
display.display('forks must be greater than or equal to 1')
return
self.forks = forks
self.set_prompt()
do_serial = do_forks
def do_verbosity(self, arg):
"""Set verbosity level"""
if not arg:
display.display('Usage: verbosity <number>')
else:
try:
display.verbosity = int(arg)
display.v('verbosity level set to %s' % arg)
except (TypeError, ValueError) as e:
display.error('The verbosity must be a valid integer: %s' % to_text(e))
def do_cd(self, arg):
"""
Change active host/group. You can use hosts patterns as well eg.:
cd webservers
cd webservers:dbservers
cd webservers:!phoenix
cd webservers:&staging
cd webservers:dbservers:&staging:!phoenix
"""
if not arg:
self.cwd = '*'
elif arg in '/*':
self.cwd = 'all'
elif self.inventory.get_hosts(arg):
self.cwd = arg
else:
display.display("no host matched")
self.set_prompt()
def do_list(self, arg):
"""List the hosts in the current group"""
if arg == 'groups':
for group in self.groups:
display.display(group)
else:
for host in self.selected:
display.display(host.name)
def do_become(self, arg):
"""Toggle whether plays run with become"""
if arg:
self.become = boolean(arg, strict=False)
display.v("become changed to %s" % self.become)
self.set_prompt()
else:
display.display("Please specify become value, e.g. `become yes`")
def do_remote_user(self, arg):
"""Given a username, set the remote user plays are run by"""
if arg:
self.remote_user = arg
self.set_prompt()
else:
display.display("Please specify a remote user, e.g. `remote_user root`")
def do_become_user(self, arg):
"""Given a username, set the user that plays are run by when using become"""
if arg:
self.become_user = arg
else:
display.display("Please specify a user, e.g. `become_user jenkins`")
display.v("Current user is %s" % self.become_user)
self.set_prompt()
def do_become_method(self, arg):
"""Given a become_method, set the privilege escalation method when using become"""
if arg:
self.become_method = arg
display.v("become_method changed to %s" % self.become_method)
else:
display.display("Please specify a become_method, e.g. `become_method su`")
def do_check(self, arg):
"""Toggle whether plays run with check mode"""
if arg:
self.check_mode = boolean(arg, strict=False)
display.v("check mode changed to %s" % self.check_mode)
else:
display.display("Please specify check mode value, e.g. `check yes`")
def do_diff(self, arg):
"""Toggle whether plays run with diff"""
if arg:
self.diff = boolean(arg, strict=False)
display.v("diff mode changed to %s" % self.diff)
else:
display.display("Please specify a diff value , e.g. `diff yes`")
def do_timeout(self, arg):
"""Set the timeout"""
if arg:
try:
timeout = int(arg)
if timeout < 0:
display.error('The timeout must be greater than or equal to 1, use 0 to disable')
else:
self.task_timeout = timeout
except (TypeError, ValueError) as e:
display.error('The timeout must be a valid positive integer, or 0 to disable: %s' % to_text(e))
else:
display.display('Usage: timeout <seconds>')
def do_exit(self, args):
"""Exits from the console"""
sys.stdout.write('\n')
return -1
do_EOF = do_exit
def helpdefault(self, module_name):
if module_name in self.modules:
in_path = module_loader.find_plugin(module_name)
if in_path:
oc, a, _, _ = plugin_docs.get_docstring(in_path, fragment_loader)
if oc:
display.display(oc['short_description'])
display.display('Parameters:')
for opt in oc['options'].keys():
display.display(' ' + stringc(opt, self.NORMAL_PROMPT) + ' ' + oc['options'][opt]['description'][0])
else:
display.error('No documentation found for %s.' % module_name)
else:
display.error('%s is not a valid command, use ? to list all valid commands.' % module_name)
def complete_cd(self, text, line, begidx, endidx):
mline = line.partition(' ')[2]
offs = len(mline) - len(text)
if self.cwd in ('all', '*', '\\'):
completions = self.hosts + self.groups
else:
completions = [x.name for x in self.inventory.list_hosts(self.cwd)]
return [to_native(s)[offs:] for s in completions if to_native(s).startswith(to_native(mline))]
def completedefault(self, text, line, begidx, endidx):
if line.split()[0] in self.modules:
mline = line.split(' ')[-1]
offs = len(mline) - len(text)
completions = self.module_args(line.split()[0])
return [s[offs:] + '=' for s in completions if s.startswith(mline)]
def module_args(self, module_name):
in_path = module_loader.find_plugin(module_name)
oc, a, _, _ = plugin_docs.get_docstring(in_path, fragment_loader, is_module=True)
return list(oc['options'].keys())
def run(self):
super(ConsoleCLI, self).run()
sshpass = None
becomepass = None
# hosts
self.pattern = context.CLIARGS['pattern']
self.cwd = self.pattern
# Defaults from the command line
self.remote_user = context.CLIARGS['remote_user']
self.become = context.CLIARGS['become']
self.become_user = context.CLIARGS['become_user']
self.become_method = context.CLIARGS['become_method']
self.check_mode = context.CLIARGS['check']
self.diff = context.CLIARGS['diff']
self.forks = context.CLIARGS['forks']
self.task_timeout = context.CLIARGS['task_timeout']
# dynamically add modules as commands
self.modules = self.list_modules()
for module in self.modules:
setattr(self, 'do_' + module, lambda arg, module=module: self.default(module + ' ' + arg))
setattr(self, 'help_' + module, lambda module=module: self.helpdefault(module))
(sshpass, becomepass) = self.ask_passwords()
self.passwords = {'conn_pass': sshpass, 'become_pass': becomepass}
self.loader, self.inventory, self.variable_manager = self._play_prereqs()
hosts = self.get_host_list(self.inventory, context.CLIARGS['subset'], self.pattern)
self.groups = self.inventory.list_groups()
self.hosts = [x.name for x in hosts]
# This hack is to work around readline issues on a mac:
# http://stackoverflow.com/a/7116997/541202
if 'libedit' in readline.__doc__:
readline.parse_and_bind("bind ^I rl_complete")
else:
readline.parse_and_bind("tab: complete")
histfile = os.path.join(os.path.expanduser("~"), ".ansible-console_history")
try:
readline.read_history_file(histfile)
except IOError:
pass
atexit.register(readline.write_history_file, histfile)
self.set_prompt()
self.cmdloop()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,950 |
ERROR! Unexpected Exception, this is probably a bug: '<' not supported between instances of 'AnsibleUnicode' and 'int'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`ansible-inventory` fails with an unexpected exception if it encounters a dict in host vars that contains both integer and string keys.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
inventory
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
[:~/git/ansible/ansible] [venv] devel* ± ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tore/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tore/git/ansible/ansible/lib/ansible
executable location = /home/tore/git/ansible/ansible/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[:~/git/ansible/ansible] [venv] devel* ± ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
[:~/git/ansible/ansible] [venv] devel* ± lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 31 (Thirty One)
Release: 31
Codename: ThirtyOne
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```console
$ echo test > inventory
$ mkdir host_vars
$ echo -e 'x:\n a: 1\n 0: 1' > host_vars/test.yml
$ ansible-inventory -i inventory --list
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Output like:
```json
{
"_meta": {
"hostvars": {
"test": {
"x": {
"a": 1,
"0": 1
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"test"
]
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible-inventory 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tore/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tore/git/ansible/ansible/lib/ansible
executable location = /home/tore/git/ansible/ansible/bin/ansible-inventory
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
script declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
auto declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
Parsed /home/tore/git/ansible/ansible/inventory inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '<' not supported between instances of 'int' and 'AnsibleUnicode'
the full traceback was:
Traceback (most recent call last):
File "/home/tore/git/ansible/ansible/bin/ansible-inventory", line 123, in <module>
exit_code = cli.run()
File "/home/tore/git/ansible/ansible/lib/ansible/cli/inventory.py", line 151, in run
results = self.dump(results)
File "/home/tore/git/ansible/ansible/lib/ansible/cli/inventory.py", line 185, in dump
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True)
File "/usr/lib64/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib64/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib64/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
[Previous line repeated 1 more time]
File "/usr/lib64/python3.7/json/encoder.py", line 353, in _iterencode_dict
items = sorted(dct.items(), key=lambda kv: kv[0])
TypeError: '<' not supported between instances of 'int' and 'AnsibleUnicode'
```
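The failure reproduces with the standard library alone, since `sort_keys=True` sorts the original keys before encoding them; a minimal sketch with plain `int`/`str` keys standing in for the Ansible types:
```python
import json

try:
    json.dumps({'x': {'a': 1, 0: 1}}, sort_keys=True, indent=4)
except TypeError as exc:
    print(exc)  # "'<' not supported between instances of 'int' and 'str'"
```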
|
https://github.com/ansible/ansible/issues/68950
|
https://github.com/ansible/ansible/pull/73726
|
65140279573dd1f91b1134b7057711c46bac06ba
|
527bff6b79081b942c0ac9d0b1e306b99ffa81a6
| 2020-04-14T19:03:56Z |
python
| 2021-03-03T19:24:50Z |
changelogs/fragments/inv_json_sort_types_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,950 |
ERROR! Unexpected Exception, this is probably a bug: '<' not supported between instances of 'AnsibleUnicode' and 'int'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`ansible-inventory` fails with an unexpected exception if it encounters a dict in host vars that contains both integer and string keys.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
inventory
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
[:~/git/ansible/ansible] [venv] devel* ± ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tore/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tore/git/ansible/ansible/lib/ansible
executable location = /home/tore/git/ansible/ansible/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[:~/git/ansible/ansible] [venv] devel* ± ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
[:~/git/ansible/ansible] [venv] devel* ± lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 31 (Thirty One)
Release: 31
Codename: ThirtyOne
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```console
$ echo test > inventory
$ mkdir host_vars
$ echo -e 'x:\n a: 1\n 0: 1' > host_vars/test.yml
$ ansible-inventory -i inventory --list
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Output like:
```json
{
"_meta": {
"hostvars": {
"test": {
"x": {
"a": 1,
"0": 1
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"test"
]
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible-inventory 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tore/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tore/git/ansible/ansible/lib/ansible
executable location = /home/tore/git/ansible/ansible/bin/ansible-inventory
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
script declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
auto declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
Parsed /home/tore/git/ansible/ansible/inventory inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '<' not supported between instances of 'int' and 'AnsibleUnicode'
the full traceback was:
Traceback (most recent call last):
File "/home/tore/git/ansible/ansible/bin/ansible-inventory", line 123, in <module>
exit_code = cli.run()
File "/home/tore/git/ansible/ansible/lib/ansible/cli/inventory.py", line 151, in run
results = self.dump(results)
File "/home/tore/git/ansible/ansible/lib/ansible/cli/inventory.py", line 185, in dump
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True)
File "/usr/lib64/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib64/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib64/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
[Previous line repeated 1 more time]
File "/usr/lib64/python3.7/json/encoder.py", line 353, in _iterencode_dict
items = sorted(dct.items(), key=lambda kv: kv[0])
TypeError: '<' not supported between instances of 'int' and 'AnsibleUnicode'
```
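One way a dump could sidestep the comparison is to coerce all keys to text before sorting; this is sketched as an assumption, not the change that actually landed:
```python
import json

def stringify_keys(obj):
    # Recursively turn dict keys into strings so that sort_keys=True never
    # compares an int against a str. Illustration only.
    if isinstance(obj, dict):
        return {str(k): stringify_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [stringify_keys(v) for v in obj]
    return obj

print(json.dumps(stringify_keys({'x': {'a': 1, 0: 1}}),
                 sort_keys=True, indent=4))
```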
|
https://github.com/ansible/ansible/issues/68950
|
https://github.com/ansible/ansible/pull/73726
|
65140279573dd1f91b1134b7057711c46bac06ba
|
527bff6b79081b942c0ac9d0b1e306b99ffa81a6
| 2020-04-14T19:03:56Z |
python
| 2021-03-03T19:24:50Z |
lib/ansible/cli/inventory.py
|
# Copyright: (c) 2017, Brian Coca <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
import argparse
from operator import attrgetter
from ansible import constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils._text import to_bytes, to_native
from ansible.utils.vars import combine_vars
from ansible.utils.display import Display
from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path
display = Display()
INTERNAL_VARS = frozenset(['ansible_diff_mode',
'ansible_config_file',
'ansible_facts',
'ansible_forks',
'ansible_inventory_sources',
'ansible_limit',
'ansible_playbook_python',
'ansible_run_tags',
'ansible_skip_tags',
'ansible_verbosity',
'ansible_version',
'inventory_dir',
'inventory_file',
'inventory_hostname',
'inventory_hostname_short',
'groups',
'group_names',
'omit',
'playbook_dir', ])
class InventoryCLI(CLI):
''' used to display or dump the configured inventory as Ansible sees it '''
ARGUMENTS = {'host': 'The name of a host to match in the inventory, relevant when using --list',
'group': 'The name of a group in the inventory, relevant when using --graph', }
def __init__(self, args):
super(InventoryCLI, self).__init__(args)
self.vm = None
self.loader = None
self.inventory = None
def init_parser(self):
super(InventoryCLI, self).init_parser(
usage='usage: %prog [options] [host|group]',
epilog='Show Ansible inventory information, by default it uses the inventory script JSON format')
opt_help.add_inventory_options(self.parser)
opt_help.add_vault_options(self.parser)
opt_help.add_basedir_options(self.parser)
opt_help.add_runtask_options(self.parser)
# remove unused default options
self.parser.add_argument('-l', '--limit', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument, nargs='?')
self.parser.add_argument('--list-hosts', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument)
self.parser.add_argument('args', metavar='host|group', nargs='?')
# Actions
action_group = self.parser.add_argument_group("Actions", "One of the following must be used on invocation, ONLY ONE!")
action_group.add_argument("--list", action="store_true", default=False, dest='list', help='Output all hosts info, works as inventory script')
action_group.add_argument("--host", action="store", default=None, dest='host', help='Output specific host info, works as inventory script')
action_group.add_argument("--graph", action="store_true", default=False, dest='graph',
help='create inventory graph, if supplying pattern it must be a valid group name')
self.parser.add_argument_group(action_group)
# graph
self.parser.add_argument("-y", "--yaml", action="store_true", default=False, dest='yaml',
help='Use YAML format instead of default JSON, ignored for --graph')
self.parser.add_argument('--toml', action='store_true', default=False, dest='toml',
help='Use TOML format instead of default JSON, ignored for --graph')
self.parser.add_argument("--vars", action="store_true", default=False, dest='show_vars',
help='Add vars to graph display, ignored unless used with --graph')
# list
self.parser.add_argument("--export", action="store_true", default=C.INVENTORY_EXPORT, dest='export',
help="When doing an --list, represent in a way that is optimized for export,"
"not as an accurate representation of how Ansible has processed it")
self.parser.add_argument('--output', default=None, dest='output_file',
help="When doing --list, send the inventory to a file instead of to the screen")
# self.parser.add_argument("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins',
# help="When doing an --list, skip vars data from vars plugins, by default, this would include group_vars/ and host_vars/")
def post_process_args(self, options):
options = super(InventoryCLI, self).post_process_args(options)
display.verbosity = options.verbosity
self.validate_conflicts(options)
# there can be only one! and, at least, one!
used = 0
for opt in (options.list, options.host, options.graph):
if opt:
used += 1
if used == 0:
raise AnsibleOptionsError("No action selected, at least one of --host, --graph or --list needs to be specified.")
elif used > 1:
raise AnsibleOptionsError("Conflicting options used, only one of --host, --graph or --list can be used at the same time.")
# set host pattern to default if not supplied
if options.args:
options.pattern = options.args
else:
options.pattern = 'all'
return options
def run(self):
super(InventoryCLI, self).run()
# Initialize needed objects
self.loader, self.inventory, self.vm = self._play_prereqs()
results = None
if context.CLIARGS['host']:
hosts = self.inventory.get_hosts(context.CLIARGS['host'])
if len(hosts) != 1:
raise AnsibleOptionsError("You must pass a single valid host to --host parameter")
myvars = self._get_host_variables(host=hosts[0])
# FIXME: should we template first?
results = self.dump(myvars)
elif context.CLIARGS['graph']:
results = self.inventory_graph()
elif context.CLIARGS['list']:
top = self._get_group('all')
if context.CLIARGS['yaml']:
results = self.yaml_inventory(top)
elif context.CLIARGS['toml']:
results = self.toml_inventory(top)
else:
results = self.json_inventory(top)
results = self.dump(results)
if results:
outfile = context.CLIARGS['output_file']
if outfile is None:
# FIXME: pager?
display.display(results)
else:
try:
with open(to_bytes(outfile), 'wt') as f:
f.write(results)
except (OSError, IOError) as e:
raise AnsibleError('Unable to write to destination file (%s): %s' % (to_native(outfile), to_native(e)))
sys.exit(0)
sys.exit(1)
@staticmethod
def dump(stuff):
if context.CLIARGS['yaml']:
import yaml
from ansible.parsing.yaml.dumper import AnsibleDumper
results = yaml.dump(stuff, Dumper=AnsibleDumper, default_flow_style=False)
elif context.CLIARGS['toml']:
from ansible.plugins.inventory.toml import toml_dumps, HAS_TOML
if not HAS_TOML:
raise AnsibleError(
'The python "toml" library is required when using the TOML output format'
)
results = toml_dumps(stuff)
else:
import json
from ansible.parsing.ajson import AnsibleJSONEncoder
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True)
return results
def _get_group_variables(self, group):
# get info from inventory source
res = group.get_vars()
# Always load vars plugins
res = combine_vars(res, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [group], 'all'))
if context.CLIARGS['basedir']:
res = combine_vars(res, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [group], 'all'))
if group.priority != 1:
res['ansible_group_priority'] = group.priority
return self._remove_internal(res)
def _get_host_variables(self, host):
if context.CLIARGS['export']:
# only get vars defined directly host
hostvars = host.get_vars()
# Always load vars plugins
hostvars = combine_vars(hostvars, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [host], 'all'))
if context.CLIARGS['basedir']:
hostvars = combine_vars(hostvars, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [host], 'all'))
else:
# get all vars flattened by host, but skip magic hostvars
hostvars = self.vm.get_vars(host=host, include_hostvars=False, stage='all')
return self._remove_internal(hostvars)
def _get_group(self, gname):
group = self.inventory.groups.get(gname)
return group
@staticmethod
def _remove_internal(dump):
for internal in INTERNAL_VARS:
if internal in dump:
del dump[internal]
return dump
@staticmethod
def _remove_empty(dump):
# remove empty keys
for x in ('hosts', 'vars', 'children'):
if x in dump and not dump[x]:
del dump[x]
@staticmethod
def _show_vars(dump, depth):
result = []
for (name, val) in sorted(dump.items()):
result.append(InventoryCLI._graph_name('{%s = %s}' % (name, val), depth))
return result
@staticmethod
def _graph_name(name, depth=0):
if depth:
name = " |" * (depth) + "--%s" % name
return name
def _graph_group(self, group, depth=0):
result = [self._graph_name('@%s:' % group.name, depth)]
depth = depth + 1
for kid in sorted(group.child_groups, key=attrgetter('name')):
result.extend(self._graph_group(kid, depth))
if group.name != 'all':
for host in sorted(group.hosts, key=attrgetter('name')):
result.append(self._graph_name(host.name, depth))
if context.CLIARGS['show_vars']:
result.extend(self._show_vars(self._get_host_variables(host), depth + 1))
if context.CLIARGS['show_vars']:
result.extend(self._show_vars(self._get_group_variables(group), depth))
return result
def inventory_graph(self):
start_at = self._get_group(context.CLIARGS['pattern'])
if start_at:
return '\n'.join(self._graph_group(start_at))
else:
raise AnsibleOptionsError("Pattern must be valid group name when using --graph")
def json_inventory(self, top):
seen = set()
def format_group(group):
results = {}
results[group.name] = {}
if group.name != 'all':
results[group.name]['hosts'] = [h.name for h in sorted(group.hosts, key=attrgetter('name'))]
results[group.name]['children'] = []
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
results[group.name]['children'].append(subgroup.name)
if subgroup.name not in seen:
results.update(format_group(subgroup))
seen.add(subgroup.name)
if context.CLIARGS['export']:
results[group.name]['vars'] = self._get_group_variables(group)
self._remove_empty(results[group.name])
if not results[group.name]:
del results[group.name]
return results
results = format_group(top)
# populate meta
results['_meta'] = {'hostvars': {}}
hosts = self.inventory.get_hosts()
for host in hosts:
hvars = self._get_host_variables(host)
if hvars:
results['_meta']['hostvars'][host.name] = hvars
return results
def yaml_inventory(self, top):
seen = []
def format_group(group):
results = {}
# initialize group + vars
results[group.name] = {}
# subgroups
results[group.name]['children'] = {}
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
if subgroup.name != 'all':
results[group.name]['children'].update(format_group(subgroup))
# hosts for group
results[group.name]['hosts'] = {}
if group.name != 'all':
for h in sorted(group.hosts, key=attrgetter('name')):
myvars = {}
if h.name not in seen: # avoid defining host vars more than once
seen.append(h.name)
myvars = self._get_host_variables(host=h)
results[group.name]['hosts'][h.name] = myvars
if context.CLIARGS['export']:
gvars = self._get_group_variables(group)
if gvars:
results[group.name]['vars'] = gvars
self._remove_empty(results[group.name])
return results
return format_group(top)
def toml_inventory(self, top):
seen = set()
has_ungrouped = bool(next(g.hosts for g in top.child_groups if g.name == 'ungrouped'))
def format_group(group):
results = {}
results[group.name] = {}
results[group.name]['children'] = []
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
if subgroup.name == 'ungrouped' and not has_ungrouped:
continue
if group.name != 'all':
results[group.name]['children'].append(subgroup.name)
results.update(format_group(subgroup))
if group.name != 'all':
for host in sorted(group.hosts, key=attrgetter('name')):
if host.name not in seen:
seen.add(host.name)
host_vars = self._get_host_variables(host=host)
else:
host_vars = {}
try:
results[group.name]['hosts'][host.name] = host_vars
except KeyError:
results[group.name]['hosts'] = {host.name: host_vars}
if context.CLIARGS['export']:
results[group.name]['vars'] = self._get_group_variables(group)
self._remove_empty(results[group.name])
if not results[group.name]:
del results[group.name]
return results
results = format_group(top)
return results
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,950 |
ERROR! Unexpected Exception, this is probably a bug: '<' not supported between instances of 'AnsibleUnicode' and 'int'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`ansible-inventory` fails with an unexpected exception if it encounters a dict in host vars that contains both integer and string keys.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
inventory
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
[:~/git/ansible/ansible] [venv] devel* ± ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tore/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tore/git/ansible/ansible/lib/ansible
executable location = /home/tore/git/ansible/ansible/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[:~/git/ansible/ansible] [venv] devel* ± ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
[:~/git/ansible/ansible] [venv] devel* ± lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 31 (Thirty One)
Release: 31
Codename: ThirtyOne
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```console
$ echo test > inventory
$ mkdir host_vars
$ echo -e 'x:\n a: 1\n 0: 1' > host_vars/test.yml
$ ansible-inventory -i inventory --list
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Output like:
```json
{
"_meta": {
"hostvars": {
"test": {
"x": {
"a": 1,
"0": 1
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"test"
]
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible-inventory 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tore/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tore/git/ansible/ansible/lib/ansible
executable location = /home/tore/git/ansible/ansible/bin/ansible-inventory
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
script declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
auto declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
Parsed /home/tore/git/ansible/ansible/inventory inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '<' not supported between instances of 'int' and 'AnsibleUnicode'
the full traceback was:
Traceback (most recent call last):
File "/home/tore/git/ansible/ansible/bin/ansible-inventory", line 123, in <module>
exit_code = cli.run()
File "/home/tore/git/ansible/ansible/lib/ansible/cli/inventory.py", line 151, in run
results = self.dump(results)
File "/home/tore/git/ansible/ansible/lib/ansible/cli/inventory.py", line 185, in dump
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True)
File "/usr/lib64/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib64/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib64/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
[Previous line repeated 1 more time]
File "/usr/lib64/python3.7/json/encoder.py", line 353, in _iterencode_dict
items = sorted(dct.items(), key=lambda kv: kv[0])
TypeError: '<' not supported between instances of 'int' and 'AnsibleUnicode'
```
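For reference, the reproduction's YAML really does yield an integer key (`0:` parses as `int`, not `"0"`), which is what later trips the sort. A quick sketch, assuming PyYAML is installed:
```python
import yaml

data = yaml.safe_load('x:\n a: 1\n 0: 1')
print(data)                          # {'x': {'a': 1, 0: 1}}
print([type(k) for k in data['x']])  # [<class 'str'>, <class 'int'>]
```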
|
https://github.com/ansible/ansible/issues/68950
|
https://github.com/ansible/ansible/pull/73726
|
65140279573dd1f91b1134b7057711c46bac06ba
|
527bff6b79081b942c0ac9d0b1e306b99ffa81a6
| 2020-04-14T19:03:56Z |
python
| 2021-03-03T19:24:50Z |
test/integration/targets/inventory/inv_with_int.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,950 |
ERROR! Unexpected Exception, this is probably a bug: '<' not supported between instances of 'AnsibleUnicode' and 'int'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`ansible-inventory` fails with an unexpected exception if it encounters a dict in host vars that contains both integer and string keys.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
inventory
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
[:~/git/ansible/ansible] [venv] devel* ± ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tore/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tore/git/ansible/ansible/lib/ansible
executable location = /home/tore/git/ansible/ansible/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[:~/git/ansible/ansible] [venv] devel* ± ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
[:~/git/ansible/ansible] [venv] devel* ± lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 31 (Thirty One)
Release: 31
Codename: ThirtyOne
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```console
$ echo test > inventory
$ mkdir host_vars
$ echo -e 'x:\n a: 1\n 0: 1' > host_vars/test.yml
$ ansible-inventory -i inventory --list
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Output like:
```json
{
"_meta": {
"hostvars": {
"test": {
"x": {
"a": 1,
"0": 1
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"test"
]
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible-inventory 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tore/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tore/git/ansible/ansible/lib/ansible
executable location = /home/tore/git/ansible/ansible/bin/ansible-inventory
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
script declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
auto declined parsing /home/tore/git/ansible/ansible/inventory as it did not pass its verify_file() method
Parsed /home/tore/git/ansible/ansible/inventory inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '<' not supported between instances of 'int' and 'AnsibleUnicode'
the full traceback was:
Traceback (most recent call last):
File "/home/tore/git/ansible/ansible/bin/ansible-inventory", line 123, in <module>
exit_code = cli.run()
File "/home/tore/git/ansible/ansible/lib/ansible/cli/inventory.py", line 151, in run
results = self.dump(results)
File "/home/tore/git/ansible/ansible/lib/ansible/cli/inventory.py", line 185, in dump
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True)
File "/usr/lib64/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib64/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib64/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
[Previous line repeated 1 more time]
File "/usr/lib64/python3.7/json/encoder.py", line 353, in _iterencode_dict
items = sorted(dct.items(), key=lambda kv: kv[0])
TypeError: '<' not supported between instances of 'int' and 'AnsibleUnicode'
```
|
https://github.com/ansible/ansible/issues/68950
|
https://github.com/ansible/ansible/pull/73726
|
65140279573dd1f91b1134b7057711c46bac06ba
|
527bff6b79081b942c0ac9d0b1e306b99ffa81a6
| 2020-04-14T19:03:56Z |
python
| 2021-03-03T19:24:50Z |
test/integration/targets/inventory/runme.sh
|
#!/usr/bin/env bash
set -eux
empty_limit_file="$(mktemp)"
touch "${empty_limit_file}"
tmpdir="$(mktemp -d)"
cleanup() {
if [[ -f "${empty_limit_file}" ]]; then
rm -rf "${empty_limit_file}"
fi
rm -rf "$tmpdir"
}
trap 'cleanup' EXIT
# https://github.com/ansible/ansible/issues/52152
# Ensure that non-matching limit causes failure with rc 1
if ansible-playbook -i ../../inventory --limit foo playbook.yml; then
echo "Non-matching limit should cause failure"
exit 1
fi
# Ensure that non-existing limit file causes failure with rc 1
if ansible-playbook -i ../../inventory --limit @foo playbook.yml; then
echo "Non-existing limit file should cause failure"
exit 1
fi
if ! ansible-playbook -i ../../inventory --limit @"$tmpdir" playbook.yml 2>&1 | grep 'must be a file'; then
echo "Using a directory as a limit file should throw proper AnsibleError"
exit 1
fi
# Ensure that empty limit file does not cause IndexError #59695
ansible-playbook -i ../../inventory --limit @"${empty_limit_file}" playbook.yml
ansible-playbook -i ../../inventory "$@" strategy.yml
ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS=always ansible-playbook -i ../../inventory "$@" strategy.yml
ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS=never ansible-playbook -i ../../inventory "$@" strategy.yml
# test extra vars
ansible-inventory -i testhost, -i ./extra_vars_constructed.yml --list -e 'from_extras=hey ' "$@"|grep '"example": "hellohey"'
# Do not fail when all inventories fail to parse.
# Do not fail when any inventory fails to parse.
ANSIBLE_INVENTORY_UNPARSED_FAILED=False ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=False ansible -m ping localhost -i /idontexist "$@"
# Fail when all inventories fail to parse.
# Do not fail when just one inventory fails to parse.
if ANSIBLE_INVENTORY_UNPARSED_FAILED=True ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=False ansible -m ping localhost -i /idontexist; then
echo "All inventories failed/did not exist, should cause failure"
echo "ran with: ANSIBLE_INVENTORY_UNPARSED_FAILED=True ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=False"
exit 1
fi
# Same as above, but ensuring there is no failure: we *only* fail when all inventories fail to parse.
# Fail when all inventories fail to parse.
# Do not fail when just one inventory fails to parse.
ANSIBLE_INVENTORY_UNPARSED_FAILED=True ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=False ansible -m ping localhost -i /idontexist -i ../../inventory "$@"
# Fail when all inventories fail to parse.
# Do not fail when just one inventory fails to parse.
# Fail when any inventories fail to parse.
if ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=True ansible -m ping localhost -i /idontexist -i ../../inventory; then
echo "One inventory failed/did not exist, should NOT cause failure"
echo "ran with: ANSIBLE_INVENTORY_UNPARSED_FAILED=True ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=False"
exit 1
fi
# Ensure we don't throw when an empty directory is used as inventory
ansible-playbook -i "$tmpdir" playbook.yml
# Ensure we can use a directory of inventories
cp ../../inventory "$tmpdir"
ansible-playbook -i "$tmpdir" playbook.yml
# ... even if it contains another empty directory
mkdir "$tmpdir/empty"
ansible-playbook -i "$tmpdir" playbook.yml
if ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=True ansible -m ping localhost -i "$tmpdir"; then
echo "Empty directory should cause failure when ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=True"
exit 1
fi
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
When investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found out that ``meta: reset_connection`` is not working because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule is using ``control_path = %(directory)s/%%h-%%p-%%r``, which is translated into ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does :
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Due to the content of the ControlPath argument, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, and of course the ``os.path.exists(cp_path)`` check fails, making "meta: reset_connection" useless.
FWIW, it looks like this bug has been introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be changing the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It may be better to interpolate the ssh tokens, but I've no idea whether that's really possible or how to do it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing that for molecule-vagrant, I've a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of the ``converge.yml`` should work
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
changelogs/fragments/ssh_connection_fixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
When investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found out that ``meta: reset_connection`` is not working because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule is using ``control_path = %(directory)s/%%h-%%p-%%r``, which is translated into ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does:
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Because the ControlPath argument still contains unexpanded ssh tokens, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, so the ``os.path.exists(cp_path)`` check necessarily fails, making ``meta: reset_connection`` useless.
FWIW, it looks like this bug has been introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be changing the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It may be better to interpolate the ssh tokens (see the sketch below), but I have no idea whether that is really possible or how to do it.
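As a rough sketch of that interpolation idea (``expand_control_path`` and its arguments are made up for this example, not actual plugin code):
```
import re

def expand_control_path(cp_path, host, port, user):
    # single-pass substitution so an escaped %% is never re-expanded;
    # unknown tokens are left untouched
    tokens = {b'%': b'%', b'h': host.encode(), b'p': str(port).encode(), b'r': user.encode()}
    return re.sub(b'%(.)', lambda m: tokens.get(m.group(1), m.group(0)), cp_path)

# expand_control_path(b'/home/vagrant/.ansible/cp/%h-%p-%r', 'test2', 22, 'vagrant')
# -> b'/home/vagrant/.ansible/cp/test2-22-vagrant'
```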
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing this for molecule-vagrant, I have a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of the ``converge.yml`` should work
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/cli/arguments/option_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import operator
import argparse
import os
import os.path
import sys
import time
import yaml
try:
import _yaml
HAS_LIBYAML = True
except ImportError:
HAS_LIBYAML = False
from jinja2 import __version__ as j2_version
import ansible
from ansible import constants as C
from ansible.module_utils._text import to_native
from ansible.release import __version__
from ansible.utils.path import unfrackpath
#
# Special purpose OptionParsers
#
class SortingHelpFormatter(argparse.HelpFormatter):
def add_arguments(self, actions):
actions = sorted(actions, key=operator.attrgetter('option_strings'))
super(SortingHelpFormatter, self).add_arguments(actions)
class AnsibleVersion(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
ansible_version = to_native(version(getattr(parser, 'prog')))
print(ansible_version)
parser.exit()
class UnrecognizedArgument(argparse.Action):
def __init__(self, option_strings, dest, const=True, default=None, required=False, help=None, metavar=None, nargs=0):
super(UnrecognizedArgument, self).__init__(option_strings=option_strings, dest=dest, nargs=nargs, const=const,
default=default, required=required, help=help)
def __call__(self, parser, namespace, values, option_string=None):
parser.error('unrecognized arguments: %s' % option_string)
class PrependListAction(argparse.Action):
"""A near clone of ``argparse._AppendAction``, but designed to prepend list values
instead of appending.
"""
def __init__(self, option_strings, dest, nargs=None, const=None, default=None, type=None,
choices=None, required=False, help=None, metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != argparse.OPTIONAL:
raise ValueError('nargs must be %r to supply const' % argparse.OPTIONAL)
super(PrependListAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=nargs,
const=const,
default=default,
type=type,
choices=choices,
required=required,
help=help,
metavar=metavar
)
def __call__(self, parser, namespace, values, option_string=None):
items = copy.copy(ensure_value(namespace, self.dest, []))
items[0:0] = values
setattr(namespace, self.dest, items)
def ensure_value(namespace, name, value):
if getattr(namespace, name, None) is None:
setattr(namespace, name, value)
return getattr(namespace, name)
#
# Callbacks to validate and normalize Options
#
def unfrack_path(pathsep=False):
"""Turn an Option's data into a single path in Ansible locations"""
def inner(value):
if pathsep:
return [unfrackpath(x) for x in value.split(os.pathsep) if x]
if value == '-':
return value
return unfrackpath(value)
return inner
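# Usage sketch (illustrative paths, not taken from the codebase):
#   unfrack_path()('~/inventory')          -> '/home/user/inventory' (absolute, normalized)
#   unfrack_path(pathsep=True)('~/a:~/b')  -> ['/home/user/a', '/home/user/b']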
def _git_repo_info(repo_path):
""" returns a string containing git branch, commit id and commit date """
result = None
if os.path.exists(repo_path):
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
gitdir = yaml.safe_load(open(repo_path)).get('gitdir')
                # The gitdir referenced by the .git file may be an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path[:-4], gitdir)
except (IOError, AttributeError):
return ''
with open(os.path.join(repo_path, "HEAD")) as f:
line = f.readline().rstrip("\n")
if line.startswith("ref:"):
branch_path = os.path.join(repo_path, line[5:])
else:
branch_path = None
if branch_path and os.path.exists(branch_path):
branch = '/'.join(line.split('/')[2:])
with open(branch_path) as f:
commit = f.readline()[:10]
else:
# detached HEAD
commit = line[:10]
branch = 'detached HEAD'
branch_path = os.path.join(repo_path, "HEAD")
date = time.localtime(os.stat(branch_path).st_mtime)
if time.daylight == 0:
offset = time.timezone
else:
offset = time.altzone
result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), int(offset / -36))
else:
result = ''
return result
def _gitinfo():
basedir = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..', '..', '..'))
repo_path = os.path.join(basedir, '.git')
return _git_repo_info(repo_path)
def version(prog=None):
""" return ansible version """
if prog:
result = ["{0} [core {1}] ".format(prog, __version__)]
else:
result = [__version__]
gitinfo = _gitinfo()
if gitinfo:
result[0] = "{0} {1}".format(result[0], gitinfo)
result.append(" config file = %s" % C.CONFIG_FILE)
if C.DEFAULT_MODULE_PATH is None:
cpath = "Default w/o overrides"
else:
cpath = C.DEFAULT_MODULE_PATH
result.append(" configured module search path = %s" % cpath)
result.append(" ansible python module location = %s" % ':'.join(ansible.__path__))
result.append(" ansible collection location = %s" % ':'.join(C.COLLECTIONS_PATHS))
result.append(" executable location = %s" % sys.argv[0])
result.append(" python version = %s" % ''.join(sys.version.splitlines()))
result.append(" jinja version = %s" % j2_version)
result.append(" libyaml = %s" % HAS_LIBYAML)
return "\n".join(result)
#
# Functions to add pre-canned options to an OptionParser
#
def create_base_parser(prog, usage="", desc=None, epilog=None):
"""
Create an options parser for all ansible scripts
"""
# base opts
parser = argparse.ArgumentParser(
prog=prog,
formatter_class=SortingHelpFormatter,
epilog=epilog,
description=desc,
conflict_handler='resolve',
)
version_help = "show program's version number, config file location, configured module search path," \
" module location, executable location and exit"
parser.add_argument('--version', action=AnsibleVersion, nargs=0, help=version_help)
add_verbosity_options(parser)
return parser
def add_verbosity_options(parser):
"""Add options for verbosity"""
parser.add_argument('-v', '--verbose', dest='verbosity', default=C.DEFAULT_VERBOSITY, action="count",
help="verbose mode (-vvv for more, -vvvv to enable connection debugging)")
def add_async_options(parser):
"""Add options for commands which can launch async tasks"""
parser.add_argument('-P', '--poll', default=C.DEFAULT_POLL_INTERVAL, type=int, dest='poll_interval',
help="set the poll interval if using -B (default=%s)" % C.DEFAULT_POLL_INTERVAL)
parser.add_argument('-B', '--background', dest='seconds', type=int, default=0,
help='run asynchronously, failing after X seconds (default=N/A)')
def add_basedir_options(parser):
"""Add options for commands which can set a playbook basedir"""
parser.add_argument('--playbook-dir', default=C.config.get_config_value('PLAYBOOK_DIR'), dest='basedir', action='store',
help="Since this tool does not use playbooks, use this as a substitute playbook directory."
"This sets the relative path for many features including roles/ group_vars/ etc.",
type=unfrack_path())
def add_check_options(parser):
"""Add options for commands which can run with diagnostic information of tasks"""
parser.add_argument("-C", "--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
parser.add_argument('--syntax-check', dest='syntax', action='store_true',
help="perform a syntax check on the playbook, but do not execute it")
parser.add_argument("-D", "--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those"
" files; works great with --check")
def add_connect_options(parser):
"""Add options for commands which need to connection to other hosts"""
connect_group = parser.add_argument_group("Connection Options", "control as whom and how to connect to hosts")
connect_group.add_argument('-k', '--ask-pass', default=C.DEFAULT_ASK_PASS, dest='ask_pass', action='store_true',
help='ask for connection password')
connect_group.add_argument('--private-key', '--key-file', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
help='use this file to authenticate the connection', type=unfrack_path())
connect_group.add_argument('-u', '--user', default=C.DEFAULT_REMOTE_USER, dest='remote_user',
help='connect as this user (default=%s)' % C.DEFAULT_REMOTE_USER)
connect_group.add_argument('-c', '--connection', dest='connection', default=C.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT)
connect_group.add_argument('-T', '--timeout', default=C.DEFAULT_TIMEOUT, type=int, dest='timeout',
help="override the connection timeout in seconds (default=%s)" % C.DEFAULT_TIMEOUT)
connect_group.add_argument('--ssh-common-args', default='', dest='ssh_common_args',
help="specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)")
connect_group.add_argument('--sftp-extra-args', default='', dest='sftp_extra_args',
help="specify extra arguments to pass to sftp only (e.g. -f, -l)")
connect_group.add_argument('--scp-extra-args', default='', dest='scp_extra_args',
help="specify extra arguments to pass to scp only (e.g. -l)")
connect_group.add_argument('--ssh-extra-args', default='', dest='ssh_extra_args',
help="specify extra arguments to pass to ssh only (e.g. -R)")
parser.add_argument_group(connect_group)
def add_fork_options(parser):
"""Add options for commands that can fork worker processes"""
parser.add_argument('-f', '--forks', dest='forks', default=C.DEFAULT_FORKS, type=int,
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
def add_inventory_options(parser):
"""Add options for commands that utilize inventory"""
parser.add_argument('-i', '--inventory', '--inventory-file', dest='inventory', action="append",
help="specify inventory host path or comma separated host list. --inventory-file is deprecated")
parser.add_argument('--list-hosts', dest='listhosts', action='store_true',
help='outputs a list of matching hosts; does not execute anything else')
parser.add_argument('-l', '--limit', default=C.DEFAULT_SUBSET, dest='subset',
help='further limit selected hosts to an additional pattern')
def add_meta_options(parser):
"""Add options for commands which can launch meta tasks from the command line"""
parser.add_argument('--force-handlers', default=C.DEFAULT_FORCE_HANDLERS, dest='force_handlers', action='store_true',
help="run handlers even if a task fails")
parser.add_argument('--flush-cache', dest='flush_cache', action='store_true',
help="clear the fact cache for every host in inventory")
def add_module_options(parser):
"""Add options for commands that load modules"""
module_path = C.config.get_configuration_definition('DEFAULT_MODULE_PATH').get('default', '')
parser.add_argument('-M', '--module-path', dest='module_path', default=None,
help="prepend colon-separated path(s) to module library (default=%s)" % module_path,
type=unfrack_path(pathsep=True), action=PrependListAction)
def add_output_options(parser):
"""Add options for commands which can change their output"""
parser.add_argument('-o', '--one-line', dest='one_line', action='store_true',
help='condense output')
parser.add_argument('-t', '--tree', dest='tree', default=None,
help='log output to this directory')
def add_runas_options(parser):
"""
Add options for commands which can run tasks as another user
Note that this includes the options from add_runas_prompt_options(). Only one of these
functions should be used.
"""
runas_group = parser.add_argument_group("Privilege Escalation Options", "control how and which user you become as on target hosts")
# consolidated privilege escalation (become)
runas_group.add_argument("-b", "--become", default=C.DEFAULT_BECOME, action="store_true", dest='become',
help="run operations with become (does not imply password prompting)")
runas_group.add_argument('--become-method', dest='become_method', default=C.DEFAULT_BECOME_METHOD,
help='privilege escalation method to use (default=%s)' % C.DEFAULT_BECOME_METHOD +
', use `ansible-doc -t become -l` to list valid choices.')
runas_group.add_argument('--become-user', default=None, dest='become_user', type=str,
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
add_runas_prompt_options(parser, runas_group=runas_group)
def add_runas_prompt_options(parser, runas_group=None):
"""
Add options for commands which need to prompt for privilege escalation credentials
Note that add_runas_options() includes these options already. Only one of the two functions
should be used.
"""
if runas_group is None:
runas_group = parser.add_argument_group("Privilege Escalation Options",
"control how and which user you become as on target hosts")
runas_group.add_argument('-K', '--ask-become-pass', dest='become_ask_pass', action='store_true',
default=C.DEFAULT_BECOME_ASK_PASS,
help='ask for privilege escalation password')
parser.add_argument_group(runas_group)
def add_runtask_options(parser):
"""Add options for commands that run a task"""
parser.add_argument('-e', '--extra-vars', dest="extra_vars", action="append",
help="set additional variables as key=value or YAML/JSON, if filename prepend with @", default=[])
def add_tasknoplay_options(parser):
"""Add options for commands that run a task w/o a defined play"""
parser.add_argument('--task-timeout', type=int, dest="task_timeout", action="store", default=C.TASK_TIMEOUT,
help="set task timeout limit in seconds, must be positive integer.")
def add_subset_options(parser):
"""Add options for commands which can run a subset of tasks"""
parser.add_argument('-t', '--tags', dest='tags', default=C.TAGS_RUN, action='append',
help="only run plays and tasks tagged with these values")
parser.add_argument('--skip-tags', dest='skip_tags', default=C.TAGS_SKIP, action='append',
help="only run plays and tasks whose tags do not match these values")
def add_vault_options(parser):
"""Add options for loading vault files"""
parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str,
help='the vault identity to use')
base_group = parser.add_mutually_exclusive_group()
base_group.add_argument('--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files',
help="vault password file", type=unfrack_path(), action='append')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
When investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found out that ``meta: reset_connection`` is not working because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule is using ``control_path = %(directory)s/%%h-%%p-%%r``, which is translated into ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does:
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Because the ControlPath argument still contains unexpanded ssh tokens, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, so the ``os.path.exists(cp_path)`` check necessarily fails, making ``meta: reset_connection`` useless.
FWIW, it looks like this bug has been introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be changing the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It may be better to interpolate the ssh tokens, but I have no idea whether that is really possible or how to do it.
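To see the mismatch from the ssh side with the values from this reproducer (host ``test2``, user ``vagrant``, assuming the default port 22), one can list the sockets ssh actually created for the expanded path and query the control master directly:
```
ls ~/.ansible/cp/   # shows e.g. test2-22-vagrant
ssh -O check -o ControlPath=~/.ansible/cp/test2-22-vagrant vagrant@test2
# prints "Master running (pid=...)" while the master is alive
```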
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing this for molecule-vagrant, I have a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of the ``converge.yml`` should work
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world-readable temporary files
deprecated:
why: moved to a per plugin approach that is more flexible
version: "2.14"
alternatives: mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
default: False
description:
- This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task.
- It is useful when becoming an unprivileged user.
env: []
ini:
- {key: allow_world_readable_tmpfiles, section: defaults}
type: boolean
yaml: {key: defaults.allow_world_readable_tmpfiles}
version_added: "2.1"
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
  description: Accept list of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
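# Note: the sudoers change implied by the pipelining description above would
# look something like the following (illustrative only; exact policy is site-specific):
#   Defaults !requiretty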
ANSIBLE_SSH_ARGS:
# TODO: move to ssh plugin
default: -C -o ControlMaster=auto -o ControlPersist=60s
description:
- If set, this will override the Ansible default ssh arguments.
- In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate.
- Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used.
env: [{name: ANSIBLE_SSH_ARGS}]
ini:
- {key: ssh_args, section: ssh_connection}
yaml: {key: ssh_connection.ssh_args}
ANSIBLE_SSH_CONTROL_PATH:
# TODO: move to ssh plugin
default: null
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env: [{name: ANSIBLE_SSH_CONTROL_PATH}]
ini:
- {key: control_path, section: ssh_connection}
yaml: {key: ssh_connection.control_path}
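# Example ansible.cfg entry using ssh token substitution, as seen in the issue
# above (the doubled %% escapes ini-style interpolation):
#   [ssh_connection]
#   control_path = %(directory)s/%%h-%%p-%%r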
ANSIBLE_SSH_CONTROL_PATH_DIR:
# TODO: move to ssh plugin
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: ssh_connection}
yaml: {key: ssh_connection.control_path_dir}
ANSIBLE_SSH_EXECUTABLE:
# TODO: move to ssh plugin, note that ssh_utils refs this and needs to be updated if removed
default: ssh
description:
- This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
yaml: {key: ssh_connection.ssh_executable}
version_added: "2.2"
ANSIBLE_SSH_RETRIES:
# TODO: move to ssh plugin
default: 0
description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE'
env: [{name: ANSIBLE_SSH_RETRIES}]
ini:
- {key: retries, section: ssh_connection}
type: integer
yaml: {key: ssh_connection.retries}
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
  description: Sets the default value for the any_errors_fatal keyword; if True, task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
  description: This setting controls if become is skipped when the remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: enable/disable scanning sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (via the collection metadata key
`requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore`
skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution.
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: [error, warning, ignore]
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
name: Allow bare variable evaluation in conditionals
default: False
type: boolean
description:
    - With this setting on (True), a bare 'var' in a conditional is evaluated differently than 'var.subkey', as the first is evaluated
      directly while the second goes through the Jinja2 parser; note that 'false' strings in 'var' then get evaluated as booleans.
    - With this setting off they both evaluate the same, but when 'var' was the string 'false' it will no longer be evaluated as a boolean.
- Currently this setting defaults to 'True' but will soon change to 'False' and the setting itself will be removed in the future.
    - Expect this setting to eventually be deprecated after 2.12.
env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
ini:
- {key: conditional_bare_variables, section: defaults}
version_added: "2.8"
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
    - Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will issue a warning when one is received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: False
description:
- Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
- As of version 2.11, this is disabled by default.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
deprecated:
why: the command warnings feature is being removed
version: "2.14"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backwards-compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine late
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
      If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
  description: The user your login/remote user 'becomes' when using privilege escalation; most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
CALLABLE_ACCEPT_LIST:
name: Template 'callable' accept list
default: []
  description: Accept list of callable methods to be made available to template evaluation
env:
- name: ANSIBLE_CALLABLE_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLABLE_ENABLED'
- name: ANSIBLE_CALLABLE_ENABLED
version_added: '2.11'
ini:
- key: callable_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callable_enabled'
- key: callable_enabled
section: defaults
version_added: '2.11'
type: list
CONTROLLER_PYTHON_WARNING:
name: Running Older than Python 3.8 Warning
default: True
description: Toggle to control showing warnings related to running a Python version
older than Python 3.8 on the controller
env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}]
ini:
- {key: controller_python_warning, section: defaults}
type: boolean
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callback_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
default: ~
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering."
- "If not set, it will fallback to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module."
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
- "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
- "The 'smart' value means each new host that has no facts discovered will be scanned,
but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices: ['smart', 'explicit', 'implicit']
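# Example ansible.cfg entry selecting a fact gathering policy:
#   [defaults]
#   gathering = smart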
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
default: ['all']
description:
- Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
default: 10
description:
- Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
name: Make handler M(ansible.builtin.include) static
default: False
description:
- "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'."
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
ini:
- {key: handler_includes_static, section: defaults}
type: boolean
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: none as its already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
    - Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
      For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in the same file.
    - The Ansible project recommends you **avoid ``merge`` for new projects.**
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
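# Sketch of the recommended alternative to 'merge' described above (the
# ``combine`` filter; variable names are illustrative):
#   merged_conf: "{{ base_conf | combine(host_conf, recursive=True) }}"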
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ''
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
  style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
default:
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SCP_IF_SSH:
# TODO: move to ssh plugin
default: smart
description:
- "Preferred method to use when transferring files over ssh."
- When set to smart, Ansible will try them until one succeeds or they all fail.
- If set to True, it will force 'scp', if False it will use 'sftp'.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_SFTP_BATCH_MODE:
# TODO: move to ssh plugin
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: boolean
yaml: {key: ssh_connection.sftp_batch_mode}
DEFAULT_SSH_TRANSFER_METHOD:
# TODO: move to ssh plugin
default:
description: 'unused?'
# - "Preferred method to use when transferring files over ssh"
# - Setting to smart will try them until one succeeds or they all fail
#choices: ['sftp', 'scp', 'dd', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output, you can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger; this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
  a task is failed or unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
name: Task include static
default: False
description:
- The `include` tasks can be static or dynamic; this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task.
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
ini:
- {key: task_includes_static, section: defaults}
type: boolean
version_added: "2.1"
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: None, as its already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference undefined variable names, which are often the result of typos.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
default:
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
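# Example (illustrative; the labels and paths are placeholders): in ansible.cfg
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@prompt
# behaves like passing --vault-id dev@~/.vault_pass_dev --vault-id prod@prompt.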
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by adjusting this setting to 'ignore'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: ['warn', 'error', 'ignore']
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}]
ini:
- {key: connection_facts_modules, section: defaults}
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role or collection skeleton directory
default:
description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which collections are resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
  outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
HOST_KEY_CHECKING:
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices: ['warning', 'error', 'ignore']
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto_legacy
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The
fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed
later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default
value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases
that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the
default behavior will change to that of ``auto`` in a future Ansible release).
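# Example (illustrative): discovery can be bypassed by pinning an interpreter,
# e.g. ansible_python_interpreter=/usr/bin/python3 on a host in inventory, or
#   [defaults]
#   interpreter_python = /usr/bin/python3
# in ansible.cfg.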
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
centos: &rhelish
'6': /usr/bin/python
'8': /usr/libexec/platform-python
debian:
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
oracle: *rhelish
redhat: *rhelish
rhel: *rhelish
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- /usr/bin/python
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- python2.7
- python2.6
- /usr/libexec/platform-python
- /usr/bin/python3
- python
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
- If 'never' it will allow for the group name but warn about the issue.
- When 'ignore', it does the same as 'never', without issuing a warning.
- When 'always' it will replace any invalid characters with '_' (underscore) and warn the user.
- When 'silently', it does the same as 'always', without issuing a warning.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices: ['always', 'never', 'ignore', 'silently']
version_added: '2.8'
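# Example (illustrative): with 'always', an inventory group named 'web-servers'
# would be rewritten to 'web_servers' (with a warning); with 'never' the
# original name is kept and only a warning is issued.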
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description: Toggle to turn on inventory caching
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_facts
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows a return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
- The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory.
- The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory.
- The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices: [ top, bottom, all ]
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
- Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
- Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices: ['demand', 'start']
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Whitelist for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
  host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
  to a custom ssh configuration file path from which the bastion/jump host settings are read.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
When investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found out that ``meta: reset_connection`` is not working due to the reset() function of the ssh.py connection plugin not being able to find the connection socket. It turns out that molecule is using ``control_path = %(directory)s/%%h-%%p-%%r``. For instance, it's translated into ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does :
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Due to the content of the ControlPath argument, it will set ``cp_path`` to ``/home/vagrant/.ansible/cp/%h-%p-%r`` and of course, the ``os.path.exists(cp_path)`` will fail, making "meta: reset_connection" useless.
fwiw, looks like this bug has been introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be changing the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It may be better to interpolate the ssh tokens but I've no idea if it's really possible and how to do that.
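To make the idea concrete, here is a hedged sketch of what such an interpolation could look like (illustrative only — `_expand_cp_tokens` is a hypothetical helper, it only covers the `%h`/`%p`/`%r`/`%%` tokens, and it is not necessarily the fix that eventually landed):
```
import re

def _expand_cp_tokens(cp_path, host, port, user):
    # Substitute the ssh ControlPath tokens we can reproduce locally.
    # %C (a hash of the connection parameters) is deliberately not handled
    # here; mirroring OpenSSH's hashing would take more work.
    subs = {b'%h': host, b'%p': port, b'%r': user, b'%%': b'%'}
    return re.sub(b'%[hpr%]', lambda m: subs[m.group(0)], cp_path)

# In reset(), the existence check could then become (host/port/user must be
# bytes; self.host, self.port and self.user are assumed connection attributes):
#   cp_path = _expand_cp_tokens(cp_path, to_bytes(self.host),
#                               to_bytes(str(self.port)), to_bytes(self.user))
#   if os.path.exists(cp_path):
#       run_reset = True
```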
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing that for molecule-vagrant, I've a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of the ``converge.yml`` should work
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/config/manager.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import io
import os
import os.path
import sys
import stat
import tempfile
import traceback
from collections import namedtuple
from yaml import load as yaml_load
try:
# use C version if possible for speedup
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeLoader
from ansible.config.data import ConfigData
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils._text import to_text, to_bytes, to_native
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.six.moves import configparser
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.quoting import unquote
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils import py3compat
from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath
Plugin = namedtuple('Plugin', 'name type')
Setting = namedtuple('Setting', 'name value origin type')
INTERNAL_DEFS = {'lookup': ('_terms',)}
def _get_entry(plugin_type, plugin_name, config):
''' construct entry for requested config '''
entry = ''
if plugin_type:
entry += 'plugin_type: %s ' % plugin_type
if plugin_name:
entry += 'plugin: %s ' % plugin_name
entry += 'setting: %s ' % config
return entry
# FIXME: see if we can unify in module_utils with similar function used by argspec
def ensure_type(value, value_type, origin=None):
''' return a configuration variable with casting
:arg value: The value to ensure correct typing of
:kwarg value_type: The type of the value. This can be any of the following strings:
:boolean: sets the value to a True or False value
:bool: Same as 'boolean'
:integer: Sets the value to an integer or raises a ValueError
:int: Same as 'integer'
:float: Sets the value to a float or raises a ValueError
:list: Treats the value as a comma separated list. Split the value
and return it as a python list.
:none: Sets the value to None
:path: Expands any environment variables and tildes in the value.
:tmppath: Create a unique temporary directory inside of the directory
specified by value and return its path.
:temppath: Same as 'tmppath'
:tmp: Same as 'tmppath'
:pathlist: Treat the value as a typical PATH string. (On POSIX, this
means colon separated strings.) Split the value and then expand
each part for environment variables and tildes.
:pathspec: Treat the value as a PATH string. Expands any environment variables and
    tildes in the value.
:str: Sets the value to string types.
:string: Same as 'str'
'''
errmsg = ''
basedir = None
if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)):
basedir = origin
if value_type:
value_type = value_type.lower()
if value is not None:
if value_type in ('boolean', 'bool'):
value = boolean(value, strict=False)
elif value_type in ('integer', 'int'):
value = int(value)
elif value_type == 'float':
value = float(value)
elif value_type == 'list':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
elif not isinstance(value, Sequence):
errmsg = 'list'
elif value_type == 'none':
if value == "None":
value = None
if value is not None:
errmsg = 'None'
elif value_type == 'path':
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
else:
errmsg = 'path'
elif value_type in ('tmp', 'temppath', 'tmppath'):
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
if not os.path.exists(value):
makedirs_safe(value, 0o700)
prefix = 'ansible-local-%s' % os.getpid()
value = tempfile.mkdtemp(prefix=prefix, dir=value)
atexit.register(cleanup_tmp_file, value, warn=True)
else:
errmsg = 'temppath'
elif value_type == 'pathspec':
if isinstance(value, string_types):
value = value.split(os.pathsep)
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathspec'
elif value_type == 'pathlist':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathlist'
elif value_type in ('dict', 'dictionary'):
if not isinstance(value, Mapping):
errmsg = 'dictionary'
elif value_type in ('str', 'string'):
if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode, bool, int, float, complex)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
else:
errmsg = 'string'
# defaults to string type
elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
if errmsg:
raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value)))
return to_text(value, errors='surrogate_or_strict', nonstring='passthru')
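# Illustrative examples of ensure_type (not part of the original source):
#   ensure_type('yes', 'bool')      -> True
#   ensure_type('1, 2, 3', 'list')  -> ['1', '2', '3']
#   ensure_type('~/plays', 'path')  -> the expanded, normalized absolute path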
# FIXME: see if this can live in utils/path
def resolve_path(path, basedir=None):
''' resolve relative or 'variable' paths '''
if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}}
path = path.replace('{{CWD}}', os.getcwd())
return unfrackpath(path, follow=False, basedir=basedir)
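# Illustrative example (not part of the original source):
#   resolve_path('{{CWD}}/ansible.cfg') replaces the {{CWD}} marker with
#   os.getcwd() and then normalizes the result via unfrackpath().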
# FIXME: generic file type?
def get_config_type(cfile):
ftype = None
if cfile is not None:
ext = os.path.splitext(cfile)[-1]
if ext in ('.ini', '.cfg'):
ftype = 'ini'
elif ext in ('.yaml', '.yml'):
ftype = 'yaml'
else:
raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext)))
return ftype
# FIXME: can move to module_utils for use for ini plugins also?
def get_ini_config_value(p, entry):
''' returns the value of last ini entry found '''
value = None
if p is not None:
try:
value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
except Exception: # FIXME: actually report issues here
pass
return value
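# Illustrative example (not part of the original source): given a parser `p`
# loaded from an ini file, get_ini_config_value(p, {'section': 'defaults',
# 'key': 'forks'}) returns the raw string value, or None if the key is unset.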
def find_ini_config_file(warnings=None):
''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''
# FIXME: eventually deprecate ini configs
if warnings is None:
# Note: In this case, warnings does nothing
warnings = set()
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
SENTINEL = object
potential_paths = []
# Environment setting
path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL)
if path_from_env is not SENTINEL:
path_from_env = unfrackpath(path_from_env, follow=False)
if os.path.isdir(to_bytes(path_from_env)):
path_from_env = os.path.join(path_from_env, "ansible.cfg")
potential_paths.append(path_from_env)
# Current working directory
warn_cmd_public = False
try:
cwd = os.getcwd()
perms = os.stat(cwd)
cwd_cfg = os.path.join(cwd, "ansible.cfg")
if perms.st_mode & stat.S_IWOTH:
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
if os.path.exists(cwd_cfg):
warn_cmd_public = True
else:
potential_paths.append(to_text(cwd_cfg, errors='surrogate_or_strict'))
except OSError:
# If we can't access cwd, we'll simply skip it as a possible config source
pass
# Per user location
potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False))
# System location
potential_paths.append("/etc/ansible/ansible.cfg")
for path in potential_paths:
b_path = to_bytes(path)
if os.path.exists(b_path) and os.access(b_path, os.R_OK):
break
else:
path = None
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
if path_from_env != path and warn_cmd_public:
warnings.add(u"Ansible is being run in a world writable directory (%s),"
u" ignoring it as an ansible.cfg source."
u" For more information see"
u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir"
% to_text(cwd))
return path
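# Illustrative precedence (hypothetical layout): with ANSIBLE_CONFIG unset, a
# readable ./ansible.cfg in a non-world-writable cwd wins over ~/.ansible.cfg,
# which wins over /etc/ansible/ansible.cfg; None is returned if nothing is found.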
def _add_base_defs_deprecations(base_defs):
'''Add deprecation source 'ansible.builtin' to deprecations in base.yml'''
def process(entry):
if 'deprecated' in entry:
entry['deprecated']['collection_name'] = 'ansible.builtin'
for dummy, data in base_defs.items():
process(data)
for section in ('ini', 'env', 'vars'):
if section in data:
for entry in data[section]:
process(entry)
class ConfigManager(object):
DEPRECATED = []
WARNINGS = set()
def __init__(self, conf_file=None, defs_file=None):
self._base_defs = {}
self._plugins = {}
self._parsers = {}
self._config_file = conf_file
self.data = ConfigData()
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
_add_base_defs_deprecations(self._base_defs)
if self._config_file is None:
# set config using ini
self._config_file = find_ini_config_file(self.WARNINGS)
# consume configuration
if self._config_file:
# initialize parser and read config
self._parse_config_file()
# update constants
self.update_config_data()
def _read_config_yaml_file(self, yml_file):
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
yml_file = to_bytes(yml_file)
if os.path.exists(yml_file):
with open(yml_file, 'rb') as config_def:
return yaml_load(config_def, Loader=SafeLoader) or {}
raise AnsibleError(
"Missing base YAML definition file (bad install?): %s" % to_native(yml_file))
def _parse_config_file(self, cfile=None):
''' return flat configuration settings from file(s) '''
# TODO: take list of files with merge/nomerge
if cfile is None:
cfile = self._config_file
ftype = get_config_type(cfile)
if cfile is not None:
if ftype == 'ini':
kwargs = {}
if PY3:
kwargs['inline_comment_prefixes'] = (';',)
self._parsers[cfile] = configparser.ConfigParser(**kwargs)
with open(to_bytes(cfile), 'rb') as f:
try:
cfg_text = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError as e:
raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e)))
try:
if PY3:
self._parsers[cfile].read_string(cfg_text)
else:
cfg_file = io.StringIO(cfg_text)
self._parsers[cfile].readfp(cfg_file)
except configparser.Error as e:
raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e)))
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# with open(cfile, 'rb') as config_stream:
# self._parsers[cfile] = yaml.safe_load(config_stream)
else:
raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype))
def _find_yaml_config_files(self):
''' Load YAML Config Files in order, check merge flags, keep origin of settings'''
pass
def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None):
options = {}
defs = self.get_configuration_definitions(plugin_type, name)
for option in defs:
options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct)
return options
def get_plugin_vars(self, plugin_type, name):
pvars = []
for pdef in self.get_configuration_definitions(plugin_type, name).values():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
pvars.append(var_entry['name'])
return pvars
def get_configuration_definition(self, name, plugin_type=None, plugin_name=None):
ret = {}
if plugin_type is None:
ret = self._base_defs.get(name, None)
elif plugin_name is None:
ret = self._plugins.get(plugin_type, {}).get(name, None)
else:
ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None)
return ret
def get_configuration_definitions(self, plugin_type=None, name=None, ignore_private=False):
''' just list the possible settings, either base or for specific plugins or plugin '''
ret = {}
if plugin_type is None:
ret = self._base_defs
elif name is None:
ret = self._plugins.get(plugin_type, {})
else:
ret = self._plugins.get(plugin_type, {}).get(name, {})
if ignore_private:
for cdef in list(ret.keys()):
if cdef.startswith('_'):
del ret[cdef]
return ret
def _loop_entries(self, container, entry_list):
''' repeat code for value entry assignment '''
value = None
origin = None
for entry in entry_list:
name = entry.get('name')
try:
temp_value = container.get(name, None)
except UnicodeEncodeError:
self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name)))
continue
if temp_value is not None: # only set if entry is defined in container
# inline vault variables should be converted to a text string
if isinstance(temp_value, AnsibleVaultEncryptedUnicode):
temp_value = to_text(temp_value, errors='surrogate_or_strict')
value = temp_value
origin = name
# deal with deprecation of setting source, if used
if 'deprecated' in entry:
self.DEPRECATED.append((entry['name'], entry['deprecated']))
return value, origin
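# Illustrative usage (hypothetical entries): with container={'ANSIBLE_TIMEOUT': '30'}
# and entry_list=[{'name': 'ANSIBLE_TIMEOUT'}] this returns ('30', 'ANSIBLE_TIMEOUT');
# the last defined entry wins and its name is reported as the origin.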
def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' wrapper '''
try:
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
keys=keys, variables=variables, direct=direct)
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e)
return value
def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' Given a config key figure out the actual value and report on the origin of the settings '''
if cfile is None:
# use default config
cfile = self._config_file
# Note: sources that are lists listed in low to high precedence (last one wins)
value = None
origin = None
defs = self.get_configuration_definitions(plugin_type, plugin_name)
if config in defs:
aliases = defs[config].get('aliases', [])
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
direct_aliases = []
if direct:
direct_aliases = [direct[alias] for alias in aliases if alias in direct]
if direct and config in direct:
value = direct[config]
origin = 'Direct'
elif direct and direct_aliases:
value = direct_aliases[0]
origin = 'Direct'
else:
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
if variables and defs[config].get('vars'):
value, origin = self._loop_entries(variables, defs[config]['vars'])
origin = 'var: %s' % origin
# use playbook keywords if you have em
if value is None and keys:
if config in keys:
value = keys[config]
keyword = config
elif aliases:
for alias in aliases:
if alias in keys:
value = keys[alias]
keyword = alias
break
if value is not None:
origin = 'keyword: %s' % keyword
# env vars are next precedence
if value is None and defs[config].get('env'):
value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
origin = 'env: %s' % origin
# try config file entries next, if we have one
if self._parsers.get(cfile, None) is None:
self._parse_config_file(cfile)
if value is None and cfile is not None:
ftype = get_config_type(cfile)
if ftype and defs[config].get(ftype):
if ftype == 'ini':
# load from ini config
try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe
for ini_entry in defs[config]['ini']:
temp_value = get_ini_config_value(self._parsers[cfile], ini_entry)
if temp_value is not None:
value = temp_value
origin = cfile
if 'deprecated' in ini_entry:
self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated']))
except Exception as e:
sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e)))
elif ftype == 'yaml':
# FIXME: implement, also , break down key from defs (. notation???)
origin = cfile
# set default if we got here w/o a value
if value is None:
if defs[config].get('required', False):
if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}):
raise AnsibleError("No setting was provided for required configuration %s" %
to_native(_get_entry(plugin_type, plugin_name, config)))
else:
value = defs[config].get('default')
origin = 'default'
# skip typing as this is a templated default that will be resolved later in constants, which has needed vars
if plugin_type is None and isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')):
return value, origin
# ensure correct type, can raise exceptions on mismatched types
try:
value = ensure_type(value, defs[config].get('type'), origin=origin)
except ValueError as e:
if origin.startswith('env:') and value == '':
# this is empty env var for non string so we can set to default
origin = 'default'
value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin)
else:
raise AnsibleOptionsError('Invalid type for configuration option %s: %s' %
(to_native(_get_entry(plugin_type, plugin_name, config)), to_native(e)))
# deal with restricted values
if value is not None and 'choices' in defs[config] and defs[config]['choices'] is not None:
if value not in defs[config]['choices']:
raise AnsibleOptionsError('Invalid value "%s" for configuration option "%s", valid values are: %s' %
(value, to_native(_get_entry(plugin_type, plugin_name, config)), defs[config]['choices']))
# deal with deprecation of the setting
if 'deprecated' in defs[config] and origin != 'default':
self.DEPRECATED.append((config, defs[config].get('deprecated')))
else:
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
return value, origin
def initialize_plugin_configuration_definitions(self, plugin_type, name, defs):
if plugin_type not in self._plugins:
self._plugins[plugin_type] = {}
self._plugins[plugin_type][name] = defs
def update_config_data(self, defs=None, configfile=None):
''' really: update constants '''
if defs is None:
defs = self._base_defs
if configfile is None:
configfile = self._config_file
if not isinstance(defs, dict):
raise AnsibleOptionsError("Invalid configuration definition type: %s for %s" % (type(defs), defs))
# update the constant for config file
self.data.update_setting(Setting('CONFIG_FILE', configfile, '', 'string'))
origin = None
# env and config defs can have several entries, ordered in list from lowest to highest precedence
for config in defs:
if not isinstance(defs[config], dict):
raise AnsibleOptionsError("Invalid configuration definition '%s': type is %s" % (to_native(config), type(defs[config])))
# get value and origin
try:
value, origin = self.get_config_value_and_origin(config, configfile)
except Exception as e:
# Printing the problem here because, in the current code:
# (1) we can't reach the error handler for AnsibleError before we
# hit a different error due to lack of working config.
# (2) We don't have access to display yet because display depends on config
# being properly loaded.
#
# If we start getting double errors printed from this section of code, then the
# above problem #1 has been fixed. Revamp this to be more like the try: except
# in get_config_value() at that time.
sys.stderr.write("Unhandled error:\n %s\n\n" % traceback.format_exc())
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
# set the constant
self.data.update_setting(Setting(config, value, origin, defs[config].get('type', 'string')))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
When investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found out that ``meta: reset_connection`` is not working because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule is using ``control_path = %(directory)s/%%h-%%p-%%r``, which is translated into ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
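As a small illustration (not part of the original report), the literal ssh tokens survive because Ansible only interpolates the `%(directory)s` placeholder with Python `%` formatting; the doubled `%%h`/`%%p`/`%%r` escapes collapse to single `%` tokens that are left for ssh itself to expand:
```
>>> "%(directory)s/%%h-%%p-%%r" % dict(directory="/home/vagrant/.ansible/cp")
'/home/vagrant/.ansible/cp/%h-%p-%r'
```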
The reset code does:
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Because the ControlPath argument still contains the unexpanded ssh tokens, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, so the ``os.path.exists(cp_path)`` check fails, making "meta: reset_connection" useless.
FWIW, it looks like this bug has been introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be changing the test to ``if b'%' in cp_path or os.path.exists(cp_path)``, as sketched below. It may be better to interpolate the ssh tokens, but I have no idea whether that is really possible or how to do it.
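For illustration, a minimal sketch of that crude workaround applied to the reset snippet above (variable names taken from the snippet, untested):
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
    cp_path = cp_arg[0].split(b"=", 1)[-1]
    # A remaining '%' means unexpanded ssh tokens (%h/%p/%r), so the socket
    # path cannot be stat'ed directly; attempt the reset anyway.
    if b'%' in cp_path or os.path.exists(cp_path):
        run_reset = True
elif controlpersist:
    run_reset = True
```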
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing this for molecule-vagrant, I have a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of the ``converge.yml`` should work
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/playbook/play_context.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.module_utils.compat.paramiko import paramiko
from ansible.module_utils.six import iteritems
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.plugins import get_plugin_class
from ansible.utils.display import Display
from ansible.plugins.loader import get_shell_plugin
from ansible.utils.ssh_functions import check_for_controlpersist
display = Display()
__all__ = ['PlayContext']
TASK_ATTRIBUTE_OVERRIDES = (
'become',
'become_user',
'become_pass',
'become_method',
'become_flags',
'connection',
'docker_extra_args', # TODO: remove
'delegate_to',
'no_log',
'remote_user',
)
RESET_VARS = (
'ansible_connection',
'ansible_user',
'ansible_host',
'ansible_port',
# TODO: ???
'ansible_docker_extra_args',
'ansible_ssh_host',
'ansible_ssh_pass',
'ansible_ssh_port',
'ansible_ssh_user',
'ansible_ssh_private_key_file',
'ansible_ssh_pipelining',
'ansible_ssh_executable',
)
class PlayContext(Base):
'''
This class is used to consolidate the connection information for
hosts in a play and child tasks, where the task may override some
connection/authentication information.
'''
# base
_module_compression = FieldAttribute(isa='string', default=C.DEFAULT_MODULE_COMPRESSION)
_shell = FieldAttribute(isa='string')
_executable = FieldAttribute(isa='string', default=C.DEFAULT_EXECUTABLE)
# connection fields, some are inherited from Base:
# (connection, port, remote_user, environment, no_log)
_remote_addr = FieldAttribute(isa='string')
_password = FieldAttribute(isa='string')
_timeout = FieldAttribute(isa='int', default=C.DEFAULT_TIMEOUT)
_connection_user = FieldAttribute(isa='string')
_private_key_file = FieldAttribute(isa='string', default=C.DEFAULT_PRIVATE_KEY_FILE)
_pipelining = FieldAttribute(isa='bool', default=C.ANSIBLE_PIPELINING)
# networking modules
_network_os = FieldAttribute(isa='string')
# docker FIXME: remove these
_docker_extra_args = FieldAttribute(isa='string')
# ssh # FIXME: remove these
_ssh_executable = FieldAttribute(isa='string', default=C.ANSIBLE_SSH_EXECUTABLE)
_ssh_args = FieldAttribute(isa='string', default=C.ANSIBLE_SSH_ARGS)
_ssh_common_args = FieldAttribute(isa='string')
_sftp_extra_args = FieldAttribute(isa='string')
_scp_extra_args = FieldAttribute(isa='string')
_ssh_extra_args = FieldAttribute(isa='string')
_ssh_transfer_method = FieldAttribute(isa='string', default=C.DEFAULT_SSH_TRANSFER_METHOD)
# ???
_connection_lockfd = FieldAttribute(isa='int')
# privilege escalation fields
_become = FieldAttribute(isa='bool')
_become_method = FieldAttribute(isa='string')
_become_user = FieldAttribute(isa='string')
_become_pass = FieldAttribute(isa='string')
_become_exe = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_EXE)
_become_flags = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_FLAGS)
_prompt = FieldAttribute(isa='string')
# general flags
_verbosity = FieldAttribute(isa='int', default=0)
_only_tags = FieldAttribute(isa='set', default=set)
_skip_tags = FieldAttribute(isa='set', default=set)
_start_at_task = FieldAttribute(isa='string')
_step = FieldAttribute(isa='bool', default=False)
# "PlayContext.force_handlers should not be used, the calling code should be using play itself instead"
_force_handlers = FieldAttribute(isa='bool', default=False)
def __init__(self, play=None, passwords=None, connection_lockfd=None):
# Note: play is really not optional. The only time it could be omitted is when we create
# a PlayContext just so we can invoke its deserialize method to load it from a serialized
# data source.
super(PlayContext, self).__init__()
if passwords is None:
passwords = {}
self.password = passwords.get('conn_pass', '')
self.become_pass = passwords.get('become_pass', '')
self._become_plugin = None
self.prompt = ''
self.success_key = ''
# a file descriptor to be used during locking operations
self.connection_lockfd = connection_lockfd
# set options before play to allow play to override them
if context.CLIARGS:
self.set_attributes_from_cli()
if play:
self.set_attributes_from_play(play)
def set_attributes_from_plugin(self, plugin):
# generic derived from connection plugin, temporary for backwards compat, in the end we should not set play_context properties
# get options for plugins
options = C.config.get_configuration_definitions(get_plugin_class(plugin), plugin._load_name)
for option in options:
if option:
flag = options[option].get('name')
if flag:
setattr(self, flag, self.connection.get_option(flag))
def set_attributes_from_play(self, play):
self.force_handlers = play.force_handlers
def set_attributes_from_cli(self):
'''
Configures this connection information instance with data from
options specified by the user on the command line. These have a
lower precedence than those set on the play or host.
'''
if context.CLIARGS.get('timeout', False):
self.timeout = int(context.CLIARGS['timeout'])
# From the command line. These should probably be used directly by plugins instead
# For now, they are likely to be moved to FieldAttribute defaults
self.private_key_file = context.CLIARGS.get('private_key_file') # Else default
self.verbosity = context.CLIARGS.get('verbosity') # Else default
self.ssh_common_args = context.CLIARGS.get('ssh_common_args') # Else default
self.ssh_extra_args = context.CLIARGS.get('ssh_extra_args') # Else default
self.sftp_extra_args = context.CLIARGS.get('sftp_extra_args') # Else default
self.scp_extra_args = context.CLIARGS.get('scp_extra_args') # Else default
# Not every cli that uses PlayContext has these command line args so have a default
self.start_at_task = context.CLIARGS.get('start_at_task', None) # Else default
def set_task_and_variable_override(self, task, variables, templar):
'''
Sets attributes from the task if they are set, which will override
those from the play.
:arg task: the task object with the parameters that were set on it
:arg variables: variables from inventory
:arg templar: templar instance if templating variables is needed
'''
new_info = self.copy()
# loop through a subset of attributes on the task object and set
# connection fields based on their values
for attr in TASK_ATTRIBUTE_OVERRIDES:
if hasattr(task, attr):
attr_val = getattr(task, attr)
if attr_val is not None:
setattr(new_info, attr, attr_val)
# next, use the MAGIC_VARIABLE_MAPPING dictionary to update this
# connection info object with 'magic' variables from the variable list.
# If the value 'ansible_delegated_vars' is in the variables, it means
# we have a delegated-to host, so we check there first before looking
# at the variables in general
if task.delegate_to is not None:
# In the case of a loop, the delegated_to host may have been
# templated based on the loop variable, so we try and locate
# the host name in the delegated variable dictionary here
delegated_host_name = templar.template(task.delegate_to)
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(delegated_host_name, dict())
delegated_transport = C.DEFAULT_TRANSPORT
for transport_var in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if transport_var in delegated_vars:
delegated_transport = delegated_vars[transport_var]
break
# make sure this delegated_to host has something set for its remote
# address, otherwise we default to connecting to it by name. This
# may happen when users put an IP entry into their inventory, or if
# they rely on DNS for a non-inventory hostname
for address_var in ('ansible_%s_host' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_addr'):
if address_var in delegated_vars:
break
else:
display.debug("no remote address found for delegated host %s\nusing its name, so success depends on DNS resolution" % delegated_host_name)
delegated_vars['ansible_host'] = delegated_host_name
# reset the port back to the default if none was specified, to prevent
# the delegated host from inheriting the original host's setting
for port_var in ('ansible_%s_port' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('port'):
if port_var in delegated_vars:
break
else:
if delegated_transport == 'winrm':
delegated_vars['ansible_port'] = 5986
else:
delegated_vars['ansible_port'] = C.DEFAULT_REMOTE_PORT
# and likewise for the remote user
for user_var in ('ansible_%s_user' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_user'):
if user_var in delegated_vars and delegated_vars[user_var]:
break
else:
delegated_vars['ansible_user'] = task.remote_user or self.remote_user
else:
delegated_vars = dict()
# setup shell
for exe_var in C.MAGIC_VARIABLE_MAPPING.get('executable'):
if exe_var in variables:
setattr(new_info, 'executable', variables.get(exe_var))
attrs_considered = []
for (attr, variable_names) in iteritems(C.MAGIC_VARIABLE_MAPPING):
for variable_name in variable_names:
if attr in attrs_considered:
continue
# if this is a delegation task, ONLY use the delegated host's vars and avoid the vars of the host we delegate FOR
if task.delegate_to is not None:
if isinstance(delegated_vars, dict) and variable_name in delegated_vars:
setattr(new_info, attr, delegated_vars[variable_name])
attrs_considered.append(attr)
elif variable_name in variables:
setattr(new_info, attr, variables[variable_name])
attrs_considered.append(attr)
# no else, as no other vars should be considered
# become legacy updates -- from inventory file (inventory overrides
# commandline)
for become_pass_name in C.MAGIC_VARIABLE_MAPPING.get('become_pass'):
if become_pass_name in variables:
break
# make sure we get port defaults if needed
if new_info.port is None and C.DEFAULT_REMOTE_PORT is not None:
new_info.port = int(C.DEFAULT_REMOTE_PORT)
# special overrides for the connection setting
if len(delegated_vars) > 0:
# in the event that we were using local before make sure to reset the
# connection type to the default transport for the delegated-to host,
# if not otherwise specified
for connection_type in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if connection_type in delegated_vars:
break
else:
remote_addr_local = new_info.remote_addr in C.LOCALHOST
inv_hostname_local = delegated_vars.get('inventory_hostname') in C.LOCALHOST
if remote_addr_local and inv_hostname_local:
setattr(new_info, 'connection', 'local')
elif getattr(new_info, 'connection', None) == 'local' and (not remote_addr_local or not inv_hostname_local):
setattr(new_info, 'connection', C.DEFAULT_TRANSPORT)
# we store original in 'connection_user' for use of network/other modules that fallback to it as login user
# connection_user is to be deprecated once connection=local is removed, as local resets remote_user
if new_info.connection == 'local':
if not new_info.connection_user:
new_info.connection_user = new_info.remote_user
# set no_log to default if it was not previously set
if new_info.no_log is None:
new_info.no_log = C.DEFAULT_NO_LOG
if task.check_mode is not None:
new_info.check_mode = task.check_mode
if task.diff is not None:
new_info.diff = task.diff
return new_info
def set_become_plugin(self, plugin):
self._become_plugin = plugin
def make_become_cmd(self, cmd, executable=None):
""" helper function to create privilege escalation commands """
display.deprecated(
"PlayContext.make_become_cmd should not be used, the calling code should be using become plugins instead",
version="2.12", collection_name='ansible.builtin'
)
if not cmd or not self.become:
return cmd
become_method = self.become_method
# load/call become plugins here
plugin = self._become_plugin
if plugin:
options = {
'become_exe': self.become_exe or become_method,
'become_flags': self.become_flags or '',
'become_user': self.become_user,
'become_pass': self.become_pass
}
plugin.set_options(direct=options)
if not executable:
executable = self.executable
shell = get_shell_plugin(executable=executable)
cmd = plugin.build_become_command(cmd, shell)
# for backwards compat:
if self.become_pass:
self.prompt = plugin.prompt
else:
raise AnsibleError("Privilege escalation method not found: %s" % become_method)
return cmd
def update_vars(self, variables):
'''
Adds 'magic' variables relating to connections to the variable dictionary provided.
In case users need to access from the play, this is a legacy from runner.
'''
for prop, var_list in C.MAGIC_VARIABLE_MAPPING.items():
try:
if 'become' in prop:
continue
var_val = getattr(self, prop)
for var_opt in var_list:
if var_opt not in variables and var_val is not None:
variables[var_opt] = var_val
except AttributeError:
continue
def _get_attr_connection(self):
''' connections are special, this takes care of responding correctly '''
conn_type = None
if self._attributes['connection'] == 'smart':
conn_type = 'ssh'
# see if SSH can support ControlPersist if not use paramiko
if not check_for_controlpersist(self.ssh_executable) and paramiko is not None:
conn_type = "paramiko"
# if someone did `connection: persistent`, default it to using a persistent paramiko connection to avoid problems
elif self._attributes['connection'] == 'persistent' and paramiko is not None:
conn_type = 'paramiko'
if conn_type:
self.connection = conn_type
return self._attributes['connection']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
When investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found out that ``meta: reset_connection`` is not working because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule is using ``control_path = %(directory)s/%%h-%%p-%%r``, which is translated into ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does:
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Because the ControlPath argument still contains the unexpanded ssh tokens, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, so the ``os.path.exists(cp_path)`` check fails, making "meta: reset_connection" useless.
FWIW, it looks like this bug has been introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be changing the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It may be better to interpolate the ssh tokens, but I have no idea whether that is really possible or how to do it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing this for molecule-vagrant, I have a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of the ``converge.yml`` should work
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via ssh client binary
description:
- This connection plugin allows ansible to communicate to the target machines via normal ssh command line.
- Ansible does not expose a channel to allow communication between the user and the ssh process to accept
a password manually to decrypt an ssh key when using this connection plugin (which is the default). The
use of ``ssh-agent`` is highly recommended.
author: ansible (@core)
extends_documentation_fragment:
- connection_pipelining
version_added: historical
options:
host:
description: Hostname/ip to connect to.
default: inventory_hostname
vars:
- name: ansible_host
- name: ansible_ssh_host
host_key_checking:
description: Determines if ssh should check host keys
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description: Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
default: ''
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all ssh cli tools
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all ssh CLI tools
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
ssh_executable:
default: ssh
description:
- This defines the location of the ssh binary. It defaults to ``ssh`` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to ``sftp`` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to `scp` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
description: Extra arguments exclusive to the ``scp`` CLI
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
sftp_extra_args:
description: Extra arguments exclusive to the ``sftp`` CLI
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
ssh_extra_args:
description: Extra arguments exclusive to the 'ssh' CLI
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
retries:
# constant: ANSIBLE_SSH_RETRIES
description: Number of attempts to connect.
default: 3
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
default: 22
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the ssh client binary choose the user as it normally would
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
pipelining:
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
control_path:
description:
- This is the location to save ssh's ControlPath sockets; it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: 'yes'
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
scp_if_ssh:
default: smart
description:
- "Preferred method to use when transfering files over ssh"
- When set to smart, Ansible will try them until one succeeds or they all fail
- If set to True, it will force 'scp', if False it will use 'sftp'
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: 'yes'
description: add -tt to ssh commands to force tty allocation
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
'''
import errno
import fcntl
import hashlib
import os
import pty
import re
import subprocess
import time
from functools import wraps
from ansible import constants as C
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # Php always returns error 255
)
SSHPASS_AVAILABLE = None
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display):
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
# sshpass returns codes are 1-6. We handle 5 previously, so this catches other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
if signature in return_tuple[1]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(C.ANSIBLE_SSH_RETRIES) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = args[0]
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = args[0]
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = C.ANSIBLE_SSH_CONTROL_PATH
self.control_path_dir = C.ANSIBLE_SSH_CONTROL_PATH_DIR
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
@staticmethod
def _create_control_path(host, port, user, connection=None, pid=None):
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
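# Illustrative usage (hypothetical values): _create_control_path('web1', 22, 'admin')
# returns something like '%(directory)s/1f9a3b4c5d' -- the first ten hex digits of
# the sha1 digest, with the %(directory)s placeholder left for _build_command to fill.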
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
# distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
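# Illustrative usage: _persistence_controls([b'ssh', b'-o', b'ControlPersist=60s'])
# returns (True, False): ControlPersist was requested but no ControlPath was set,
# so _build_command will add one.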
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
more than once so it must be persistent (ie: a list is okay but a
StringIO would not)
:arg explanation: A text string explaining why the arguments
were added. It will be displayed with a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self._play_context.remote_addr)
b_command += b_args
def _build_command(self, binary, subsystem, *other_args):
'''
Takes an executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command
wrapped in local ssh shell commands and ready for execution.
:arg binary: actual executable to use to execute command.
:arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names.
:arg other_args: any additional arguments to pass through to the ssh binary
'''
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
if conn_password:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords, you must install the sshpass program")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if subsystem == 'sftp' and C.DEFAULT_SFTP_BATCH_MODE:
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if self._play_context.verbosity > 3:
b_command.append(b'-vvv')
#
# Next, we add [ssh_connection]ssh_args from ansible.cfg.
#
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments controlled by configuration file settings
# (e.g. host_key_checking) or inventory variables (ansible_ssh_port) or
# a combination thereof.
if not C.HOST_KEY_CHECKING:
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
if self._play_context.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self._play_context.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self._play_context.private_key_file
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
user = self._play_context.remote_user
if user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self._play_context.remote_user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(self._play_context.timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = getattr(self._play_context, opt, None)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"PlayContext set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
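# Note: the % formatting below only fills in %(directory)s; ssh tokens written
# as %%h/%%p/%%r in the configuration survive as literal %h/%p/%r for ssh
# itself to expand (see the control_path option documentation above).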
b_args = (b"-o", b"ControlPath=" + to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
def _send_initial_data(self, fh, in_data, ssh_process):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
# We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen
display_cmd = u' '.join(shlex_quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self._play_context.timeout
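# make stdout/stderr non-blocking so the read() calls below return
# immediately with whatever data is available instead of stalling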
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
# TODO: bcoca would like to use SelectSelector() when open
# filehandles is low, then switch to more efficient ones when higher.
# select is faster when filehandles is low.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
# that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
# We should not see further writes to the stdout/stderr file
# descriptors after the process has closed, set the select
# timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if C.HOST_KEY_CHECKING:
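# sshpass exits with status 6 when the remote host key is unknown
# and it cannot confirm the new key non-interactively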
if cmd[0] == b"sshpass" and p.returncode == 6:
raise AnsibleError('Using an SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
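# an exit status of 255 comes from ssh itself (a connection-level
# failure), not from the remote command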
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self._play_context.ssh_transfer_method
if ssh_transfer_method is not None:
if ssh_transfer_method not in ('smart', 'sftp', 'scp', 'piped'):
raise AnsibleOptionsError('transfer_method needs to be one of [smart|sftp|scp|piped]')
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
scp_if_ssh = C.DEFAULT_SCP_IF_SSH
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex_quote(in_path), shlex_quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path):
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self._play_context.remote_addr)
if getattr(self._shell, "_IS_WINDOWS", False):
# Become method 'runas' is done in the wrapper that is executed,
# need to disable sudoable so the bare_run is not waiting for a
# prompt that will not occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable') or self._play_context.ssh_executable
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
cmd = self._build_command(self.get_option('ssh_executable') or self._play_context.ssh_executable, 'ssh', '-O', 'stop', self.host)
controlpersist, controlpath = self._persistence_controls(cmd)
cp_arg = [a for a in cmd if a.startswith(b"ControlPath=")]
# only run the reset if the ControlPath already exists or if it isn't
# configured and ControlPersist is set
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
if run_reset:
display.vvv(u'sending stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
def close(self):
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
While investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found that ``meta: reset_connection`` does not work because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule uses ``control_path = %(directory)s/%%h-%%p-%%r``, which is translated into ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does:
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Because the ControlPath argument still contains the ssh tokens, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, so ``os.path.exists(cp_path)`` can never succeed, making ``meta: reset_connection`` a no-op.
FWIW, it looks like this bug was introduced by the fix for https://github.com/ansible/ansible/issues/42991.
A crude workaround would be changing the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It might be better to interpolate the ssh tokens, but I have no idea whether that is really possible or how to do it.
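A minimal sketch of that crude workaround inside reset() (illustrative only, reusing the variables from the snippet above):
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
    cp_path = cp_arg[0].split(b"=", 1)[-1]
    # A ControlPath still containing ssh %-tokens (%h/%p/%r) can never
    # match an existing file, so treat it as configured and reset anyway.
    if b'%' in cp_path or os.path.exists(cp_path):
        run_reset = True
elif controlpersist:
    run_reset = True
```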
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing this for molecule-vagrant, I have a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
Running ``converge.yml`` should succeed.
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
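# Re-evaluate a task's changed_when/failed_when conditionals against its
# result, mutating the result dict in place.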
def post_process_whens(result, task, templar):
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
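# Target of the background results-reader thread: drains the strategy's
# final queue, forwarding CallbackSend objects to the TQM and routing
# TaskResults into the handler or regular result deque.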
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
# return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
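# every worker slot was busy on this pass; sleep briefly before rescanning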
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def get_original_host(host_name):
# FIXME: this should not need x2 _inventory
host_name = to_text(host_name)
if host_name in self._inventory.hosts:
return self._inventory.hosts[host_name]
else:
return self._inventory.get_host(host_name)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
# get the original host and task. We then assign them to the TaskResult for use in callbacks/etc.
original_host = get_original_host(task_result._host)
queue_cache_entry = (original_host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._host = original_host
task_result._task = original_task
# send callbacks for 'non final' results
if '_ansible_retry' in task_result._result:
self._tqm.send_callback('v2_runner_retry', task_result)
continue
elif '_ansible_item_result' in task_result._result:
if task_result.is_failed() or task_result.is_unreachable():
self._tqm.send_callback('v2_runner_item_on_failed', task_result)
elif task_result.is_skipped():
self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
else:
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
self._tqm.send_callback('v2_runner_item_on_ok', task_result)
continue
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
state, _ = iterator.get_next_task_for_host(h, peek=True)
iterator.mark_host_failed(h)
state, new_task = iterator.get_next_task_for_host(h, peek=True)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=original_task.serialize(),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
# find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
# pop tags out of the include args, if they were specified there, and assign
# them to the include. If the include already had tags specified, we raise an
# error so that users know not to specify them both ways
tags = included_file._task.vars.pop('tags', [])
if isinstance(tags, string_types):
tags = tags.split(',')
if len(tags) > 0:
if len(included_file._task.tags) > 0:
raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). "
"Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement",
obj=included_file._task._ds)
display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
version='2.12', collection_name='ansible.builtin')
included_file._task.tags = tags
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
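# a run_once handler (or an action that bypasses the host loop) only needs to be queued for one host, so stop iterating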
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
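# if a persistent connection socket is being tracked for this host, reuse it for the reset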
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
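# no tracked socket for this host; load the connection plugin directly, passing os.devnull in place of a real stdin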
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
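# connection plugins report their persistent socket path back via the internal '_ansible_socket' task arg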
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
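# evaluate a Python expression in the debugger scope and return its value, reporting any exception first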
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
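# compile in 'single' mode so bare expressions echo their result, as in the interactive interpreter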
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
While investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found that ``meta: reset_connection`` does not work because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule uses ``control_path = %(directory)s/%%h-%%p-%%r``, which ends up as ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does:
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Because the ControlPath argument still contains unexpanded ssh tokens, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, and the ``os.path.exists(cp_path)`` check of course returns False, making "meta: reset_connection" a no-op.
FWIW, it looks like this bug was introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be to change the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It may be better to interpolate the ssh tokens, but I have no idea whether that is really possible or how to do it.
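To make the workaround concrete, here is a minimal, self-contained sketch of that check (wrapped in a standalone function for readability; the actual reset() code is not structured this way):
```python
import os

def should_run_reset(controlpersist, cp_arg):
    # Crude workaround sketch: a ControlPath that still contains ssh
    # percent-tokens (%h, %p, %r) can never pass os.path.exists(), so
    # treat it as "a socket may exist" and run the reset anyway.
    if controlpersist and len(cp_arg) > 0:
        cp_path = cp_arg[0].split(b"=", 1)[-1]
        return b'%' in cp_path or os.path.exists(cp_path)
    return bool(controlpersist)

# should_run_reset(True, [b'ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r'])
# -> True, so "meta: reset_connection" would actually attempt the reset.
```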
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing this for molecule-vagrant, I have a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of ``converge.yml`` should succeed, with the final assert confirming the vagrant user is in testgroup.
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/utils/ssh_functions.py
|
# (c) 2016, James Tanner
# (c) 2016, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import subprocess
from ansible import constants as C
from ansible.module_utils._text import to_bytes
from ansible.module_utils.compat.paramiko import paramiko
_HAS_CONTROLPERSIST = {}
def check_for_controlpersist(ssh_executable):
try:
# If we've already checked this executable
return _HAS_CONTROLPERSIST[ssh_executable]
except KeyError:
pass
b_ssh_exec = to_bytes(ssh_executable, errors='surrogate_or_strict')
has_cp = True
try:
cmd = subprocess.Popen([b_ssh_exec, '-o', 'ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if b"Bad configuration option" in err or b"Usage:" in err:
has_cp = False
except OSError:
has_cp = False
_HAS_CONTROLPERSIST[ssh_executable] = has_cp
return has_cp
def set_default_transport():
# deal with 'smart' connection .. one time ..
if C.DEFAULT_TRANSPORT == 'smart':
# TODO: check if we can deprecate this as ssh w/o control persist should
# not be as common anymore.
# see if SSH can support ControlPersist if not use paramiko
if not check_for_controlpersist(C.ANSIBLE_SSH_EXECUTABLE) and paramiko is not None:
C.DEFAULT_TRANSPORT = "paramiko"
else:
C.DEFAULT_TRANSPORT = "ssh"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
While investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found that ``meta: reset_connection`` does not work because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule uses ``control_path = %(directory)s/%%h-%%p-%%r``, which ends up as ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does:
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Because the ControlPath argument still contains unexpanded ssh tokens, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, and the ``os.path.exists(cp_path)`` check of course returns False, making "meta: reset_connection" a no-op.
FWIW, it looks like this bug was introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be to change the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It may be better to interpolate the ssh tokens, but I have no idea whether that is really possible or how to do it.
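If one wanted to interpolate the tokens instead, here is a rough sketch (purely illustrative: ``expand_control_path`` is a hypothetical helper, not part of the plugin) of replacing the common tokens before the existence check:
```python
import os

def expand_control_path(b_cp_path, host, port, user):
    # Expand the most common ssh ControlPath tokens so the socket file can
    # actually be stat()ed: %h = host, %p = port, %r = remote user.
    b_expanded = (b_cp_path
                  .replace(b'%h', host.encode())
                  .replace(b'%p', str(port).encode())
                  .replace(b'%r', user.encode()))
    return b_expanded

path = expand_control_path(b'/home/vagrant/.ansible/cp/%h-%p-%r', 'test2', 22, 'vagrant')
print(path, os.path.exists(path))  # b'/home/vagrant/.ansible/cp/test2-22-vagrant' False
```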
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing this for molecule-vagrant, I have a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of ``converge.yml`` should succeed, with the final assert confirming the vagrant user is in testgroup.
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
test/integration/targets/connection_windows_ssh/runme.sh
|
#!/usr/bin/env bash
set -eux
# We need to run these tests with both the powershell and cmd shell type
### cmd tests - no DefaultShell set ###
ansible -i ../../inventory.winrm localhost \
-m template \
-a "src=test_connection.inventory.j2 dest=${OUTPUT_DIR}/test_connection.inventory" \
-e "test_shell_type=cmd" \
"$@"
# https://github.com/PowerShell/Win32-OpenSSH/wiki/DefaultShell
ansible -i ../../inventory.winrm windows \
-m win_regedit \
-a "path=HKLM:\\\\SOFTWARE\\\\OpenSSH name=DefaultShell state=absent" \
"$@"
# Need to flush the connection to ensure we get a new shell for the next tests
ansible -i "${OUTPUT_DIR}/test_connection.inventory" windows \
-m meta -a "reset_connection" \
"$@"
# sftp
./windows.sh "$@"
# scp
ANSIBLE_SCP_IF_SSH=true ./windows.sh "$@"
# other tests not part of the generic connection test framework
ansible-playbook -i "${OUTPUT_DIR}/test_connection.inventory" tests.yml \
"$@"
### powershell tests - explicit DefaultShell set ###
# we do this last as the default shell on our CI instances is set to PowerShell
ansible -i ../../inventory.winrm localhost \
-m template \
-a "src=test_connection.inventory.j2 dest=${OUTPUT_DIR}/test_connection.inventory" \
-e "test_shell_type=powershell" \
"$@"
# ensure the default shell is set to PowerShell
ansible -i ../../inventory.winrm windows \
-m win_regedit \
-a "path=HKLM:\\\\SOFTWARE\\\\OpenSSH name=DefaultShell data=C:\\\\Windows\\\\System32\\\\WindowsPowerShell\\\\v1.0\\\\powershell.exe" \
"$@"
ansible -i "${OUTPUT_DIR}/test_connection.inventory" windows \
-m meta -a "reset_connection" \
"$@"
./windows.sh "$@"
ANSIBLE_SCP_IF_SSH=true ./windows.sh "$@"
ansible-playbook -i "${OUTPUT_DIR}/test_connection.inventory" tests.yml \
"$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,341 |
ssh connection reset and ssh tokens in controlpath
|
##### SUMMARY
While investigating https://github.com/ansible-community/molecule-vagrant/issues/1, I found that ``meta: reset_connection`` does not work because the reset() function of the ssh.py connection plugin cannot find the connection socket. It turns out that molecule uses ``control_path = %(directory)s/%%h-%%p-%%r``, which ends up as ``ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r`` on the ssh command line.
The reset code does:
```
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
```
Because the ControlPath argument still contains unexpanded ssh tokens, ``cp_path`` is set to ``/home/vagrant/.ansible/cp/%h-%p-%r``, and the ``os.path.exists(cp_path)`` check of course returns False, making "meta: reset_connection" a no-op.
FWIW, it looks like this bug was introduced by the fix for https://github.com/ansible/ansible/issues/42991
A crude workaround would be to change the test to ``if b'%' in cp_path or os.path.exists(cp_path)``. It may be better to interpolate the ssh tokens, but I have no idea whether that is really possible or how to do it.
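For reference, ssh itself can expand the ControlPath tokens: asking the multiplexing master to stop sidesteps the path check entirely. A hedged sketch (host and user values are taken from the reproduction above):
```python
import subprocess

# Let ssh interpolate %h/%p/%r itself and tell the master to exit; this
# avoids stat()ing a path that still contains unexpanded tokens.
cmd = ['ssh', '-o', 'ControlPath=/home/vagrant/.ansible/cp/%h-%p-%r',
       '-O', 'stop', 'vagrant@test2']
rc = subprocess.call(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
print('master stopped' if rc == 0 else 'no master to stop (rc=%d)' % rc)
```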
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ssh connection plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible --version
ansible 2.9.6
config file = /home/rtp/devel/hupstream/ansible/molecule-vagrant-zuul/t/ansible.cfg
configured module search path = ['/home/rtp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rtp/.local/lib/python3.7/site-packages/ansible
executable location = /home/rtp/.local/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
pipelining = True
```
##### OS / ENVIRONMENT
Debian stable
##### STEPS TO REPRODUCE
Since I was testing this for molecule-vagrant, I have a Vagrantfile and a converge.yml file:
Vagrantfile:
```
Vagrant.configure("2") do |c|
c.vm.define 'test2' do |v|
v.vm.hostname = 'test2'
v.vm.box = 'debian/buster64'
end
c.vm.provision :ansible do |ansible|
ansible.playbook = "converge.yml"
end
end
```
converge.yml
```
---
- name: Converge
hosts: all
gather_facts: false
tasks:
- name: Create test group
group:
name: testgroup
become: true
- name: Add vagrant user to test group
user:
name: vagrant
groups: testgroup
append: yes
become: true
- name: reset connection
meta: reset_connection
- name: Get vagrant user info
command: id -nG
register: user_grps
- name: Print user_grps
debug:
var: user_grps
- name: Check user in vagrant group
assert:
that:
- "'testgroup' in user_grps.stdout.split(' ')"
```
ansible.cfg
```
[ssh_connection]
control_path = %(directory)s/%%h-%%p-%%r
```
##### EXPECTED RESULTS
The run of ``converge.yml`` should succeed, with the final assert confirming the vagrant user is in testgroup.
##### ACTUAL RESULTS
```
TASK [Check user in vagrant group] *********************************************
fatal: [test2]: FAILED! => {
"assertion": "'testgroup' in user_grps.stdout.split(' ')",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
```
|
https://github.com/ansible/ansible/issues/68341
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-03-19T14:35:31Z |
python
| 2021-03-03T20:25:16Z |
test/units/plugins/connection/test_ssh.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from io import StringIO
import pytest
from ansible import constants as C
from ansible.errors import AnsibleAuthenticationFailure
from units.compat import unittest
from units.compat.mock import patch, MagicMock, PropertyMock
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.module_utils.compat.selectors import SelectorKey, EVENT_READ
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.connection import ssh
from ansible.plugins.loader import connection_loader, become_loader
class TestConnectionBaseClass(unittest.TestCase):
def test_plugins_connection_ssh_module(self):
play_context = PlayContext()
play_context.prompt = (
'[sudo via ansible, key=ouzmdnewuhucvuaabtjmweasarviygqq] password: '
)
in_stream = StringIO()
self.assertIsInstance(ssh.Connection(play_context, in_stream), ssh.Connection)
def test_plugins_connection_ssh_basic(self):
pc = PlayContext()
new_stdin = StringIO()
conn = ssh.Connection(pc, new_stdin)
# connect just returns self, so assert that
res = conn._connect()
self.assertEqual(conn, res)
ssh.SSHPASS_AVAILABLE = False
self.assertFalse(conn._sshpass_available())
ssh.SSHPASS_AVAILABLE = True
self.assertTrue(conn._sshpass_available())
with patch('subprocess.Popen') as p:
ssh.SSHPASS_AVAILABLE = None
p.return_value = MagicMock()
self.assertTrue(conn._sshpass_available())
ssh.SSHPASS_AVAILABLE = None
p.return_value = None
p.side_effect = OSError()
self.assertFalse(conn._sshpass_available())
conn.close()
self.assertFalse(conn._connected)
def test_plugins_connection_ssh__build_command(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command('ssh', 'ssh')
def test_plugins_connection_ssh_exec_command(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._build_command.return_value = 'ssh something something'
conn._run = MagicMock()
conn._run.return_value = (0, 'stdout', 'stderr')
conn.get_option = MagicMock()
conn.get_option.return_value = True
res, stdout, stderr = conn.exec_command('ssh')
res, stdout, stderr = conn.exec_command('ssh', 'this is some data')
def test_plugins_connection_ssh__examine_output(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.set_become_plugin(become_loader.get('sudo'))
conn.check_password_prompt = MagicMock()
conn.check_become_success = MagicMock()
conn.check_incorrect_password = MagicMock()
conn.check_missing_password = MagicMock()
def _check_password_prompt(line):
if b'foo' in line:
return True
return False
def _check_become_success(line):
if b'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz' in line:
return True
return False
def _check_incorrect_password(line):
if b'incorrect password' in line:
return True
return False
def _check_missing_password(line):
if b'bad password' in line:
return True
return False
conn.become.check_password_prompt = MagicMock(side_effect=_check_password_prompt)
conn.become.check_become_success = MagicMock(side_effect=_check_become_success)
conn.become.check_incorrect_password = MagicMock(side_effect=_check_incorrect_password)
conn.become.check_missing_password = MagicMock(side_effect=_check_missing_password)
# test examining output for prompt
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = True
conn.become.prompt = True
def get_option(option):
if option == 'become_pass':
return 'password'
return None
conn.become.get_option = get_option
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nfoo\nline 3\nthis should be the remainder', False)
self.assertEqual(output, b'line 1\nline 2\nline 3\n')
self.assertEqual(unprocessed, b'this should be the remainder')
self.assertTrue(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for become prompt
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = u'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz'
conn.become.success = u'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz'
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nBECOME-SUCCESS-abcdefghijklmnopqrstuvxyz\nline 3\n', False)
self.assertEqual(output, b'line 1\nline 2\nline 3\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertTrue(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for become failure
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = None
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nincorrect password\n', True)
self.assertEqual(output, b'line 1\nline 2\nincorrect password\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertTrue(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for missing password
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = None
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nbad password\n', True)
self.assertEqual(output, b'line 1\nbad password\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertTrue(conn._flags['become_nopasswd_error'])
@patch('time.sleep')
@patch('os.path.exists')
def test_plugins_connection_ssh_put_file(self, mock_ospe, mock_sleep):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._bare_run = MagicMock()
mock_ospe.return_value = True
conn._build_command.return_value = 'some command to run'
conn._bare_run.return_value = (0, '', '')
conn.host = "some_host"
C.ANSIBLE_SSH_RETRIES = 9
# Test with C.DEFAULT_SCP_IF_SSH set to smart
# Test when SFTP works
C.DEFAULT_SCP_IF_SSH = 'smart'
expected_in_data = b' '.join((b'put', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# Test when SFTP doesn't work but SCP does
conn._bare_run.side_effect = [(1, 'stdout', 'some errors'), (0, '', '')]
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn._bare_run.side_effect = None
# test with C.DEFAULT_SCP_IF_SSH enabled
C.DEFAULT_SCP_IF_SSH = True
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn.put_file(u'/path/to/in/file/with/unicode-fΓΆγ©', u'/path/to/dest/file/with/unicode-fΓΆγ©')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
# test with C.DEFAULT_SCP_IF_SSH disabled
C.DEFAULT_SCP_IF_SSH = False
expected_in_data = b' '.join((b'put', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
expected_in_data = b' '.join((b'put',
to_bytes(shlex_quote('/path/to/in/file/with/unicode-fΓΆγ©')),
to_bytes(shlex_quote('/path/to/dest/file/with/unicode-fΓΆγ©')))) + b'\n'
conn.put_file(u'/path/to/in/file/with/unicode-fΓΆγ©', u'/path/to/dest/file/with/unicode-fΓΆγ©')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# test that a non-zero rc raises an error
conn._bare_run.return_value = (1, 'stdout', 'some errors')
self.assertRaises(AnsibleError, conn.put_file, '/path/to/bad/file', '/remote/path/to/file')
# test that a not-found path raises an error
mock_ospe.return_value = False
conn._bare_run.return_value = (0, 'stdout', '')
self.assertRaises(AnsibleFileNotFound, conn.put_file, '/path/to/bad/file', '/remote/path/to/file')
@patch('time.sleep')
def test_plugins_connection_ssh_fetch_file(self, mock_sleep):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._bare_run = MagicMock()
conn._load_name = 'ssh'
conn._build_command.return_value = 'some command to run'
conn._bare_run.return_value = (0, '', '')
conn.host = "some_host"
C.ANSIBLE_SSH_RETRIES = 9
# Test with C.DEFAULT_SCP_IF_SSH set to smart
# Test when SFTP works
C.DEFAULT_SCP_IF_SSH = 'smart'
expected_in_data = b' '.join((b'get', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.set_options({})
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# Test when SFTP doesn't work but SCP does
conn._bare_run.side_effect = [(1, 'stdout', 'some errors'), (0, '', '')]
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn._bare_run.side_effect = None
# test with C.DEFAULT_SCP_IF_SSH enabled
C.DEFAULT_SCP_IF_SSH = True
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn.fetch_file(u'/path/to/in/file/with/unicode-fΓΆγ©', u'/path/to/dest/file/with/unicode-fΓΆγ©')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
# test with C.DEFAULT_SCP_IF_SSH disabled
C.DEFAULT_SCP_IF_SSH = False
expected_in_data = b' '.join((b'get', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
expected_in_data = b' '.join((b'get',
to_bytes(shlex_quote('/path/to/in/file/with/unicode-fΓΆγ©')),
to_bytes(shlex_quote('/path/to/dest/file/with/unicode-fΓΆγ©')))) + b'\n'
conn.fetch_file(u'/path/to/in/file/with/unicode-fΓΆγ©', u'/path/to/dest/file/with/unicode-fΓΆγ©')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# test that a non-zero rc raises an error
conn._bare_run.return_value = (1, 'stdout', 'some errors')
self.assertRaises(AnsibleError, conn.fetch_file, '/path/to/bad/file', '/remote/path/to/file')
class MockSelector(object):
def __init__(self):
self.files_watched = 0
self.register = MagicMock(side_effect=self._register)
self.unregister = MagicMock(side_effect=self._unregister)
self.close = MagicMock()
self.get_map = MagicMock(side_effect=self._get_map)
self.select = MagicMock()
def _register(self, *args, **kwargs):
self.files_watched += 1
def _unregister(self, *args, **kwargs):
self.files_watched -= 1
def _get_map(self, *args, **kwargs):
return self.files_watched
@pytest.fixture
def mock_run_env(request, mocker):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.set_become_plugin(become_loader.get('sudo'))
conn._send_initial_data = MagicMock()
conn._examine_output = MagicMock()
conn._terminate_process = MagicMock()
conn._load_name = 'ssh'
conn.sshpass_pipe = [MagicMock(), MagicMock()]
request.cls.pc = pc
request.cls.conn = conn
mock_popen_res = MagicMock()
mock_popen_res.poll = MagicMock()
mock_popen_res.wait = MagicMock()
mock_popen_res.stdin = MagicMock()
mock_popen_res.stdin.fileno.return_value = 1000
mock_popen_res.stdout = MagicMock()
mock_popen_res.stdout.fileno.return_value = 1001
mock_popen_res.stderr = MagicMock()
mock_popen_res.stderr.fileno.return_value = 1002
mock_popen_res.returncode = 0
request.cls.mock_popen_res = mock_popen_res
mock_popen = mocker.patch('subprocess.Popen', return_value=mock_popen_res)
request.cls.mock_popen = mock_popen
request.cls.mock_selector = MockSelector()
mocker.patch('ansible.module_utils.compat.selectors.DefaultSelector', lambda: request.cls.mock_selector)
request.cls.mock_openpty = mocker.patch('pty.openpty')
mocker.patch('fcntl.fcntl')
mocker.patch('os.write')
mocker.patch('os.close')
@pytest.mark.usefixtures('mock_run_env')
class TestSSHConnectionRun(object):
# FIXME:
# These tests are little more than a smoketest. Need to enhance them
# a bit to check that they're calling the relevant functions and making
# complete coverage of the code paths
def test_no_escalation(self):
self.mock_popen_res.stdout.read.side_effect = [b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"my_stderr"]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
assert return_code == 0
assert b_stdout == b'my_stdout\nsecond_line'
assert b_stderr == b'my_stderr'
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_with_password(self):
# test with a password set to trigger the sshpass write
self.pc.password = '12345'
self.mock_popen_res.stdout.read.side_effect = [b"some data", b"", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run(["ssh", "is", "a", "cmd"], "this is more data")
assert return_code == 0
assert b_stdout == b'some data'
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is more data'
def _password_with_prompt_examine_output(self, source, state, b_chunk, sudoable):
if state == 'awaiting_prompt':
self.conn._flags['become_prompt'] = True
elif state == 'awaiting_escalation':
self.conn._flags['become_success'] = True
return (b'', b'')
def test_password_with_prompt(self):
# test with password prompting enabled
self.pc.password = None
self.conn.become.prompt = b'Password:'
self.conn._examine_output.side_effect = self._password_with_prompt_examine_output
self.mock_popen_res.stdout.read.side_effect = [b"Password:", b"Success", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ),
(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
assert return_code == 0
assert b_stdout == b''
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_password_with_become(self):
# test with some become settings
self.pc.prompt = b'Password:'
self.conn.become.prompt = b'Password:'
self.pc.become = True
self.pc.success_key = 'BECOME-SUCCESS-abcdefg'
self.conn.become._id = 'abcdefg'
self.conn._examine_output.side_effect = self._password_with_prompt_examine_output
self.mock_popen_res.stdout.read.side_effect = [b"Password:", b"BECOME-SUCCESS-abcdefg", b"abc"]
self.mock_popen_res.stderr.read.side_effect = [b"123"]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
self.mock_popen_res.stdin.flush.assert_called_once_with()
assert return_code == 0
assert b_stdout == b'abc'
assert b_stderr == b'123'
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_password_without_data(self):
# simulate no data input but Popen using new pty's fails
self.mock_popen.return_value = None
self.mock_popen.side_effect = [OSError(), self.mock_popen_res]
# simulate no data input
self.mock_openpty.return_value = (98, 99)
self.mock_popen_res.stdout.read.side_effect = [b"some data", b"", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "")
assert return_code == 0
assert b_stdout == b'some data'
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is False
@pytest.mark.usefixtures('mock_run_env')
class TestSSHConnectionRetries(object):
def test_incorrect_password(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 5)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b'']
self.mock_popen_res.stderr.read.side_effect = [b'Permission denied, please try again.\r\n']
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[5] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = [b'sshpass', b'-d41', b'ssh', b'-C']
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
exception_info = pytest.raises(AnsibleAuthenticationFailure, self.conn.exec_command, 'sshpass', 'some data')
assert exception_info.value.message == ('Invalid/incorrect username/password. Skipping remaining 5 retries to prevent account lockout: '
'Permission denied, please try again.')
assert self.mock_popen.call_count == 1
def test_retry_then_success(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 3 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
return_code, b_stdout, b_stderr = self.conn.exec_command('ssh', 'some data')
assert return_code == 0
assert b_stdout == b'my_stdout\nsecond_line'
assert b_stderr == b'my_stderr'
def test_multiple_failures(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 9)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b""] * 10
self.mock_popen_res.stderr.read.side_effect = [b""] * 10
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 30)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
] * 10
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
pytest.raises(AnsibleConnectionFailure, self.conn.exec_command, 'ssh', 'some data')
assert self.mock_popen.call_count == 10
def test_arbitrary_exceptions(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 9)
monkeypatch.setattr('time.sleep', lambda x: None)
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
self.mock_popen.side_effect = [Exception('bad')] * 10
pytest.raises(Exception, self.conn.exec_command, 'ssh', 'some data')
assert self.mock_popen.call_count == 10
def test_put_file_retries(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
monkeypatch.setattr('ansible.plugins.connection.ssh.os.path.exists', lambda x: True)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 4 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'sftp'
return_code, b_stdout, b_stderr = self.conn.put_file('/path/to/in/file', '/path/to/dest/file')
assert return_code == 0
assert b_stdout == b"my_stdout\nsecond_line"
assert b_stderr == b"my_stderr"
assert self.mock_popen.call_count == 2
def test_fetch_file_retries(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
monkeypatch.setattr('ansible.plugins.connection.ssh.os.path.exists', lambda x: True)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 4 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'sftp'
return_code, b_stdout, b_stderr = self.conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
assert return_code == 0
assert b_stdout == b"my_stdout\nsecond_line"
assert b_stderr == b"my_stderr"
assert self.mock_popen.call_count == 2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, run the meta module reset_connection, and check the group association again, only to find that the connection has not changed: it is still the same SSH session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```
ansible 2.9.1
  config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
  configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a normal docker-ce installation
Source OS = Docker container based on a CentOS v5 image; Ansible has been installed manually and it works perfectly.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Check group association with id shell command, change group association using the normal ansible module -> Check id shell command again.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify user group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional: try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second id command reflects the new group membership.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The session is not interrupted, so the current session is not updated with the new group. Thus I cannot use a docker command.
<!--- Paste verbatim command output between quotes -->
```paste below
The -vvv run of the playbook shows the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indication that the problem is with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
changelogs/fragments/ssh_connection_fixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, execute the reset_connection meta task, and check the group association again, only to find that the connection is unchanged; it is still the same ssh session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a normal docker-ce installation
Source OS = Docker container based on a CentOS v5 image; Ansible has been installed manually and it works perfectly.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Check group association with id shell command, change group association using the normal ansible module -> Check id shell command again.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify user group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional: try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second id command reflects the new group membership.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The session is not interrupted, so the current session is not updated with the new group. Thus I cannot use a docker command.
<!--- Paste verbatim command output between quotes -->
```paste below
The -vvv run of the playbook shows the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indication that the problem is with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/cli/arguments/option_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import operator
import argparse
import os
import os.path
import sys
import time
import yaml
try:
import _yaml
HAS_LIBYAML = True
except ImportError:
HAS_LIBYAML = False
from jinja2 import __version__ as j2_version
import ansible
from ansible import constants as C
from ansible.module_utils._text import to_native
from ansible.release import __version__
from ansible.utils.path import unfrackpath
#
# Special purpose OptionParsers
#
class SortingHelpFormatter(argparse.HelpFormatter):
def add_arguments(self, actions):
actions = sorted(actions, key=operator.attrgetter('option_strings'))
super(SortingHelpFormatter, self).add_arguments(actions)
class AnsibleVersion(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
ansible_version = to_native(version(getattr(parser, 'prog')))
print(ansible_version)
parser.exit()
class UnrecognizedArgument(argparse.Action):
def __init__(self, option_strings, dest, const=True, default=None, required=False, help=None, metavar=None, nargs=0):
super(UnrecognizedArgument, self).__init__(option_strings=option_strings, dest=dest, nargs=nargs, const=const,
default=default, required=required, help=help)
def __call__(self, parser, namespace, values, option_string=None):
parser.error('unrecognized arguments: %s' % option_string)
class PrependListAction(argparse.Action):
"""A near clone of ``argparse._AppendAction``, but designed to prepend list values
instead of appending.
"""
def __init__(self, option_strings, dest, nargs=None, const=None, default=None, type=None,
choices=None, required=False, help=None, metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != argparse.OPTIONAL:
raise ValueError('nargs must be %r to supply const' % argparse.OPTIONAL)
super(PrependListAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=nargs,
const=const,
default=default,
type=type,
choices=choices,
required=required,
help=help,
metavar=metavar
)
def __call__(self, parser, namespace, values, option_string=None):
items = copy.copy(ensure_value(namespace, self.dest, []))
items[0:0] = values
setattr(namespace, self.dest, items)
def ensure_value(namespace, name, value):
if getattr(namespace, name, None) is None:
setattr(namespace, name, value)
return getattr(namespace, name)
#
# Callbacks to validate and normalize Options
#
def unfrack_path(pathsep=False):
"""Turn an Option's data into a single path in Ansible locations"""
def inner(value):
if pathsep:
return [unfrackpath(x) for x in value.split(os.pathsep) if x]
if value == '-':
return value
return unfrackpath(value)
return inner
def _git_repo_info(repo_path):
""" returns a string containing git branch, commit id and commit date """
result = None
if os.path.exists(repo_path):
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
gitdir = yaml.safe_load(open(repo_path)).get('gitdir')
# There is a possibility the .git file to have an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path[:-4], gitdir)
except (IOError, AttributeError):
return ''
with open(os.path.join(repo_path, "HEAD")) as f:
line = f.readline().rstrip("\n")
if line.startswith("ref:"):
branch_path = os.path.join(repo_path, line[5:])
else:
branch_path = None
if branch_path and os.path.exists(branch_path):
branch = '/'.join(line.split('/')[2:])
with open(branch_path) as f:
commit = f.readline()[:10]
else:
# detached HEAD
commit = line[:10]
branch = 'detached HEAD'
branch_path = os.path.join(repo_path, "HEAD")
date = time.localtime(os.stat(branch_path).st_mtime)
if time.daylight == 0:
offset = time.timezone
else:
offset = time.altzone
result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), int(offset / -36))
else:
result = ''
return result
def _gitinfo():
basedir = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..', '..', '..'))
repo_path = os.path.join(basedir, '.git')
return _git_repo_info(repo_path)
def version(prog=None):
""" return ansible version """
if prog:
result = ["{0} [core {1}] ".format(prog, __version__)]
else:
result = [__version__]
gitinfo = _gitinfo()
if gitinfo:
result[0] = "{0} {1}".format(result[0], gitinfo)
result.append(" config file = %s" % C.CONFIG_FILE)
if C.DEFAULT_MODULE_PATH is None:
cpath = "Default w/o overrides"
else:
cpath = C.DEFAULT_MODULE_PATH
result.append(" configured module search path = %s" % cpath)
result.append(" ansible python module location = %s" % ':'.join(ansible.__path__))
result.append(" ansible collection location = %s" % ':'.join(C.COLLECTIONS_PATHS))
result.append(" executable location = %s" % sys.argv[0])
result.append(" python version = %s" % ''.join(sys.version.splitlines()))
result.append(" jinja version = %s" % j2_version)
result.append(" libyaml = %s" % HAS_LIBYAML)
return "\n".join(result)
#
# Functions to add pre-canned options to an OptionParser
#
def create_base_parser(prog, usage="", desc=None, epilog=None):
"""
Create an options parser for all ansible scripts
"""
# base opts
parser = argparse.ArgumentParser(
prog=prog,
formatter_class=SortingHelpFormatter,
epilog=epilog,
description=desc,
conflict_handler='resolve',
)
version_help = "show program's version number, config file location, configured module search path," \
" module location, executable location and exit"
parser.add_argument('--version', action=AnsibleVersion, nargs=0, help=version_help)
add_verbosity_options(parser)
return parser
def add_verbosity_options(parser):
"""Add options for verbosity"""
parser.add_argument('-v', '--verbose', dest='verbosity', default=C.DEFAULT_VERBOSITY, action="count",
help="verbose mode (-vvv for more, -vvvv to enable connection debugging)")
def add_async_options(parser):
"""Add options for commands which can launch async tasks"""
parser.add_argument('-P', '--poll', default=C.DEFAULT_POLL_INTERVAL, type=int, dest='poll_interval',
help="set the poll interval if using -B (default=%s)" % C.DEFAULT_POLL_INTERVAL)
parser.add_argument('-B', '--background', dest='seconds', type=int, default=0,
help='run asynchronously, failing after X seconds (default=N/A)')
def add_basedir_options(parser):
"""Add options for commands which can set a playbook basedir"""
parser.add_argument('--playbook-dir', default=C.config.get_config_value('PLAYBOOK_DIR'), dest='basedir', action='store',
help="Since this tool does not use playbooks, use this as a substitute playbook directory."
"This sets the relative path for many features including roles/ group_vars/ etc.",
type=unfrack_path())
def add_check_options(parser):
"""Add options for commands which can run with diagnostic information of tasks"""
parser.add_argument("-C", "--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
parser.add_argument('--syntax-check', dest='syntax', action='store_true',
help="perform a syntax check on the playbook, but do not execute it")
parser.add_argument("-D", "--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those"
" files; works great with --check")
def add_connect_options(parser):
"""Add options for commands which need to connection to other hosts"""
connect_group = parser.add_argument_group("Connection Options", "control as whom and how to connect to hosts")
connect_group.add_argument('-k', '--ask-pass', default=C.DEFAULT_ASK_PASS, dest='ask_pass', action='store_true',
help='ask for connection password')
connect_group.add_argument('--private-key', '--key-file', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
help='use this file to authenticate the connection', type=unfrack_path())
connect_group.add_argument('-u', '--user', default=C.DEFAULT_REMOTE_USER, dest='remote_user',
help='connect as this user (default=%s)' % C.DEFAULT_REMOTE_USER)
connect_group.add_argument('-c', '--connection', dest='connection', default=C.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT)
connect_group.add_argument('-T', '--timeout', default=C.DEFAULT_TIMEOUT, type=int, dest='timeout',
help="override the connection timeout in seconds (default=%s)" % C.DEFAULT_TIMEOUT)
connect_group.add_argument('--ssh-common-args', default='', dest='ssh_common_args',
help="specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)")
connect_group.add_argument('--sftp-extra-args', default='', dest='sftp_extra_args',
help="specify extra arguments to pass to sftp only (e.g. -f, -l)")
connect_group.add_argument('--scp-extra-args', default='', dest='scp_extra_args',
help="specify extra arguments to pass to scp only (e.g. -l)")
connect_group.add_argument('--ssh-extra-args', default='', dest='ssh_extra_args',
help="specify extra arguments to pass to ssh only (e.g. -R)")
parser.add_argument_group(connect_group)
def add_fork_options(parser):
"""Add options for commands that can fork worker processes"""
parser.add_argument('-f', '--forks', dest='forks', default=C.DEFAULT_FORKS, type=int,
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
def add_inventory_options(parser):
"""Add options for commands that utilize inventory"""
parser.add_argument('-i', '--inventory', '--inventory-file', dest='inventory', action="append",
help="specify inventory host path or comma separated host list. --inventory-file is deprecated")
parser.add_argument('--list-hosts', dest='listhosts', action='store_true',
help='outputs a list of matching hosts; does not execute anything else')
parser.add_argument('-l', '--limit', default=C.DEFAULT_SUBSET, dest='subset',
help='further limit selected hosts to an additional pattern')
def add_meta_options(parser):
"""Add options for commands which can launch meta tasks from the command line"""
parser.add_argument('--force-handlers', default=C.DEFAULT_FORCE_HANDLERS, dest='force_handlers', action='store_true',
help="run handlers even if a task fails")
parser.add_argument('--flush-cache', dest='flush_cache', action='store_true',
help="clear the fact cache for every host in inventory")
def add_module_options(parser):
"""Add options for commands that load modules"""
module_path = C.config.get_configuration_definition('DEFAULT_MODULE_PATH').get('default', '')
parser.add_argument('-M', '--module-path', dest='module_path', default=None,
help="prepend colon-separated path(s) to module library (default=%s)" % module_path,
type=unfrack_path(pathsep=True), action=PrependListAction)
def add_output_options(parser):
"""Add options for commands which can change their output"""
parser.add_argument('-o', '--one-line', dest='one_line', action='store_true',
help='condense output')
parser.add_argument('-t', '--tree', dest='tree', default=None,
help='log output to this directory')
def add_runas_options(parser):
"""
Add options for commands which can run tasks as another user
Note that this includes the options from add_runas_prompt_options(). Only one of these
functions should be used.
"""
runas_group = parser.add_argument_group("Privilege Escalation Options", "control how and which user you become as on target hosts")
# consolidated privilege escalation (become)
runas_group.add_argument("-b", "--become", default=C.DEFAULT_BECOME, action="store_true", dest='become',
help="run operations with become (does not imply password prompting)")
runas_group.add_argument('--become-method', dest='become_method', default=C.DEFAULT_BECOME_METHOD,
help='privilege escalation method to use (default=%s)' % C.DEFAULT_BECOME_METHOD +
', use `ansible-doc -t become -l` to list valid choices.')
runas_group.add_argument('--become-user', default=None, dest='become_user', type=str,
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
add_runas_prompt_options(parser, runas_group=runas_group)
def add_runas_prompt_options(parser, runas_group=None):
"""
Add options for commands which need to prompt for privilege escalation credentials
Note that add_runas_options() includes these options already. Only one of the two functions
should be used.
"""
if runas_group is None:
runas_group = parser.add_argument_group("Privilege Escalation Options",
"control how and which user you become as on target hosts")
runas_group.add_argument('-K', '--ask-become-pass', dest='become_ask_pass', action='store_true',
default=C.DEFAULT_BECOME_ASK_PASS,
help='ask for privilege escalation password')
parser.add_argument_group(runas_group)
def add_runtask_options(parser):
"""Add options for commands that run a task"""
parser.add_argument('-e', '--extra-vars', dest="extra_vars", action="append",
help="set additional variables as key=value or YAML/JSON, if filename prepend with @", default=[])
def add_tasknoplay_options(parser):
"""Add options for commands that run a task w/o a defined play"""
parser.add_argument('--task-timeout', type=int, dest="task_timeout", action="store", default=C.TASK_TIMEOUT,
help="set task timeout limit in seconds, must be positive integer.")
def add_subset_options(parser):
"""Add options for commands which can run a subset of tasks"""
parser.add_argument('-t', '--tags', dest='tags', default=C.TAGS_RUN, action='append',
help="only run plays and tasks tagged with these values")
parser.add_argument('--skip-tags', dest='skip_tags', default=C.TAGS_SKIP, action='append',
help="only run plays and tasks whose tags do not match these values")
def add_vault_options(parser):
"""Add options for loading vault files"""
parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str,
help='the vault identity to use')
base_group = parser.add_mutually_exclusive_group()
base_group.add_argument('--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files',
help="vault password file", type=unfrack_path(), action='append')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, execute the reset_connection meta task, and check the group association again, only to find that the connection is unchanged; it is still the same ssh session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a normal docker-ce installation
Source OS = Docker container based on a CentOS v5 image; Ansible has been installed manually and it works perfectly.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Check group association with id shell command, change group association using the normal ansible module -> Check id shell command again.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify user group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional: try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second id command reflects the new group membership.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The session is not interrupted, so the current session is not updated with the new group. Thus I cannot use a docker command.
<!--- Paste verbatim command output between quotes -->
```paste below
The -vvv run of the playbook shows the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indication that the problem is with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world-readable temporary files
deprecated:
why: moved to a per plugin approach that is more flexible
version: "2.14"
alternatives: mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
default: False
description:
- This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task.
- It is useful when becoming an unprivileged user.
env: []
ini:
- {key: allow_world_readable_tmpfiles, section: defaults}
type: boolean
yaml: {key: defaults.allow_world_readable_tmpfiles}
version_added: "2.1"
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use; set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or when the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting suppresses colorized output, which is normally used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
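# A minimal ansible.cfg illustration (assumption: a typical ini layout); either documented
# ini key above enables pipelining:
#   [connection]
#   pipelining = True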
ANSIBLE_SSH_ARGS:
# TODO: move to ssh plugin
default: -C -o ControlMaster=auto -o ControlPersist=60s
description:
- If set, this will override the Ansible default ssh arguments.
- In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate.
- Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used.
env: [{name: ANSIBLE_SSH_ARGS}]
ini:
- {key: ssh_args, section: ssh_connection}
yaml: {key: ssh_connection.ssh_args}
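# Illustrative ansible.cfg override raising ControlPersist as the description suggests
# (example value, not the default):
#   [ssh_connection]
#   ssh_args = -C -o ControlMaster=auto -o ControlPersist=30m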
ANSIBLE_SSH_CONTROL_PATH:
# TODO: move to ssh plugin
default: null
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env: [{name: ANSIBLE_SSH_CONTROL_PATH}]
ini:
- {key: control_path, section: ssh_connection}
yaml: {key: ssh_connection.control_path}
ANSIBLE_SSH_CONTROL_PATH_DIR:
# TODO: move to ssh plugin
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: ssh_connection}
yaml: {key: ssh_connection.control_path_dir}
ANSIBLE_SSH_EXECUTABLE:
# TODO: move to ssh plugin, note that ssh_utils refs this and needs to be updated if removed
default: ssh
description:
- This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
yaml: {key: ssh_connection.ssh_executable}
version_added: "2.2"
ANSIBLE_SSH_RETRIES:
# TODO: move to ssh plugin
default: 0
description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE'
env: [{name: ANSIBLE_SSH_RETRIES}]
ini:
- {key: retries, section: ssh_connection}
type: integer
yaml: {key: ssh_connection.retries}
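# Illustrative ansible.cfg override (example value, not the default):
#   [ssh_connection]
#   retries = 3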
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword; if True, task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description: This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: enable/disable scanning sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (via the collection metadata key
`requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore`
skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution.
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: [error, warning, ignore]
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
name: Allow bare variable evaluation in conditionals
default: False
type: boolean
description:
- With this setting on (True), a bare conditional such as 'var' is treated differently than 'var.subkey': the first is evaluated
directly while the second goes through the Jinja2 parser, and 'false' strings in 'var' get evaluated as booleans.
- With this setting off, both evaluate the same way, but when 'var' is the string 'false' it will no longer be evaluated as a boolean.
- Currently this setting defaults to 'True' but will soon change to 'False', and the setting itself will be removed in the future.
- Expect this setting to be deprecated after 2.12.
env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
ini:
- {key: conditional_bare_variables, section: defaults}
version_added: "2.8"
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
- By default Ansible will issue a warning when one is received from a task action (module or action plugin)
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: False
description:
- Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
- As of version 2.11, this is disabled by default.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
deprecated:
why: the command warnings feature is being removed
version: "2.14"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backwards-compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine late
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
CALLABLE_ACCEPT_LIST:
name: Template 'callable' accept list
default: []
description: Accept list of callable methods to be made available to template evaluation
env:
- name: ANSIBLE_CALLABLE_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLABLE_ENABLED'
- name: ANSIBLE_CALLABLE_ENABLED
version_added: '2.11'
ini:
- key: callable_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callable_enabled'
- key: callable_enabled
section: defaults
version_added: '2.11'
type: list
CONTROLLER_PYTHON_WARNING:
name: Running Older than Python 3.8 Warning
default: True
description: Toggle to control showing warnings related to running a Python version
older than Python 3.8 on the controller
env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}]
ini:
- {key: controller_python_warning, section: defaults}
type: boolean
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callback_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
default: ~
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering."
- "If not set, it will fallback to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module."
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
- "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
- "The 'smart' value means each new host that has no facts discovered will be scanned,
but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices: ['smart', 'explicit', 'implicit']
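# Illustrative ansible.cfg choice (example value, not the default):
#   [defaults]
#   gathering = explicit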
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
default: ['all']
description:
- Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
default: 10
description:
- Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
name: Make handler M(ansible.builtin.include) static
default: False
description:
- "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'."
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
ini:
- {key: handler_includes_static, section: defaults}
type: boolean
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: none as its already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in the same file.
- The Ansible project recommends you **avoid ``merge`` for new projects**.
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
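# Illustrative only, not part of the config schema: a minimal Python sketch of the
# recursive 'merge' behaviour described above, assuming plain dicts (Ansible's real
# implementation lives in ansible.utils.vars.merge_hash and handles more edge cases):
#
#   def merge_hash(a, b):
#       """Recursively merge dict b into dict a; b wins on conflicting scalar keys."""
#       result = dict(a)
#       for key, b_value in b.items():
#           if isinstance(result.get(key), dict) and isinstance(b_value, dict):
#               result[key] = merge_hash(result[key], b_value)
#           else:
#               result[key] = b_value
#       return result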
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ''
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- For connections that use a certificate or key file to authenticate rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
default:
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SCP_IF_SSH:
# TODO: move to ssh plugin
default: smart
description:
- "Preferred method to use when transferring files over ssh."
- When set to smart, Ansible will try them until one succeeds or they all fail.
- If set to True, it will force 'scp'; if False, it will use 'sftp'.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_SFTP_BATCH_MODE:
# TODO: move to ssh plugin
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: boolean
yaml: {key: ssh_connection.sftp_batch_mode}
DEFAULT_SSH_TRANSFER_METHOD:
# TODO: move to ssh plugin
default:
description: 'unused?'
# - "Preferred method to use when transferring files over ssh"
# - Setting to smart will try them until one succeeds or they all fail
#choices: ['sftp', 'scp', 'dd', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output, you can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger; this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
a task is failed or unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
name: Task include static
default: False
description:
- The `include` tasks can be static or dynamic, this toggles the default expected behaviour if autodetection fails and it is not explicitly set in task.
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
ini:
- {key: task_includes_static, section: defaults}
type: boolean
version_added: "2.1"
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: None, as it is already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes Ansible templating to fail steps that reference variable names that are likely typos.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
default:
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be turned into errors or silenced by setting this to 'error' or 'ignore' respectively.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: ['warn', 'error', 'ignore']
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}]
ini:
- {key: connection_facts_modules, section: defaults}
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role or collection skeleton directory
default:
description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the progress wheel is shown or not.
- The default is to show the progress wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
HOST_KEY_CHECKING:
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices: ['warning', 'error', 'ignore']
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto_legacy
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The
fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed
later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default
value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases
that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the
default behavior will change to that of ``auto`` in a future Ansible release).
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
centos: &rhelish
'6': /usr/bin/python
'8': /usr/libexec/platform-python
debian:
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
oracle: *rhelish
redhat: *rhelish
rhel: *rhelish
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- /usr/bin/python
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- python2.7
- python2.6
- /usr/libexec/platform-python
- /usr/bin/python3
- python
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
- If 'never' it will allow for the group name but warn about the issue.
- When 'ignore', it does the same as 'never', without issuing a warning.
- When 'always' it will replace any invalid characters with '_' (underscore) and warn the user.
- When 'silently', it does the same as 'always', without issuing a warning.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices: ['always', 'never', 'ignore', 'silently']
version_added: '2.8'
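# Illustrative only: the 'always'/'silently' replacement described above is roughly the
# following Python expression (Ansible's real sanitizer also handles leading digits and
# other corner cases):
#
#   import re
#   def sanitize_group_name(name):
#       """Replace characters that are invalid in group names with underscores."""
#       return re.sub(r'[^A-Za-z0-9_]', '_', name)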
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description: Toggle to turn on inventory caching
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_facts
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
- The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory.
- The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory.
- The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices: [ top, bottom, all ]
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
- Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
- Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices: ['demand', 'start']
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
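# Illustrative only: the 'type conversion' the setting above guards against is a
# templated result such as the text '[1, 2]' being re-interpreted as a list instead of
# staying a string; output produced through the listed filters is kept as a string.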
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: default list of tags to skip in your plays; has precedence over Run Tags
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Whitelist for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable bastion/jump host with netconf connection. If set to True, the bastion/jump
host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, execute the meta module reset_connection, check the group association again, just to find that the connection is still the same. And it's still the same ssh session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```paste below
ansible 2.9.1
  config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
  configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a normal docker-ce installation
Source OS = Docker container based on a CentOS v5 image; Ansible has been installed manually and it works perfectly.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Check group association with the id shell command, change the group association using the normal ansible user module, then check the id shell command again.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify User User group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional , try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second id command reflects the new group association.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The session is not interrupted, so the current session is not updated with the new group. Thus I cannot use a docker command.
<!--- Paste verbatim command output between quotes -->
```paste below
The -vvv run of the playbook runs with the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indicator that the problem is with my config file or so.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/config/manager.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import io
import os
import os.path
import sys
import stat
import tempfile
import traceback
from collections import namedtuple
from yaml import load as yaml_load
try:
# use C version if possible for speedup
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeLoader
from ansible.config.data import ConfigData
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils._text import to_text, to_bytes, to_native
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.six.moves import configparser
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.quoting import unquote
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils import py3compat
from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath
Plugin = namedtuple('Plugin', 'name type')
Setting = namedtuple('Setting', 'name value origin type')
INTERNAL_DEFS = {'lookup': ('_terms',)}
def _get_entry(plugin_type, plugin_name, config):
''' construct entry for requested config '''
entry = ''
if plugin_type:
entry += 'plugin_type: %s ' % plugin_type
if plugin_name:
entry += 'plugin: %s ' % plugin_name
entry += 'setting: %s ' % config
return entry
# FIXME: see if we can unify in module_utils with similar function used by argspec
def ensure_type(value, value_type, origin=None):
''' return a configuration variable with casting
:arg value: The value to ensure correct typing of
:kwarg value_type: The type of the value. This can be any of the following strings:
:boolean: sets the value to a True or False value
:bool: Same as 'boolean'
:integer: Sets the value to an integer or raises a ValueType error
:int: Same as 'integer'
:float: Sets the value to a float or raises a ValueType error
:list: Treats the value as a comma separated list. Split the value
and return it as a python list.
:none: Sets the value to None
:path: Expands any environment variables and tildes in the value.
:tmppath: Create a unique temporary directory inside of the directory
specified by value and return its path.
:temppath: Same as 'tmppath'
:tmp: Same as 'tmppath'
:pathlist: Treat the value as a typical PATH string. (On POSIX, this
means colon separated strings.) Split the value and then expand
each part for environment variables and tildes.
:pathspec: Treat the value as a PATH string. Expands any environment variables
and tildes in the value.
:str: Sets the value to string types.
:string: Same as 'str'
'''
errmsg = ''
basedir = None
if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)):
basedir = origin
if value_type:
value_type = value_type.lower()
if value is not None:
if value_type in ('boolean', 'bool'):
value = boolean(value, strict=False)
elif value_type in ('integer', 'int'):
value = int(value)
elif value_type == 'float':
value = float(value)
elif value_type == 'list':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
elif not isinstance(value, Sequence):
errmsg = 'list'
elif value_type == 'none':
if value == "None":
value = None
if value is not None:
errmsg = 'None'
elif value_type == 'path':
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
else:
errmsg = 'path'
elif value_type in ('tmp', 'temppath', 'tmppath'):
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
if not os.path.exists(value):
makedirs_safe(value, 0o700)
prefix = 'ansible-local-%s' % os.getpid()
value = tempfile.mkdtemp(prefix=prefix, dir=value)
atexit.register(cleanup_tmp_file, value, warn=True)
else:
errmsg = 'temppath'
elif value_type == 'pathspec':
if isinstance(value, string_types):
value = value.split(os.pathsep)
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathspec'
elif value_type == 'pathlist':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathlist'
elif value_type in ('dict', 'dictionary'):
if not isinstance(value, Mapping):
errmsg = 'dictionary'
elif value_type in ('str', 'string'):
if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode, bool, int, float, complex)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
else:
errmsg = 'string'
# defaults to string type
elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
if errmsg:
raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value)))
return to_text(value, errors='surrogate_or_strict', nonstring='passthru')
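# Illustrative examples (sketch, not part of the original module;
# hypothetical values) of ensure_type() casting behaviour:
#
#   ensure_type('yes', 'bool')      -> True
#   ensure_type('5', 'int')         -> 5
#   ensure_type('a, b ,c', 'list')  -> ['a', 'b', 'c']
#   ensure_type('x', 'int')         -> raises ValueError (from int('x'))
#   ensure_type({'k': 1}, 'str')    -> raises ValueError ('Invalid type provided for "string": ...')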
# FIXME: see if this can live in utils/path
def resolve_path(path, basedir=None):
''' resolve relative or 'variable' paths '''
if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}}
path = path.replace('{{CWD}}', os.getcwd())
return unfrackpath(path, follow=False, basedir=basedir)
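# Illustrative example (sketch; hypothetical cwd): the magic '{{CWD}}' token
# is substituted before unfrackpath() normalizes the result, e.g. with
# os.getcwd() == '/home/user/project':
#
#   resolve_path('{{CWD}}/ansible.cfg') -> '/home/user/project/ansible.cfg'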
# FIXME: generic file type?
def get_config_type(cfile):
ftype = None
if cfile is not None:
ext = os.path.splitext(cfile)[-1]
if ext in ('.ini', '.cfg'):
ftype = 'ini'
elif ext in ('.yaml', '.yml'):
ftype = 'yaml'
else:
raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext)))
return ftype
# FIXME: can move to module_utils for use for ini plugins also?
def get_ini_config_value(p, entry):
''' returns the value of last ini entry found '''
value = None
if p is not None:
try:
value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
except Exception: # FIXME: actually report issues here
pass
return value
def find_ini_config_file(warnings=None):
''' Load INI config file order (first one found is used): ENV, CWD, HOME, /etc/ansible '''
# FIXME: eventually deprecate ini configs
if warnings is None:
# Note: In this case, warnings does nothing
warnings = set()
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
SENTINEL = object
potential_paths = []
# Environment setting
path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL)
if path_from_env is not SENTINEL:
path_from_env = unfrackpath(path_from_env, follow=False)
if os.path.isdir(to_bytes(path_from_env)):
path_from_env = os.path.join(path_from_env, "ansible.cfg")
potential_paths.append(path_from_env)
# Current working directory
warn_cmd_public = False
try:
cwd = os.getcwd()
perms = os.stat(cwd)
cwd_cfg = os.path.join(cwd, "ansible.cfg")
if perms.st_mode & stat.S_IWOTH:
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
if os.path.exists(cwd_cfg):
warn_cmd_public = True
else:
potential_paths.append(to_text(cwd_cfg, errors='surrogate_or_strict'))
except OSError:
# If we can't access cwd, we'll simply skip it as a possible config source
pass
# Per user location
potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False))
# System location
potential_paths.append("/etc/ansible/ansible.cfg")
for path in potential_paths:
b_path = to_bytes(path)
if os.path.exists(b_path) and os.access(b_path, os.R_OK):
break
else:
path = None
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
if path_from_env != path and warn_cmd_public:
warnings.add(u"Ansible is being run in a world writable directory (%s),"
u" ignoring it as an ansible.cfg source."
u" For more information see"
u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir"
% to_text(cwd))
return path
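# Illustrative precedence sketch (comment only; hypothetical filesystem):
# with ANSIBLE_CONFIG unset, a readable ./ansible.cfg in a non-world-writable
# cwd is returned before ~/.ansible.cfg, which in turn is returned before
# /etc/ansible/ansible.cfg; if the cwd is world writable, its ansible.cfg is
# skipped and a warning is queued instead.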
def _add_base_defs_deprecations(base_defs):
'''Add deprecation source 'ansible.builtin' to deprecations in base.yml'''
def process(entry):
if 'deprecated' in entry:
entry['deprecated']['collection_name'] = 'ansible.builtin'
for dummy, data in base_defs.items():
process(data)
for section in ('ini', 'env', 'vars'):
if section in data:
for entry in data[section]:
process(entry)
class ConfigManager(object):
DEPRECATED = []
WARNINGS = set()
def __init__(self, conf_file=None, defs_file=None):
self._base_defs = {}
self._plugins = {}
self._parsers = {}
self._config_file = conf_file
self.data = ConfigData()
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
_add_base_defs_deprecations(self._base_defs)
if self._config_file is None:
# set config using ini
self._config_file = find_ini_config_file(self.WARNINGS)
# consume configuration
if self._config_file:
# initialize parser and read config
self._parse_config_file()
# update constants
self.update_config_data()
def _read_config_yaml_file(self, yml_file):
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
yml_file = to_bytes(yml_file)
if os.path.exists(yml_file):
with open(yml_file, 'rb') as config_def:
return yaml_load(config_def, Loader=SafeLoader) or {}
raise AnsibleError(
"Missing base YAML definition file (bad install?): %s" % to_native(yml_file))
def _parse_config_file(self, cfile=None):
''' return flat configuration settings from file(s) '''
# TODO: take list of files with merge/nomerge
if cfile is None:
cfile = self._config_file
ftype = get_config_type(cfile)
if cfile is not None:
if ftype == 'ini':
kwargs = {}
if PY3:
kwargs['inline_comment_prefixes'] = (';',)
self._parsers[cfile] = configparser.ConfigParser(**kwargs)
with open(to_bytes(cfile), 'rb') as f:
try:
cfg_text = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError as e:
raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e)))
try:
if PY3:
self._parsers[cfile].read_string(cfg_text)
else:
cfg_file = io.StringIO(cfg_text)
self._parsers[cfile].readfp(cfg_file)
except configparser.Error as e:
raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e)))
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# with open(cfile, 'rb') as config_stream:
# self._parsers[cfile] = yaml.safe_load(config_stream)
else:
raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype))
def _find_yaml_config_files(self):
''' Load YAML Config Files in order, check merge flags, keep origin of settings'''
pass
def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None):
options = {}
defs = self.get_configuration_definitions(plugin_type, name)
for option in defs:
options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct)
return options
def get_plugin_vars(self, plugin_type, name):
pvars = []
for pdef in self.get_configuration_definitions(plugin_type, name).values():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
pvars.append(var_entry['name'])
return pvars
def get_configuration_definition(self, name, plugin_type=None, plugin_name=None):
ret = {}
if plugin_type is None:
ret = self._base_defs.get(name, None)
elif plugin_name is None:
ret = self._plugins.get(plugin_type, {}).get(name, None)
else:
ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None)
return ret
def get_configuration_definitions(self, plugin_type=None, name=None, ignore_private=False):
''' just list the possible settings, either base or for specific plugins or plugin '''
ret = {}
if plugin_type is None:
ret = self._base_defs
elif name is None:
ret = self._plugins.get(plugin_type, {})
else:
ret = self._plugins.get(plugin_type, {}).get(name, {})
if ignore_private:
for cdef in list(ret.keys()):
if cdef.startswith('_'):
del ret[cdef]
return ret
def _loop_entries(self, container, entry_list):
''' repeat code for value entry assignment '''
value = None
origin = None
for entry in entry_list:
name = entry.get('name')
try:
temp_value = container.get(name, None)
except UnicodeEncodeError:
self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name)))
continue
if temp_value is not None: # only set if entry is defined in container
# inline vault variables should be converted to a text string
if isinstance(temp_value, AnsibleVaultEncryptedUnicode):
temp_value = to_text(temp_value, errors='surrogate_or_strict')
value = temp_value
origin = name
# deal with deprecation of setting source, if used
if 'deprecated' in entry:
self.DEPRECATED.append((entry['name'], entry['deprecated']))
return value, origin
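# Illustrative example (sketch; hypothetical entries): because the loop keeps
# overwriting value/origin, the last defined entry wins:
#
#   container  = {'ANSIBLE_OLD': 'a', 'ANSIBLE_NEW': 'b'}
#   entry_list = [{'name': 'ANSIBLE_OLD'}, {'name': 'ANSIBLE_NEW'}]
#   self._loop_entries(container, entry_list) -> ('b', 'ANSIBLE_NEW')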
def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' wrapper '''
try:
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
keys=keys, variables=variables, direct=direct)
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e)
return value
def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' Given a config key figure out the actual value and report on the origin of the settings '''
if cfile is None:
# use default config
cfile = self._config_file
# Note: sources that are lists listed in low to high precedence (last one wins)
value = None
origin = None
defs = self.get_configuration_definitions(plugin_type, plugin_name)
if config in defs:
aliases = defs[config].get('aliases', [])
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
direct_aliases = []
if direct:
direct_aliases = [direct[alias] for alias in aliases if alias in direct]
if direct and config in direct:
value = direct[config]
origin = 'Direct'
elif direct and direct_aliases:
value = direct_aliases[0]
origin = 'Direct'
else:
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
if variables and defs[config].get('vars'):
value, origin = self._loop_entries(variables, defs[config]['vars'])
origin = 'var: %s' % origin
# use playbook keywords if you have em
if value is None and keys:
if config in keys:
value = keys[config]
keyword = config
elif aliases:
for alias in aliases:
if alias in keys:
value = keys[alias]
keyword = alias
break
if value is not None:
origin = 'keyword: %s' % keyword
# env vars are next precedence
if value is None and defs[config].get('env'):
value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
origin = 'env: %s' % origin
# try config file entries next, if we have one
if self._parsers.get(cfile, None) is None:
self._parse_config_file(cfile)
if value is None and cfile is not None:
ftype = get_config_type(cfile)
if ftype and defs[config].get(ftype):
if ftype == 'ini':
# load from ini config
try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe
for ini_entry in defs[config]['ini']:
temp_value = get_ini_config_value(self._parsers[cfile], ini_entry)
if temp_value is not None:
value = temp_value
origin = cfile
if 'deprecated' in ini_entry:
self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated']))
except Exception as e:
sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e)))
elif ftype == 'yaml':
# FIXME: implement, also , break down key from defs (. notation???)
origin = cfile
# set default if we got here w/o a value
if value is None:
if defs[config].get('required', False):
if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}):
raise AnsibleError("No setting was provided for required configuration %s" %
to_native(_get_entry(plugin_type, plugin_name, config)))
else:
value = defs[config].get('default')
origin = 'default'
# skip typing as this is a templated default that will be resolved later in constants, which has needed vars
if plugin_type is None and isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')):
return value, origin
# ensure correct type, can raise exceptions on mismatched types
try:
value = ensure_type(value, defs[config].get('type'), origin=origin)
except ValueError as e:
if origin.startswith('env:') and value == '':
# this is empty env var for non string so we can set to default
origin = 'default'
value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin)
else:
raise AnsibleOptionsError('Invalid type for configuration option %s: %s' %
(to_native(_get_entry(plugin_type, plugin_name, config)), to_native(e)))
# deal with restricted values
if value is not None and 'choices' in defs[config] and defs[config]['choices'] is not None:
if value not in defs[config]['choices']:
raise AnsibleOptionsError('Invalid value "%s" for configuration option "%s", valid values are: %s' %
(value, to_native(_get_entry(plugin_type, plugin_name, config)), defs[config]['choices']))
# deal with deprecation of the setting
if 'deprecated' in defs[config] and origin != 'default':
self.DEPRECATED.append((config, defs[config].get('deprecated')))
else:
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
return value, origin
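# Illustrative precedence sketch (comment only; hypothetical setting): the
# resolution order implemented above is, from highest to lowest:
#   direct > play variables (vars) > playbook keywords > environment (env)
#   > config file (ini) > default
# e.g. with ANSIBLE_TIMEOUT=30 exported and timeout=10 in ansible.cfg, the
# env value wins and the origin is reported as 'env: ANSIBLE_TIMEOUT'.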
def initialize_plugin_configuration_definitions(self, plugin_type, name, defs):
if plugin_type not in self._plugins:
self._plugins[plugin_type] = {}
self._plugins[plugin_type][name] = defs
def update_config_data(self, defs=None, configfile=None):
''' really: update constants '''
if defs is None:
defs = self._base_defs
if configfile is None:
configfile = self._config_file
if not isinstance(defs, dict):
raise AnsibleOptionsError("Invalid configuration definition type: %s for %s" % (type(defs), defs))
# update the constant for config file
self.data.update_setting(Setting('CONFIG_FILE', configfile, '', 'string'))
origin = None
# env and config defs can have several entries, ordered in list from lowest to highest precedence
for config in defs:
if not isinstance(defs[config], dict):
raise AnsibleOptionsError("Invalid configuration definition '%s': type is %s" % (to_native(config), type(defs[config])))
# get value and origin
try:
value, origin = self.get_config_value_and_origin(config, configfile)
except Exception as e:
# Printing the problem here because, in the current code:
# (1) we can't reach the error handler for AnsibleError before we
# hit a different error due to lack of working config.
# (2) We don't have access to display yet because display depends on config
# being properly loaded.
#
# If we start getting double errors printed from this section of code, then the
# above problem #1 has been fixed. Revamp this to be more like the try: except
# in get_config_value() at that time.
sys.stderr.write("Unhandled error:\n %s\n\n" % traceback.format_exc())
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
# set the constant
self.data.update_setting(Setting(config, value, origin, defs[config].get('type', 'string')))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, execute the meta module reset_connection, and check the group association again, only to find that the connection is unchanged. It's still the same ssh session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```
ansible 2.9.1
  config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
  configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a normal docker-ce installation
Source OS = Docker Container based on a CentOS v5 image, Ansible has been installed manually and it works perfectly.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Check the group association with the id shell command, change the group association using the user module, then check id again.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify user group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional , try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second id command reflects the new group association.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The session is not interrupted, so the current session is not updated with the new group! Thus I cannot use a docker command.
<!--- Paste verbatim command output between quotes -->
```paste below
The -vvv run of the playbook shows the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indication that the problem is with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/playbook/play_context.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.module_utils.compat.paramiko import paramiko
from ansible.module_utils.six import iteritems
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.plugins import get_plugin_class
from ansible.utils.display import Display
from ansible.plugins.loader import get_shell_plugin
from ansible.utils.ssh_functions import check_for_controlpersist
display = Display()
__all__ = ['PlayContext']
TASK_ATTRIBUTE_OVERRIDES = (
'become',
'become_user',
'become_pass',
'become_method',
'become_flags',
'connection',
'docker_extra_args', # TODO: remove
'delegate_to',
'no_log',
'remote_user',
)
RESET_VARS = (
'ansible_connection',
'ansible_user',
'ansible_host',
'ansible_port',
# TODO: ???
'ansible_docker_extra_args',
'ansible_ssh_host',
'ansible_ssh_pass',
'ansible_ssh_port',
'ansible_ssh_user',
'ansible_ssh_private_key_file',
'ansible_ssh_pipelining',
'ansible_ssh_executable',
)
class PlayContext(Base):
'''
This class is used to consolidate the connection information for
hosts in a play and child tasks, where the task may override some
connection/authentication information.
'''
# base
_module_compression = FieldAttribute(isa='string', default=C.DEFAULT_MODULE_COMPRESSION)
_shell = FieldAttribute(isa='string')
_executable = FieldAttribute(isa='string', default=C.DEFAULT_EXECUTABLE)
# connection fields, some are inherited from Base:
# (connection, port, remote_user, environment, no_log)
_remote_addr = FieldAttribute(isa='string')
_password = FieldAttribute(isa='string')
_timeout = FieldAttribute(isa='int', default=C.DEFAULT_TIMEOUT)
_connection_user = FieldAttribute(isa='string')
_private_key_file = FieldAttribute(isa='string', default=C.DEFAULT_PRIVATE_KEY_FILE)
_pipelining = FieldAttribute(isa='bool', default=C.ANSIBLE_PIPELINING)
# networking modules
_network_os = FieldAttribute(isa='string')
# docker FIXME: remove these
_docker_extra_args = FieldAttribute(isa='string')
# ssh # FIXME: remove these
_ssh_executable = FieldAttribute(isa='string', default=C.ANSIBLE_SSH_EXECUTABLE)
_ssh_args = FieldAttribute(isa='string', default=C.ANSIBLE_SSH_ARGS)
_ssh_common_args = FieldAttribute(isa='string')
_sftp_extra_args = FieldAttribute(isa='string')
_scp_extra_args = FieldAttribute(isa='string')
_ssh_extra_args = FieldAttribute(isa='string')
_ssh_transfer_method = FieldAttribute(isa='string', default=C.DEFAULT_SSH_TRANSFER_METHOD)
# ???
_connection_lockfd = FieldAttribute(isa='int')
# privilege escalation fields
_become = FieldAttribute(isa='bool')
_become_method = FieldAttribute(isa='string')
_become_user = FieldAttribute(isa='string')
_become_pass = FieldAttribute(isa='string')
_become_exe = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_EXE)
_become_flags = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_FLAGS)
_prompt = FieldAttribute(isa='string')
# general flags
_verbosity = FieldAttribute(isa='int', default=0)
_only_tags = FieldAttribute(isa='set', default=set)
_skip_tags = FieldAttribute(isa='set', default=set)
_start_at_task = FieldAttribute(isa='string')
_step = FieldAttribute(isa='bool', default=False)
# "PlayContext.force_handlers should not be used, the calling code should be using play itself instead"
_force_handlers = FieldAttribute(isa='bool', default=False)
def __init__(self, play=None, passwords=None, connection_lockfd=None):
# Note: play is really not optional. The only time it could be omitted is when we create
# a PlayContext just so we can invoke its deserialize method to load it from a serialized
# data source.
super(PlayContext, self).__init__()
if passwords is None:
passwords = {}
self.password = passwords.get('conn_pass', '')
self.become_pass = passwords.get('become_pass', '')
self._become_plugin = None
self.prompt = ''
self.success_key = ''
# a file descriptor to be used during locking operations
self.connection_lockfd = connection_lockfd
# set options before play to allow play to override them
if context.CLIARGS:
self.set_attributes_from_cli()
if play:
self.set_attributes_from_play(play)
def set_attributes_from_plugin(self, plugin):
# generic derived from connection plugin, temporary for backwards compat, in the end we should not set play_context properties
# get options for plugins
options = C.config.get_configuration_definitions(get_plugin_class(plugin), plugin._load_name)
for option in options:
if option:
flag = options[option].get('name')
if flag:
setattr(self, flag, self.connection.get_option(flag))
def set_attributes_from_play(self, play):
self.force_handlers = play.force_handlers
def set_attributes_from_cli(self):
'''
Configures this connection information instance with data from
options specified by the user on the command line. These have a
lower precedence than those set on the play or host.
'''
if context.CLIARGS.get('timeout', False):
self.timeout = int(context.CLIARGS['timeout'])
# From the command line. These should probably be used directly by plugins instead
# For now, they are likely to be moved to FieldAttribute defaults
self.private_key_file = context.CLIARGS.get('private_key_file') # Else default
self.verbosity = context.CLIARGS.get('verbosity') # Else default
self.ssh_common_args = context.CLIARGS.get('ssh_common_args') # Else default
self.ssh_extra_args = context.CLIARGS.get('ssh_extra_args') # Else default
self.sftp_extra_args = context.CLIARGS.get('sftp_extra_args') # Else default
self.scp_extra_args = context.CLIARGS.get('scp_extra_args') # Else default
# Not every cli that uses PlayContext has these command line args so have a default
self.start_at_task = context.CLIARGS.get('start_at_task', None) # Else default
def set_task_and_variable_override(self, task, variables, templar):
'''
Sets attributes from the task if they are set, which will override
those from the play.
:arg task: the task object with the parameters that were set on it
:arg variables: variables from inventory
:arg templar: templar instance if templating variables is needed
'''
new_info = self.copy()
# loop through a subset of attributes on the task object and set
# connection fields based on their values
for attr in TASK_ATTRIBUTE_OVERRIDES:
if hasattr(task, attr):
attr_val = getattr(task, attr)
if attr_val is not None:
setattr(new_info, attr, attr_val)
# next, use the MAGIC_VARIABLE_MAPPING dictionary to update this
# connection info object with 'magic' variables from the variable list.
# If the value 'ansible_delegated_vars' is in the variables, it means
# we have a delegated-to host, so we check there first before looking
# at the variables in general
if task.delegate_to is not None:
# In the case of a loop, the delegated_to host may have been
# templated based on the loop variable, so we try and locate
# the host name in the delegated variable dictionary here
delegated_host_name = templar.template(task.delegate_to)
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(delegated_host_name, dict())
delegated_transport = C.DEFAULT_TRANSPORT
for transport_var in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if transport_var in delegated_vars:
delegated_transport = delegated_vars[transport_var]
break
# make sure this delegated_to host has something set for its remote
# address, otherwise we default to connecting to it by name. This
# may happen when users put an IP entry into their inventory, or if
# they rely on DNS for a non-inventory hostname
for address_var in ('ansible_%s_host' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_addr'):
if address_var in delegated_vars:
break
else:
display.debug("no remote address found for delegated host %s\nusing its name, so success depends on DNS resolution" % delegated_host_name)
delegated_vars['ansible_host'] = delegated_host_name
# reset the port back to the default if none was specified, to prevent
# the delegated host from inheriting the original host's setting
for port_var in ('ansible_%s_port' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('port'):
if port_var in delegated_vars:
break
else:
if delegated_transport == 'winrm':
delegated_vars['ansible_port'] = 5986
else:
delegated_vars['ansible_port'] = C.DEFAULT_REMOTE_PORT
# and likewise for the remote user
for user_var in ('ansible_%s_user' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_user'):
if user_var in delegated_vars and delegated_vars[user_var]:
break
else:
delegated_vars['ansible_user'] = task.remote_user or self.remote_user
else:
delegated_vars = dict()
# setup shell
for exe_var in C.MAGIC_VARIABLE_MAPPING.get('executable'):
if exe_var in variables:
setattr(new_info, 'executable', variables.get(exe_var))
attrs_considered = []
for (attr, variable_names) in iteritems(C.MAGIC_VARIABLE_MAPPING):
for variable_name in variable_names:
if attr in attrs_considered:
continue
# for a delegation task, ONLY use the delegated-to host's vars, avoiding the vars of the host being delegated for
if task.delegate_to is not None:
if isinstance(delegated_vars, dict) and variable_name in delegated_vars:
setattr(new_info, attr, delegated_vars[variable_name])
attrs_considered.append(attr)
elif variable_name in variables:
setattr(new_info, attr, variables[variable_name])
attrs_considered.append(attr)
# no else, as no other vars should be considered
# become legacy updates -- from inventory file (inventory overrides
# commandline)
for become_pass_name in C.MAGIC_VARIABLE_MAPPING.get('become_pass'):
if become_pass_name in variables:
break
# make sure we get port defaults if needed
if new_info.port is None and C.DEFAULT_REMOTE_PORT is not None:
new_info.port = int(C.DEFAULT_REMOTE_PORT)
# special overrides for the connection setting
if len(delegated_vars) > 0:
# in the event that we were using local before make sure to reset the
# connection type to the default transport for the delegated-to host,
# if not otherwise specified
for connection_type in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if connection_type in delegated_vars:
break
else:
remote_addr_local = new_info.remote_addr in C.LOCALHOST
inv_hostname_local = delegated_vars.get('inventory_hostname') in C.LOCALHOST
if remote_addr_local and inv_hostname_local:
setattr(new_info, 'connection', 'local')
elif getattr(new_info, 'connection', None) == 'local' and (not remote_addr_local or not inv_hostname_local):
setattr(new_info, 'connection', C.DEFAULT_TRANSPORT)
# we store original in 'connection_user' for use of network/other modules that fallback to it as login user
# connection_user is to be deprecated once connection=local is removed, as local resets remote_user
if new_info.connection == 'local':
if not new_info.connection_user:
new_info.connection_user = new_info.remote_user
# set no_log to default if it was not previously set
if new_info.no_log is None:
new_info.no_log = C.DEFAULT_NO_LOG
if task.check_mode is not None:
new_info.check_mode = task.check_mode
if task.diff is not None:
new_info.diff = task.diff
return new_info
def set_become_plugin(self, plugin):
self._become_plugin = plugin
def make_become_cmd(self, cmd, executable=None):
""" helper function to create privilege escalation commands """
display.deprecated(
"PlayContext.make_become_cmd should not be used, the calling code should be using become plugins instead",
version="2.12", collection_name='ansible.builtin'
)
if not cmd or not self.become:
return cmd
become_method = self.become_method
# load/call become plugins here
plugin = self._become_plugin
if plugin:
options = {
'become_exe': self.become_exe or become_method,
'become_flags': self.become_flags or '',
'become_user': self.become_user,
'become_pass': self.become_pass
}
plugin.set_options(direct=options)
if not executable:
executable = self.executable
shell = get_shell_plugin(executable=executable)
cmd = plugin.build_become_command(cmd, shell)
# for backwards compat:
if self.become_pass:
self.prompt = plugin.prompt
else:
raise AnsibleError("Privilege escalation method not found: %s" % become_method)
return cmd
def update_vars(self, variables):
'''
Adds 'magic' variables relating to connections to the variable dictionary provided.
In case users need to access from the play, this is a legacy from runner.
'''
for prop, var_list in C.MAGIC_VARIABLE_MAPPING.items():
try:
if 'become' in prop:
continue
var_val = getattr(self, prop)
for var_opt in var_list:
if var_opt not in variables and var_val is not None:
variables[var_opt] = var_val
except AttributeError:
continue
def _get_attr_connection(self):
''' connections are special, this takes care of responding correctly '''
conn_type = None
if self._attributes['connection'] == 'smart':
conn_type = 'ssh'
# see if SSH can support ControlPersist if not use paramiko
if not check_for_controlpersist(self.ssh_executable) and paramiko is not None:
conn_type = "paramiko"
# if someone did `connection: persistent`, default it to using a persistent paramiko connection to avoid problems
elif self._attributes['connection'] == 'persistent' and paramiko is not None:
conn_type = 'paramiko'
if conn_type:
self.connection = conn_type
return self._attributes['connection']
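# Illustrative note (sketch, not part of the original module): with
# connection 'smart', _get_attr_connection() resolves to 'ssh' unless the
# local ssh binary lacks ControlPersist support and paramiko is importable,
# in which case it resolves to 'paramiko'.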
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, execute the meta module reset_connection, and check the group association again, only to find that the connection is unchanged. It's still the same ssh session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```
ansible 2.9.1
  config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
  configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a normal docker-ce installation
Source OS = Docker Container based on a CentOS v5 image, Ansible has been installed manually and it works perfectly.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Check the group association with the id shell command, change the group association using the user module, then check id again.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify user group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional , try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second id command reflects the new group association.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The session is not interrupted, so the current session is not updated with the new group! Thus I cannot use a docker command.
<!--- Paste verbatim command output between quotes -->
```paste below
The -vvv run of the playbook shows the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indication that the problem is with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via ssh client binary
description:
- This connection plugin allows ansible to communicate with the target machines via the normal ssh command line.
- Ansible does not expose a channel to allow communication between the user and the ssh process to accept
a password manually to decrypt an ssh key when using this connection plugin (which is the default). The
use of ``ssh-agent`` is highly recommended.
author: ansible (@core)
extends_documentation_fragment:
- connection_pipelining
version_added: historical
options:
host:
description: Hostname/ip to connect to.
default: inventory_hostname
vars:
- name: ansible_host
- name: ansible_ssh_host
host_key_checking:
description: Determines if ssh should check host keys
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description: Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
default: ''
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all ssh cli tools
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all ssh CLI tools
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
ssh_executable:
default: ssh
description:
- This defines the location of the ssh binary. It defaults to ``ssh`` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to ``sftp`` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to ``scp`` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
description: Extra exclusive to the ``scp`` CLI
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
sftp_extra_args:
description: Extra exclusive to the ``sftp`` CLI
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
ssh_extra_args:
description: Extra exclusive to the 'ssh' CLI
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
retries:
# constant: ANSIBLE_SSH_RETRIES
description: Number of attempts to connect.
default: 3
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
default: 22
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the ssh client binary choose the user as it normally would.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
pipelining:
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
control_path:
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: 'yes'
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
scp_if_ssh:
default: smart
description:
- "Preferred method to use when transfering files over ssh"
- When set to smart, Ansible will try them until one succeeds or they all fail
- If set to True, it will force 'scp', if False it will use 'sftp'
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: 'yes'
description: add -tt to ssh commands to force tty allocation
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
'''
import errno
import fcntl
import hashlib
import os
import pty
import re
import subprocess
import time
from functools import wraps
from ansible import constants as C
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # Php always returns error 255
)
SSHPASS_AVAILABLE = None
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display):
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
# sshpass returns codes are 1-6. We handle 5 previously, so this catches other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
if signature in return_tuple[1]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
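# Illustrative mapping (sketch; hypothetical return tuples) of how
# _handle_error() above treats common return codes:
#
#   command=b'sshpass', rc=5       -> AnsibleAuthenticationFailure (never retried)
#   command=b'sshpass', rc=1-4,6   -> AnsibleError if sshpass lacks -P support, else logged and retried
#   rc=255 with no known non-ssh signature in stdout -> AnsibleConnectionFailure
#   rc=1..254 otherwise            -> no exception; message logged, rc handed back to the caller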
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(C.ANSIBLE_SSH_RETRIES) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = args[0]
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = args[0]
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
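# Illustrative backoff schedule (sketch, comment only): on retryable failures
# the decorated call sleeps min(2**attempt - 1, 30) seconds between attempts,
# i.e. 0s, 1s, 3s, 7s, 15s, 30s, ... for up to ANSIBLE_SSH_RETRIES extra tries.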
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = C.ANSIBLE_SSH_CONTROL_PATH
self.control_path_dir = C.ANSIBLE_SSH_CONTROL_PATH_DIR
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
@staticmethod
def _create_control_path(host, port, user, connection=None, pid=None):
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
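# Illustrative example (sketch; hypothetical connection attributes):
#
#   Connection._create_control_path('example.com', 22, 'root')
#   -> '%(directory)s/' + hashlib.sha1(b'example.com-22-root').hexdigest()[:10]
#
# the '%(directory)s' placeholder is filled in later with the control path dir.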
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
# distutils isn't always available; shutils.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
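# Illustrative example (sketch): a command that configures ControlPersist but
# no ControlPath reports that a ControlPath still has to be added:
#
#   Connection._persistence_controls([b'ssh', b'-o', b'ControlPersist=60s'])
#   -> (True, False)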
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
more than once so it must be persistent (ie: a list is okay but a
StringIO would not)
:arg explanation: A text string containing explaining why the arguments
were added. It will be displayed with a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self._play_context.remote_addr)
b_command += b_args
def _build_command(self, binary, subsystem, *other_args):
'''
Takes a executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command
wrapped in local ssh shell commands and ready for execution.
:arg binary: actual executable to use to execute command.
:arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names.
:arg other_args: additional arguments appended verbatim to the assembled command (typically the remote host and any remote command)
'''
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
if conn_password:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords, you must install the sshpass program")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if subsystem == 'sftp' and C.DEFAULT_SFTP_BATCH_MODE:
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if self._play_context.verbosity > 3:
b_command.append(b'-vvv')
#
# Next, we add [ssh_connection]ssh_args from ansible.cfg.
#
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments controlled by configuration file settings
# (e.g. host_key_checking) or inventory variables (ansible_ssh_port) or
# a combination thereof.
if not C.HOST_KEY_CHECKING:
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
if self._play_context.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self._play_context.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self._play_context.private_key_file
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
user = self._play_context.remote_user
if user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self._play_context.remote_user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(self._play_context.timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = getattr(self._play_context, opt, None)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"PlayContext set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b"ControlPath=" + to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
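As an aside, the sshpass `-d` handling above works by inheriting the read end of a pipe into the child process, which then reads the password from that file descriptor by number. A minimal standalone sketch of the same pattern, using a small Python child instead of sshpass so it runs anywhere with Python 3 on POSIX:

```python
import os
import subprocess

# The child will read the secret from an inherited file descriptor,
# identified by number, exactly like `sshpass -d<fd>`.
read_fd, write_fd = os.pipe()

child_code = 'import os, sys; print(os.read(int(sys.argv[1]), 1024).decode())'
proc = subprocess.Popen(
    ['python3', '-c', child_code, str(read_fd)],
    pass_fds=(read_fd,),  # keep the fd open across exec (Python 3 closes fds by default)
)

os.close(read_fd)                # the parent no longer needs the read end
os.write(write_fd, b'sekrit\n')  # ssh.py writes the connection password here
os.close(write_fd)               # EOF so the child's read can complete
proc.wait()
```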
def _send_initial_data(self, fh, in_data, ssh_process):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
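The OSError/IOError guard above exists because writing to a child whose reader has already exited fails with a broken pipe, and the child's own exit status is usually the more useful error to report. A tiny self-contained demonstration of that failure mode (using `false`, which exits without reading stdin):

```python
import subprocess

# 'false' exits immediately and never reads its stdin.
p = subprocess.Popen(['false'], stdin=subprocess.PIPE)
p.wait()
try:
    p.stdin.write(b'data the child will never read\n')
    p.stdin.flush()
except (BrokenPipeError, OSError) as exc:
    # Mirror the logic above: consult the child's fate before
    # deciding whether the write failure is the real story.
    print('write failed (%s); child returncode=%s' % (exc, p.returncode))
```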
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
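The split performed here, complete lines now and an unfinished remainder carried into the next chunk, is a general chunked-stream pattern. A minimal sketch of just that logic:

```python
def split_complete_lines(buf, chunk):
    """Return (complete_lines, remainder) for chunked stream input."""
    buf += chunk
    lines = buf.splitlines(True)  # keepends=True, as in _examine_output
    if lines and not lines[-1].endswith(b'\n'):
        return b''.join(lines[:-1]), lines[-1]
    return b''.join(lines), b''

# The trailing 'wor' is held back until the next chunk completes it.
done, rest = split_complete_lines(b'', b'hello\nwor')
assert done == b'hello\n' and rest == b'wor'
done, rest = split_complete_lines(rest, b'ld\n')
assert done == b'world\n' and rest == b''
```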
def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
# We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen
display_cmd = u' '.join(shlex_quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self._play_context.timeout
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
# TODO: bcoca would like to use SelectSelector() when the number of open
# filehandles is low, then switch to more efficient selectors when it is
# higher; plain select is faster with few filehandles.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
# that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
# We should not see further writes to the stdout/stderr file
# descriptors after the process has closed, set the select
# timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if C.HOST_KEY_CHECKING:
if cmd[0] == b"sshpass" and p.returncode == 6:
raise AnsibleError('Using an SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
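Much of `_bare_run` is the non-blocking read loop around the child's pipes. Stripped of the escalation state machine, the core selector pattern looks roughly like this (POSIX only, since it uses fcntl):

```python
import fcntl
import os
import selectors
import subprocess

p = subprocess.Popen(['sh', '-c', 'echo out; echo err >&2'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Non-blocking mode so .read() returns whatever is currently buffered.
for fd in (p.stdout, p.stderr):
    fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)

sel = selectors.DefaultSelector()
sel.register(p.stdout, selectors.EVENT_READ)
sel.register(p.stderr, selectors.EVENT_READ)

out = err = b''
while sel.get_map():
    for key, _ in sel.select(timeout=2):
        chunk = key.fileobj.read()
        if not chunk:                      # EOF: stop watching this pipe
            sel.unregister(key.fileobj)
        elif key.fileobj is p.stdout:
            out += chunk
        else:
            err += chunk
sel.close()
p.wait()
print(out, err)
```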
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self._play_context.ssh_transfer_method
if ssh_transfer_method is not None:
if ssh_transfer_method not in ('smart', 'sftp', 'scp', 'piped'):
raise AnsibleOptionsError('transfer_method needs to be one of [smart|sftp|scp|piped]')
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
scp_if_ssh = C.DEFAULT_SCP_IF_SSH
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex_quote(in_path), shlex_quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path):
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w{1}:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self._play_context.remote_addr)
if getattr(self._shell, "_IS_WINDOWS", False):
# Become method 'runas' is done in the wrapper that is executed,
# need to disable sudoable so the bare_run is not waiting for a
# prompt that will not occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable') or self._play_context.ssh_executable
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
cmd = self._build_command(self.get_option('ssh_executable') or self._play_context.ssh_executable, 'ssh', '-O', 'stop', self.host)
controlpersist, controlpath = self._persistence_controls(cmd)
cp_arg = [a for a in cmd if a.startswith(b"ControlPath=")]
# only run the reset if the ControlPath already exists or if it isn't
# configured and ControlPersist is set
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
if run_reset:
display.vvv(u'sending stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
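Outside Ansible, the same ControlMaster shutdown can be driven by hand with `ssh -O`. A sketch, where the host and ControlPath are placeholders rather than values this plugin would compute:

```python
import subprocess

# Placeholder host and ControlPath; substitute a real multiplexed connection.
host = 'example.com'
control_path = '~/.ansible/cp/abc123'

# '-O check' asks the master whether it is alive; '-O stop' tells it to
# stop accepting new sessions, which is what reset() issues above.
for op in ('check', 'stop'):
    subprocess.run(['ssh', '-o', 'ControlPath=%s' % control_path, '-O', op, host])
```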
def close(self):
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, run the meta module's reset_connection, then check the group association again, only to find that the connection was never reset: it's still the same SSH session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta module: reset_connection
##### ANSIBLE VERSION
```
ansible 2.9.1
  config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
  configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a standard docker-ce installation
Source OS = Docker container based on a CentOS v5 image; Ansible was installed manually and works correctly.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Check the group association with the `id` shell command, change the group association using the user module, then run the `id` check again.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify User User group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional , try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The Ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second `id` command reflects the new group membership.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The session is not interrupted, so the current session is not updated with the new group. Thus I cannot use a docker command.
<!--- Paste verbatim command output between quotes -->
```paste below
The -vvv run of the playbook shows the META task: "META: reset connection".
So maybe I'm missing something, but I cannot find any indication that the problem is with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries in this list are matched exactly or as a start-of-string prefix;
# regular expressions are not accepted.
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar):
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Queue.Empty:
pass
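The sentinel-terminated consumer used here is a standard pattern for queue-fed worker threads; reduced to a self-contained sketch:

```python
import queue
import threading

_SENTINEL = object()  # unique object: cannot collide with real results

def consumer(q, out):
    while True:
        item = q.get()
        if item is _SENTINEL:
            break
        out.append(item)

q = queue.Queue()
results = []
t = threading.Thread(target=consumer, args=(q, results), daemon=True)
t.start()
for i in range(3):
    q.put(i)
q.put(_SENTINEL)   # ask the thread to shut down...
t.join()           # ...and wait for it to drain the queue first
assert results == [0, 1, 2]
```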
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
# return the appropriate code, depending on the status hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
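The return codes combined with `|=` above behave as OR-able bit flags. A sketch of the convention (the constant values shown are illustrative, chosen as non-overlapping powers of two in the spirit of the TaskQueueManager constants):

```python
RUN_OK = 0
RUN_ERROR = 1
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 4

result = RUN_OK
result |= RUN_FAILED_HOSTS          # a host failed during the play
result |= RUN_UNREACHABLE_HOSTS     # another host was unreachable

assert result & RUN_FAILED_HOSTS
assert result & RUN_UNREACHABLE_HOSTS
assert result != RUN_OK
```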
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
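Boiled down, this helper partitions a facts dict by key prefix, mutating the original in place. A minimal sketch of that operation:

```python
ALWAYS_DELEGATE_FACT_PREFIXES = ('discovered_interpreter_',)

def pop_prefixed_facts(facts, prefixes=ALWAYS_DELEGATE_FACT_PREFIXES):
    """Pop keys matching any prefix into a new dict (mutates `facts`)."""
    taken = [k for k in list(facts) if k.startswith(prefixes)]
    return dict((k, facts.pop(k)) for k in taken)

facts = {'discovered_interpreter_python': '/usr/bin/python3', 'os_family': 'Debian'}
delegated = pop_prefixed_facts(facts)
assert delegated == {'discovered_interpreter_python': '/usr/bin/python3'}
assert facts == {'os_family': 'Debian'}
```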
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def get_original_host(host_name):
# FIXME: this should not need x2 _inventory
host_name = to_text(host_name)
if host_name in self._inventory.hosts:
return self._inventory.hosts[host_name]
else:
return self._inventory.get_host(host_name)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
# get the original host and task. We then assign them to the TaskResult for use in callbacks/etc.
original_host = get_original_host(task_result._host)
queue_cache_entry = (original_host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._host = original_host
task_result._task = original_task
# send callbacks for 'non final' results
if '_ansible_retry' in task_result._result:
self._tqm.send_callback('v2_runner_retry', task_result)
continue
elif '_ansible_item_result' in task_result._result:
if task_result.is_failed() or task_result.is_unreachable():
self._tqm.send_callback('v2_runner_item_on_failed', task_result)
elif task_result.is_skipped():
self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
else:
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
self._tqm.send_callback('v2_runner_item_on_ok', task_result)
continue
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
state, _ = iterator.get_next_task_for_host(h, peek=True)
iterator.mark_host_failed(h)
state, new_task = iterator.get_next_task_for_host(h, peek=True)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=original_task.serialize(),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
# find the host we're actually referring too here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
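Both wait helpers share the same poll-then-sleep shape to avoid spinning on the CPU; the essence, with an illustrative interval and timeout:

```python
import time

def wait_until_drained(pending, poll_interval=0.001, timeout=5.0):
    """Poll pending() until it reports zero, sleeping between checks."""
    deadline = time.monotonic() + timeout
    while pending() > 0:
        if time.monotonic() > deadline:
            raise TimeoutError('still pending after %ss' % timeout)
        time.sleep(poll_interval)  # yield the CPU instead of spin-locking

# Trivial usage: a counter that reaches zero after two polls.
counter = [2]
def pending():
    counter[0] -= 1
    return counter[0]

wait_until_drained(pending)
```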
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
# pop tags out of the include args, if they were specified there, and assign
# them to the include. If the include already had tags specified, we raise an
# error so that users know not to specify them both ways
tags = included_file._task.vars.pop('tags', [])
if isinstance(tags, string_types):
tags = tags.split(',')
if len(tags) > 0:
if len(included_file._task.tags) > 0:
raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). "
"Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement",
obj=included_file._task._ds)
display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
version='2.12', collection_name='ansible.builtin')
included_file._task.tags = tags
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
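# run_once handlers and BYPASS_HOST_LOOP actions only need to be queued for the first eligible host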
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
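# reset_connection tears down the cached persistent connection (for ssh, the ControlMaster control socket), so the next task on this host negotiates a fresh session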
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
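# prefer the socket registered for an active persistent connection; otherwise instantiate the connection plugin directly so its reset() can clean up any existing control socket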
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
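# interactive task debugger built on cmd.Cmd; exposes task, task_vars, host, play_context and result through self.scope for eval/exec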
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
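# 'single' mode makes bare expressions echo their value, matching interactive interpreter behaviour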
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, run the `reset_connection` meta task, and check the group association again, only to find that the connection is unchanged: it is still the same SSH session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```paste below
ansible 2.9.1
  config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
  configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a standard docker-ce installation
Source OS = Docker container based on a CentOS v5 image; Ansible has been installed manually and works perfectly.
##### STEPS TO REPRODUCE
Check the group association with the `id` shell command, change the group association using the regular `user` module, then check `id` again.
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify user group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional: try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
##### EXPECTED RESULTS
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second `id` check reflects the new group membership.
##### ACTUAL RESULTS
The session is not interrupted, so the current session never picks up the new group, and I cannot run a docker command.
```paste below
The -vvv run of the playbook shows the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indication that the problem lies with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/utils/ssh_functions.py
|
# (c) 2016, James Tanner
# (c) 2016, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import subprocess
from ansible import constants as C
from ansible.module_utils._text import to_bytes
from ansible.module_utils.compat.paramiko import paramiko
_HAS_CONTROLPERSIST = {}
def check_for_controlpersist(ssh_executable):
try:
# If we've already checked this executable
return _HAS_CONTROLPERSIST[ssh_executable]
except KeyError:
pass
b_ssh_exec = to_bytes(ssh_executable, errors='surrogate_or_strict')
has_cp = True
try:
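# probe the binary: a bare '-o ControlPersist' makes builds without the option print 'Bad configuration option' (or fall back to usage output), which the check below treats as no ControlPersist support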
cmd = subprocess.Popen([b_ssh_exec, '-o', 'ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if b"Bad configuration option" in err or b"Usage:" in err:
has_cp = False
except OSError:
has_cp = False
_HAS_CONTROLPERSIST[ssh_executable] = has_cp
return has_cp
def set_default_transport():
# deal with 'smart' connection .. one time ..
if C.DEFAULT_TRANSPORT == 'smart':
# TODO: check if we can deprecate this as ssh w/o control persist should
# not be as common anymore.
# see if SSH can support ControlPersist if not use paramiko
if not check_for_controlpersist(C.ANSIBLE_SSH_EXECUTABLE) and paramiko is not None:
C.DEFAULT_TRANSPORT = "paramiko"
else:
C.DEFAULT_TRANSPORT = "ssh"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, run the `reset_connection` meta task, and check the group association again, only to find that the connection is unchanged: it is still the same SSH session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```paste below
ansible 2.9.1
  config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
  configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a standard docker-ce installation
Source OS = Docker container based on a CentOS v5 image; Ansible has been installed manually and works perfectly.
##### STEPS TO REPRODUCE
Check the group association with the `id` shell command, change the group association using the regular `user` module, then check `id` again.
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify user group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional: try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
##### EXPECTED RESULTS
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second `id` check reflects the new group membership.
##### ACTUAL RESULTS
The session is not interrupted, so the current session never picks up the new group, and I cannot run a docker command.
```paste below
The -vvv run of the playbook shows the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indication that the problem lies with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
test/integration/targets/connection_windows_ssh/runme.sh
|
#!/usr/bin/env bash
set -eux
# We need to run these tests with both the powershell and cmd shell type
### cmd tests - no DefaultShell set ###
ansible -i ../../inventory.winrm localhost \
-m template \
-a "src=test_connection.inventory.j2 dest=${OUTPUT_DIR}/test_connection.inventory" \
-e "test_shell_type=cmd" \
"$@"
# https://github.com/PowerShell/Win32-OpenSSH/wiki/DefaultShell
ansible -i ../../inventory.winrm windows \
-m win_regedit \
-a "path=HKLM:\\\\SOFTWARE\\\\OpenSSH name=DefaultShell state=absent" \
"$@"
# Need to flush the connection to ensure we get a new shell for the next tests
ansible -i "${OUTPUT_DIR}/test_connection.inventory" windows \
-m meta -a "reset_connection" \
"$@"
# sftp
./windows.sh "$@"
# scp
ANSIBLE_SCP_IF_SSH=true ./windows.sh "$@"
# other tests not part of the generic connection test framework
ansible-playbook -i "${OUTPUT_DIR}/test_connection.inventory" tests.yml \
"$@"
### powershell tests - explicit DefaultShell set ###
# we do this last as the default shell on our CI instances is set to PowerShell
ansible -i ../../inventory.winrm localhost \
-m template \
-a "src=test_connection.inventory.j2 dest=${OUTPUT_DIR}/test_connection.inventory" \
-e "test_shell_type=powershell" \
"$@"
# ensure the default shell is set to PowerShell
ansible -i ../../inventory.winrm windows \
-m win_regedit \
-a "path=HKLM:\\\\SOFTWARE\\\\OpenSSH name=DefaultShell data=C:\\\\Windows\\\\System32\\\\WindowsPowerShell\\\\v1.0\\\\powershell.exe" \
"$@"
ansible -i "${OUTPUT_DIR}/test_connection.inventory" windows \
-m meta -a "reset_connection" \
"$@"
./windows.sh "$@"
ANSIBLE_SCP_IF_SSH=true ./windows.sh "$@"
ansible-playbook -i "${OUTPUT_DIR}/test_connection.inventory" tests.yml \
"$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,414 |
meta: reset_connection not working on Ansible 2.9.1
|
##### SUMMARY
I add a user to a group, run the `reset_connection` meta task, and check the group association again, only to find that the connection is unchanged: it is still the same SSH session!
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Meta Module : reset_connection
##### ANSIBLE VERSION
```paste below
ansible 2.9.1
  config file = /workspace/ansible_test_2@2/ansible/ansible.cfg
  configured module search path = [u'/var/jenkins_home/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible_test_2@2/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
ANSIBLE_SSH_CONTROL_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = %(directory)s/%%h-%%p-%%r
DEFAULT_HOST_LIST(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/environmen_data']
DEFAULT_REMOTE_USER(/workspace/ansible_test_2@2/ansible/ansible.cfg) = USER
DEFAULT_ROLES_PATH(/workspace/ansible_test_2@2/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles', u'/${WORKSPACE}/ansible/roles', u'/workspace/ansible_test_2@2/a
HOST_KEY_CHECKING(/workspace/ansible_test_2@2/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target OS = Red Hat Enterprise Linux Server release 7.7 (Maipo) running a standard docker-ce installation
Source OS = Docker container based on a CentOS v5 image; Ansible has been installed manually and works perfectly.
##### STEPS TO REPRODUCE
Check the group association with the `id` shell command, change the group association using the regular `user` module, then check `id` again.
```yaml
- name: Id Check 1
shell: id
register: id_output
- name: Give output of Id Check 1
debug:
msg: '{{ id_output.stdout }}'
- name: Modify user group association
user:
name: '{{ ansible_user }}'
groups: docker
append: true
state: present
- name: reset the SSH_CONNECTION!!!!
meta: reset_connection
# Optional: try a docker command
- name: try a docker ps command
shell: docker ps
register: docker_output
ignore_errors: true
- name: give output of docker command
debug:
msg: '{{ docker_output.stdout }}'
- name: Id Check 2
shell: id
register: id_output_2
- name: Give output of Id Check 2
debug:
msg: '{{ id_output_2.stdout }}'
```
##### EXPECTED RESULTS
The ansible user is added to the docker group, the SSH connection is reset so that the group association is updated, and the second `id` check reflects the new group membership.
##### ACTUAL RESULTS
The session is not interrupted, so the current session never picks up the new group, and I cannot run a docker command.
```paste below
The -vvv run of the playbook shows the META task: META: reset connection
So maybe I'm missing something, but I cannot find any indication that the problem lies with my config file.
```
|
https://github.com/ansible/ansible/issues/66414
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2020-01-13T13:15:04Z |
python
| 2021-03-03T20:25:16Z |
test/units/plugins/connection/test_ssh.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from io import StringIO
import pytest
from ansible import constants as C
from ansible.errors import AnsibleAuthenticationFailure
from units.compat import unittest
from units.compat.mock import patch, MagicMock, PropertyMock
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.module_utils.compat.selectors import SelectorKey, EVENT_READ
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.connection import ssh
from ansible.plugins.loader import connection_loader, become_loader
class TestConnectionBaseClass(unittest.TestCase):
def test_plugins_connection_ssh_module(self):
play_context = PlayContext()
play_context.prompt = (
'[sudo via ansible, key=ouzmdnewuhucvuaabtjmweasarviygqq] password: '
)
in_stream = StringIO()
self.assertIsInstance(ssh.Connection(play_context, in_stream), ssh.Connection)
def test_plugins_connection_ssh_basic(self):
pc = PlayContext()
new_stdin = StringIO()
conn = ssh.Connection(pc, new_stdin)
# connect just returns self, so assert that
res = conn._connect()
self.assertEqual(conn, res)
ssh.SSHPASS_AVAILABLE = False
self.assertFalse(conn._sshpass_available())
ssh.SSHPASS_AVAILABLE = True
self.assertTrue(conn._sshpass_available())
with patch('subprocess.Popen') as p:
ssh.SSHPASS_AVAILABLE = None
p.return_value = MagicMock()
self.assertTrue(conn._sshpass_available())
ssh.SSHPASS_AVAILABLE = None
p.return_value = None
p.side_effect = OSError()
self.assertFalse(conn._sshpass_available())
conn.close()
self.assertFalse(conn._connected)
def test_plugins_connection_ssh__build_command(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command('ssh', 'ssh')
def test_plugins_connection_ssh_exec_command(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._build_command.return_value = 'ssh something something'
conn._run = MagicMock()
conn._run.return_value = (0, 'stdout', 'stderr')
conn.get_option = MagicMock()
conn.get_option.return_value = True
res, stdout, stderr = conn.exec_command('ssh')
res, stdout, stderr = conn.exec_command('ssh', 'this is some data')
def test_plugins_connection_ssh__examine_output(self):
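# _examine_output scans each chunk for become prompt/success/failure markers, sets the matching _flags and strips marker lines from the returned output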
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.set_become_plugin(become_loader.get('sudo'))
conn.check_password_prompt = MagicMock()
conn.check_become_success = MagicMock()
conn.check_incorrect_password = MagicMock()
conn.check_missing_password = MagicMock()
def _check_password_prompt(line):
if b'foo' in line:
return True
return False
def _check_become_success(line):
if b'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz' in line:
return True
return False
def _check_incorrect_password(line):
if b'incorrect password' in line:
return True
return False
def _check_missing_password(line):
if b'bad password' in line:
return True
return False
conn.become.check_password_prompt = MagicMock(side_effect=_check_password_prompt)
conn.become.check_become_success = MagicMock(side_effect=_check_become_success)
conn.become.check_incorrect_password = MagicMock(side_effect=_check_incorrect_password)
conn.become.check_missing_password = MagicMock(side_effect=_check_missing_password)
# test examining output for prompt
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = True
conn.become.prompt = True
def get_option(option):
if option == 'become_pass':
return 'password'
return None
conn.become.get_option = get_option
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nfoo\nline 3\nthis should be the remainder', False)
self.assertEqual(output, b'line 1\nline 2\nline 3\n')
self.assertEqual(unprocessed, b'this should be the remainder')
self.assertTrue(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for become prompt
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = u'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz'
conn.become.success = u'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz'
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nBECOME-SUCCESS-abcdefghijklmnopqrstuvxyz\nline 3\n', False)
self.assertEqual(output, b'line 1\nline 2\nline 3\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertTrue(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for become failure
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = None
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nincorrect password\n', True)
self.assertEqual(output, b'line 1\nline 2\nincorrect password\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertTrue(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for missing password
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = None
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nbad password\n', True)
self.assertEqual(output, b'line 1\nbad password\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertTrue(conn._flags['become_nopasswd_error'])
@patch('time.sleep')
@patch('os.path.exists')
def test_plugins_connection_ssh_put_file(self, mock_ospe, mock_sleep):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._bare_run = MagicMock()
mock_ospe.return_value = True
conn._build_command.return_value = 'some command to run'
conn._bare_run.return_value = (0, '', '')
conn.host = "some_host"
C.ANSIBLE_SSH_RETRIES = 9
# Test with C.DEFAULT_SCP_IF_SSH set to smart
# Test when SFTP works
C.DEFAULT_SCP_IF_SSH = 'smart'
expected_in_data = b' '.join((b'put', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# Test when SFTP doesn't work but SCP does
conn._bare_run.side_effect = [(1, 'stdout', 'some errors'), (0, '', '')]
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn._bare_run.side_effect = None
# test with C.DEFAULT_SCP_IF_SSH enabled
C.DEFAULT_SCP_IF_SSH = True
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn.put_file(u'/path/to/in/file/with/unicode-fö〩', u'/path/to/dest/file/with/unicode-fö〩')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
# test with C.DEFAULT_SCP_IF_SSH disabled
C.DEFAULT_SCP_IF_SSH = False
expected_in_data = b' '.join((b'put', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
expected_in_data = b' '.join((b'put',
to_bytes(shlex_quote('/path/to/in/file/with/unicode-fö〩')),
to_bytes(shlex_quote('/path/to/dest/file/with/unicode-fö〩')))) + b'\n'
conn.put_file(u'/path/to/in/file/with/unicode-fö〩', u'/path/to/dest/file/with/unicode-fö〩')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# test that a non-zero rc raises an error
conn._bare_run.return_value = (1, 'stdout', 'some errors')
self.assertRaises(AnsibleError, conn.put_file, '/path/to/bad/file', '/remote/path/to/file')
# test that a not-found path raises an error
mock_ospe.return_value = False
conn._bare_run.return_value = (0, 'stdout', '')
self.assertRaises(AnsibleFileNotFound, conn.put_file, '/path/to/bad/file', '/remote/path/to/file')
@patch('time.sleep')
def test_plugins_connection_ssh_fetch_file(self, mock_sleep):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._bare_run = MagicMock()
conn._load_name = 'ssh'
conn._build_command.return_value = 'some command to run'
conn._bare_run.return_value = (0, '', '')
conn.host = "some_host"
C.ANSIBLE_SSH_RETRIES = 9
# Test with C.DEFAULT_SCP_IF_SSH set to smart
# Test when SFTP works
C.DEFAULT_SCP_IF_SSH = 'smart'
expected_in_data = b' '.join((b'get', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.set_options({})
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# Test when SFTP doesn't work but SCP does
conn._bare_run.side_effect = [(1, 'stdout', 'some errors'), (0, '', '')]
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn._bare_run.side_effect = None
# test with C.DEFAULT_SCP_IF_SSH enabled
C.DEFAULT_SCP_IF_SSH = True
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn.fetch_file(u'/path/to/in/file/with/unicode-fö〩', u'/path/to/dest/file/with/unicode-fö〩')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
# test with C.DEFAULT_SCP_IF_SSH disabled
C.DEFAULT_SCP_IF_SSH = False
expected_in_data = b' '.join((b'get', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
expected_in_data = b' '.join((b'get',
to_bytes(shlex_quote('/path/to/in/file/with/unicode-fö〩')),
to_bytes(shlex_quote('/path/to/dest/file/with/unicode-fö〩')))) + b'\n'
conn.fetch_file(u'/path/to/in/file/with/unicode-fö〩', u'/path/to/dest/file/with/unicode-fö〩')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# test that a non-zero rc raises an error
conn._bare_run.return_value = (1, 'stdout', 'some errors')
self.assertRaises(AnsibleError, conn.fetch_file, '/path/to/bad/file', '/remote/path/to/file')
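# minimal selectors.DefaultSelector stand-in: counts registered files so get_map() reflects whether anything is still being watched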
class MockSelector(object):
def __init__(self):
self.files_watched = 0
self.register = MagicMock(side_effect=self._register)
self.unregister = MagicMock(side_effect=self._unregister)
self.close = MagicMock()
self.get_map = MagicMock(side_effect=self._get_map)
self.select = MagicMock()
def _register(self, *args, **kwargs):
self.files_watched += 1
def _unregister(self, *args, **kwargs):
self.files_watched -= 1
def _get_map(self, *args, **kwargs):
return self.files_watched
@pytest.fixture
def mock_run_env(request, mocker):
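# builds an ssh connection with subprocess.Popen, the selector and the pty plumbing all mocked out, so _run()/_bare_run() can be driven without spawning real processes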
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.set_become_plugin(become_loader.get('sudo'))
conn._send_initial_data = MagicMock()
conn._examine_output = MagicMock()
conn._terminate_process = MagicMock()
conn._load_name = 'ssh'
conn.sshpass_pipe = [MagicMock(), MagicMock()]
request.cls.pc = pc
request.cls.conn = conn
mock_popen_res = MagicMock()
mock_popen_res.poll = MagicMock()
mock_popen_res.wait = MagicMock()
mock_popen_res.stdin = MagicMock()
mock_popen_res.stdin.fileno.return_value = 1000
mock_popen_res.stdout = MagicMock()
mock_popen_res.stdout.fileno.return_value = 1001
mock_popen_res.stderr = MagicMock()
mock_popen_res.stderr.fileno.return_value = 1002
mock_popen_res.returncode = 0
request.cls.mock_popen_res = mock_popen_res
mock_popen = mocker.patch('subprocess.Popen', return_value=mock_popen_res)
request.cls.mock_popen = mock_popen
request.cls.mock_selector = MockSelector()
mocker.patch('ansible.module_utils.compat.selectors.DefaultSelector', lambda: request.cls.mock_selector)
request.cls.mock_openpty = mocker.patch('pty.openpty')
mocker.patch('fcntl.fcntl')
mocker.patch('os.write')
mocker.patch('os.close')
@pytest.mark.usefixtures('mock_run_env')
class TestSSHConnectionRun(object):
# FIXME:
# These tests are little more than a smoketest. Need to enhance them
# a bit to check that they're calling the relevant functions and making
# complete coverage of the code paths
def test_no_escalation(self):
self.mock_popen_res.stdout.read.side_effect = [b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"my_stderr"]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
assert return_code == 0
assert b_stdout == b'my_stdout\nsecond_line'
assert b_stderr == b'my_stderr'
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_with_password(self):
# test with a password set to trigger the sshpass write
self.pc.password = '12345'
self.mock_popen_res.stdout.read.side_effect = [b"some data", b"", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run(["ssh", "is", "a", "cmd"], "this is more data")
assert return_code == 0
assert b_stdout == b'some data'
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is more data'
def _password_with_prompt_examine_output(self, source, state, b_chunk, sudoable):
if state == 'awaiting_prompt':
self.conn._flags['become_prompt'] = True
elif state == 'awaiting_escalation':
self.conn._flags['become_success'] = True
return (b'', b'')
def test_password_with_prompt(self):
# test with password prompting enabled
self.pc.password = None
self.conn.become.prompt = b'Password:'
self.conn._examine_output.side_effect = self._password_with_prompt_examine_output
self.mock_popen_res.stdout.read.side_effect = [b"Password:", b"Success", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ),
(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
assert return_code == 0
assert b_stdout == b''
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_password_with_become(self):
# test with some become settings
self.pc.prompt = b'Password:'
self.conn.become.prompt = b'Password:'
self.pc.become = True
self.pc.success_key = 'BECOME-SUCCESS-abcdefg'
self.conn.become._id = 'abcdefg'
self.conn._examine_output.side_effect = self._password_with_prompt_examine_output
self.mock_popen_res.stdout.read.side_effect = [b"Password:", b"BECOME-SUCCESS-abcdefg", b"abc"]
self.mock_popen_res.stderr.read.side_effect = [b"123"]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
self.mock_popen_res.stdin.flush.assert_called_once_with()
assert return_code == 0
assert b_stdout == b'abc'
assert b_stderr == b'123'
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_password_without_data(self):
# simulate no data input but Popen using new pty's fails
self.mock_popen.return_value = None
self.mock_popen.side_effect = [OSError(), self.mock_popen_res]
# simulate no data input
self.mock_openpty.return_value = (98, 99)
self.mock_popen_res.stdout.read.side_effect = [b"some data", b"", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "")
assert return_code == 0
assert b_stdout == b'some data'
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is False
@pytest.mark.usefixtures('mock_run_env')
class TestSSHConnectionRetries(object):
def test_incorrect_password(self, monkeypatch):
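# an authentication failure must abort on the first attempt instead of consuming the configured retries, to avoid locking out the account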
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 5)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b'']
self.mock_popen_res.stderr.read.side_effect = [b'Permission denied, please try again.\r\n']
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[5] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = [b'sshpass', b'-d41', b'ssh', b'-C']
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
exception_info = pytest.raises(AnsibleAuthenticationFailure, self.conn.exec_command, 'sshpass', 'some data')
assert exception_info.value.message == ('Invalid/incorrect username/password. Skipping remaining 5 retries to prevent account lockout: '
'Permission denied, please try again.')
assert self.mock_popen.call_count == 1
def test_retry_then_success(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 3 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
return_code, b_stdout, b_stderr = self.conn.exec_command('ssh', 'some data')
assert return_code == 0
assert b_stdout == b'my_stdout\nsecond_line'
assert b_stderr == b'my_stderr'
def test_multiple_failures(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 9)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b""] * 10
self.mock_popen_res.stderr.read.side_effect = [b""] * 10
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 30)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
] * 10
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
pytest.raises(AnsibleConnectionFailure, self.conn.exec_command, 'ssh', 'some data')
assert self.mock_popen.call_count == 10
def test_arbitrary_exceptions(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 9)
monkeypatch.setattr('time.sleep', lambda x: None)
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
self.mock_popen.side_effect = [Exception('bad')] * 10
pytest.raises(Exception, self.conn.exec_command, 'ssh', 'some data')
assert self.mock_popen.call_count == 10
def test_put_file_retries(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
monkeypatch.setattr('ansible.plugins.connection.ssh.os.path.exists', lambda x: True)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 4 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'sftp'
return_code, b_stdout, b_stderr = self.conn.put_file('/path/to/in/file', '/path/to/dest/file')
assert return_code == 0
assert b_stdout == b"my_stdout\nsecond_line"
assert b_stderr == b"my_stderr"
assert self.mock_popen.call_count == 2
def test_fetch_file_retries(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
monkeypatch.setattr('ansible.plugins.connection.ssh.os.path.exists', lambda x: True)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 4 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'sftp'
return_code, b_stdout, b_stderr = self.conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
assert return_code == 0
assert b_stdout == b"my_stdout\nsecond_line"
assert b_stderr == b"my_stderr"
assert self.mock_popen.call_count == 2
|
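
The retry tests above all rest on one mocking trick: a `PropertyMock` with a `side_effect` list attached to `type(mock_popen_res).returncode`, so each poll of the fake process consumes the next return code in the list. A minimal, self-contained sketch of that pattern (illustrative only, not taken from the test file):

```python
from unittest.mock import MagicMock, PropertyMock

proc = MagicMock()
# Each *access* of proc.returncode consumes one value: two failing polls
# (ssh exit status 255), then success (0) -- the same way the tests above
# simulate "retry, then success". Per the mock docs, a PropertyMock must be
# attached to the mock's type, not the instance.
type(proc).returncode = PropertyMock(side_effect=[255, 255, 0])

assert [proc.returncode for _ in range(3)] == [255, 255, 0]
```
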
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
meta reset_connection ignores the connection variables specified within host or group vars. Therefore connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables the way ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently other issues as well that prevent reset_connection from working with the vmware_tools connector, but they are addressed in separate issues and are out of scope for this one. This issue is about the required variables not being passed to connectors by the meta helper; this may affect other connectors, too.
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
changelogs/fragments/ssh_connection_fixes.yml
| |
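
For context, here is a hedged sketch of the kind of play that trips over this bug. The inventory names are invented, and `ansible_vmware_host` is shown only as an example of a plugin option normally populated from host/group vars; check your plugin version for the exact variable names.

```yaml
- hosts: vmware_guests
  gather_facts: false
  # Connection settings come from vars -- exactly what the meta helper
  # fails to forward to the plugin when it rebuilds the connection.
  vars:
    ansible_connection: vmware_tools
    ansible_vmware_host: vcenter.example.com
  tasks:
    - name: run something with the current credentials
      command: whoami

    - name: drop the cached connection (raises the traceback above)
      meta: reset_connection

    - name: continue after the reset
      command: whoami
```
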
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
meta reset_connection ignores the connection variables specified within host or group vars. Therefore connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables the way ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently other issues as well that prevent reset_connection from working with the vmware_tools connector, but they are addressed in separate issues and are out of scope for this one. This issue is about the required variables not being passed to connectors by the meta helper; this may affect other connectors, too.
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/cli/arguments/option_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import operator
import argparse
import os
import os.path
import sys
import time
import yaml
try:
import _yaml
HAS_LIBYAML = True
except ImportError:
HAS_LIBYAML = False
from jinja2 import __version__ as j2_version
import ansible
from ansible import constants as C
from ansible.module_utils._text import to_native
from ansible.release import __version__
from ansible.utils.path import unfrackpath
#
# Special purpose OptionParsers
#
class SortingHelpFormatter(argparse.HelpFormatter):
def add_arguments(self, actions):
actions = sorted(actions, key=operator.attrgetter('option_strings'))
super(SortingHelpFormatter, self).add_arguments(actions)
class AnsibleVersion(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
ansible_version = to_native(version(getattr(parser, 'prog')))
print(ansible_version)
parser.exit()
class UnrecognizedArgument(argparse.Action):
def __init__(self, option_strings, dest, const=True, default=None, required=False, help=None, metavar=None, nargs=0):
super(UnrecognizedArgument, self).__init__(option_strings=option_strings, dest=dest, nargs=nargs, const=const,
default=default, required=required, help=help)
def __call__(self, parser, namespace, values, option_string=None):
parser.error('unrecognized arguments: %s' % option_string)
class PrependListAction(argparse.Action):
"""A near clone of ``argparse._AppendAction``, but designed to prepend list values
instead of appending.
"""
def __init__(self, option_strings, dest, nargs=None, const=None, default=None, type=None,
choices=None, required=False, help=None, metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != argparse.OPTIONAL:
raise ValueError('nargs must be %r to supply const' % argparse.OPTIONAL)
super(PrependListAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=nargs,
const=const,
default=default,
type=type,
choices=choices,
required=required,
help=help,
metavar=metavar
)
def __call__(self, parser, namespace, values, option_string=None):
items = copy.copy(ensure_value(namespace, self.dest, []))
items[0:0] = values
setattr(namespace, self.dest, items)
def ensure_value(namespace, name, value):
if getattr(namespace, name, None) is None:
setattr(namespace, name, value)
return getattr(namespace, name)
#
# Callbacks to validate and normalize Options
#
def unfrack_path(pathsep=False):
"""Turn an Option's data into a single path in Ansible locations"""
def inner(value):
if pathsep:
return [unfrackpath(x) for x in value.split(os.pathsep) if x]
if value == '-':
return value
return unfrackpath(value)
return inner
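# Illustrative note (added, not part of the original file): unfrack_path()
# returns a converter suitable for argparse's ``type=`` hook. For example,
# ``type=unfrack_path(pathsep=True)`` splits a PATH-like value on os.pathsep
# and normalizes each element, while the default form normalizes one path and
# passes '-' (the stdin/stdout convention) through untouched.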
def _git_repo_info(repo_path):
""" returns a string containing git branch, commit id and commit date """
result = None
if os.path.exists(repo_path):
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
gitdir = yaml.safe_load(open(repo_path)).get('gitdir')
# There is a possibility the .git file to have an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path[:-4], gitdir)
except (IOError, AttributeError):
return ''
with open(os.path.join(repo_path, "HEAD")) as f:
line = f.readline().rstrip("\n")
if line.startswith("ref:"):
branch_path = os.path.join(repo_path, line[5:])
else:
branch_path = None
if branch_path and os.path.exists(branch_path):
branch = '/'.join(line.split('/')[2:])
with open(branch_path) as f:
commit = f.readline()[:10]
else:
# detached HEAD
commit = line[:10]
branch = 'detached HEAD'
branch_path = os.path.join(repo_path, "HEAD")
date = time.localtime(os.stat(branch_path).st_mtime)
if time.daylight == 0:
offset = time.timezone
else:
offset = time.altzone
result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), int(offset / -36))
else:
result = ''
return result
def _gitinfo():
basedir = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..', '..', '..'))
repo_path = os.path.join(basedir, '.git')
return _git_repo_info(repo_path)
def version(prog=None):
""" return ansible version """
if prog:
result = ["{0} [core {1}] ".format(prog, __version__)]
else:
result = [__version__]
gitinfo = _gitinfo()
if gitinfo:
result[0] = "{0} {1}".format(result[0], gitinfo)
result.append(" config file = %s" % C.CONFIG_FILE)
if C.DEFAULT_MODULE_PATH is None:
cpath = "Default w/o overrides"
else:
cpath = C.DEFAULT_MODULE_PATH
result.append(" configured module search path = %s" % cpath)
result.append(" ansible python module location = %s" % ':'.join(ansible.__path__))
result.append(" ansible collection location = %s" % ':'.join(C.COLLECTIONS_PATHS))
result.append(" executable location = %s" % sys.argv[0])
result.append(" python version = %s" % ''.join(sys.version.splitlines()))
result.append(" jinja version = %s" % j2_version)
result.append(" libyaml = %s" % HAS_LIBYAML)
return "\n".join(result)
#
# Functions to add pre-canned options to an OptionParser
#
def create_base_parser(prog, usage="", desc=None, epilog=None):
"""
Create an options parser for all ansible scripts
"""
# base opts
parser = argparse.ArgumentParser(
prog=prog,
formatter_class=SortingHelpFormatter,
epilog=epilog,
description=desc,
conflict_handler='resolve',
)
version_help = "show program's version number, config file location, configured module search path," \
" module location, executable location and exit"
parser.add_argument('--version', action=AnsibleVersion, nargs=0, help=version_help)
add_verbosity_options(parser)
return parser
def add_verbosity_options(parser):
"""Add options for verbosity"""
parser.add_argument('-v', '--verbose', dest='verbosity', default=C.DEFAULT_VERBOSITY, action="count",
help="verbose mode (-vvv for more, -vvvv to enable connection debugging)")
def add_async_options(parser):
"""Add options for commands which can launch async tasks"""
parser.add_argument('-P', '--poll', default=C.DEFAULT_POLL_INTERVAL, type=int, dest='poll_interval',
help="set the poll interval if using -B (default=%s)" % C.DEFAULT_POLL_INTERVAL)
parser.add_argument('-B', '--background', dest='seconds', type=int, default=0,
help='run asynchronously, failing after X seconds (default=N/A)')
def add_basedir_options(parser):
"""Add options for commands which can set a playbook basedir"""
parser.add_argument('--playbook-dir', default=C.config.get_config_value('PLAYBOOK_DIR'), dest='basedir', action='store',
help="Since this tool does not use playbooks, use this as a substitute playbook directory."
"This sets the relative path for many features including roles/ group_vars/ etc.",
type=unfrack_path())
def add_check_options(parser):
"""Add options for commands which can run with diagnostic information of tasks"""
parser.add_argument("-C", "--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
parser.add_argument('--syntax-check', dest='syntax', action='store_true',
help="perform a syntax check on the playbook, but do not execute it")
parser.add_argument("-D", "--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those"
" files; works great with --check")
def add_connect_options(parser):
"""Add options for commands which need to connection to other hosts"""
connect_group = parser.add_argument_group("Connection Options", "control as whom and how to connect to hosts")
connect_group.add_argument('-k', '--ask-pass', default=C.DEFAULT_ASK_PASS, dest='ask_pass', action='store_true',
help='ask for connection password')
connect_group.add_argument('--private-key', '--key-file', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
help='use this file to authenticate the connection', type=unfrack_path())
connect_group.add_argument('-u', '--user', default=C.DEFAULT_REMOTE_USER, dest='remote_user',
help='connect as this user (default=%s)' % C.DEFAULT_REMOTE_USER)
connect_group.add_argument('-c', '--connection', dest='connection', default=C.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT)
connect_group.add_argument('-T', '--timeout', default=C.DEFAULT_TIMEOUT, type=int, dest='timeout',
help="override the connection timeout in seconds (default=%s)" % C.DEFAULT_TIMEOUT)
connect_group.add_argument('--ssh-common-args', default='', dest='ssh_common_args',
help="specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)")
connect_group.add_argument('--sftp-extra-args', default='', dest='sftp_extra_args',
help="specify extra arguments to pass to sftp only (e.g. -f, -l)")
connect_group.add_argument('--scp-extra-args', default='', dest='scp_extra_args',
help="specify extra arguments to pass to scp only (e.g. -l)")
connect_group.add_argument('--ssh-extra-args', default='', dest='ssh_extra_args',
help="specify extra arguments to pass to ssh only (e.g. -R)")
parser.add_argument_group(connect_group)
def add_fork_options(parser):
"""Add options for commands that can fork worker processes"""
parser.add_argument('-f', '--forks', dest='forks', default=C.DEFAULT_FORKS, type=int,
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
def add_inventory_options(parser):
"""Add options for commands that utilize inventory"""
parser.add_argument('-i', '--inventory', '--inventory-file', dest='inventory', action="append",
help="specify inventory host path or comma separated host list. --inventory-file is deprecated")
parser.add_argument('--list-hosts', dest='listhosts', action='store_true',
help='outputs a list of matching hosts; does not execute anything else')
parser.add_argument('-l', '--limit', default=C.DEFAULT_SUBSET, dest='subset',
help='further limit selected hosts to an additional pattern')
def add_meta_options(parser):
"""Add options for commands which can launch meta tasks from the command line"""
parser.add_argument('--force-handlers', default=C.DEFAULT_FORCE_HANDLERS, dest='force_handlers', action='store_true',
help="run handlers even if a task fails")
parser.add_argument('--flush-cache', dest='flush_cache', action='store_true',
help="clear the fact cache for every host in inventory")
def add_module_options(parser):
"""Add options for commands that load modules"""
module_path = C.config.get_configuration_definition('DEFAULT_MODULE_PATH').get('default', '')
parser.add_argument('-M', '--module-path', dest='module_path', default=None,
help="prepend colon-separated path(s) to module library (default=%s)" % module_path,
type=unfrack_path(pathsep=True), action=PrependListAction)
def add_output_options(parser):
"""Add options for commands which can change their output"""
parser.add_argument('-o', '--one-line', dest='one_line', action='store_true',
help='condense output')
parser.add_argument('-t', '--tree', dest='tree', default=None,
help='log output to this directory')
def add_runas_options(parser):
"""
Add options for commands which can run tasks as another user
Note that this includes the options from add_runas_prompt_options(). Only one of these
functions should be used.
"""
runas_group = parser.add_argument_group("Privilege Escalation Options", "control how and which user you become as on target hosts")
# consolidated privilege escalation (become)
runas_group.add_argument("-b", "--become", default=C.DEFAULT_BECOME, action="store_true", dest='become',
help="run operations with become (does not imply password prompting)")
runas_group.add_argument('--become-method', dest='become_method', default=C.DEFAULT_BECOME_METHOD,
help='privilege escalation method to use (default=%s)' % C.DEFAULT_BECOME_METHOD +
', use `ansible-doc -t become -l` to list valid choices.')
runas_group.add_argument('--become-user', default=None, dest='become_user', type=str,
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
add_runas_prompt_options(parser, runas_group=runas_group)
def add_runas_prompt_options(parser, runas_group=None):
"""
Add options for commands which need to prompt for privilege escalation credentials
Note that add_runas_options() includes these options already. Only one of the two functions
should be used.
"""
if runas_group is None:
runas_group = parser.add_argument_group("Privilege Escalation Options",
"control how and which user you become as on target hosts")
runas_group.add_argument('-K', '--ask-become-pass', dest='become_ask_pass', action='store_true',
default=C.DEFAULT_BECOME_ASK_PASS,
help='ask for privilege escalation password')
parser.add_argument_group(runas_group)
def add_runtask_options(parser):
"""Add options for commands that run a task"""
parser.add_argument('-e', '--extra-vars', dest="extra_vars", action="append",
help="set additional variables as key=value or YAML/JSON, if filename prepend with @", default=[])
def add_tasknoplay_options(parser):
"""Add options for commands that run a task w/o a defined play"""
parser.add_argument('--task-timeout', type=int, dest="task_timeout", action="store", default=C.TASK_TIMEOUT,
help="set task timeout limit in seconds, must be positive integer.")
def add_subset_options(parser):
"""Add options for commands which can run a subset of tasks"""
parser.add_argument('-t', '--tags', dest='tags', default=C.TAGS_RUN, action='append',
help="only run plays and tasks tagged with these values")
parser.add_argument('--skip-tags', dest='skip_tags', default=C.TAGS_SKIP, action='append',
help="only run plays and tasks whose tags do not match these values")
def add_vault_options(parser):
"""Add options for loading vault files"""
parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str,
help='the vault identity to use')
base_group = parser.add_mutually_exclusive_group()
base_group.add_argument('--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files',
help="vault password file", type=unfrack_path(), action='append')
|
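
To make the helpers above concrete, here is a hypothetical usage sketch. The tool name and argv are invented; the imports and function calls follow the definitions in this file.

```python
from ansible.cli.arguments import option_helpers as opt_help

# Build a parser the way the ansible CLIs do: start from the shared base
# (which already carries --version and -v/--verbose), then mix in the
# pre-canned option groups this tool needs.
parser = opt_help.create_base_parser('my-tool', desc='demo of the shared CLI options')
opt_help.add_connect_options(parser)
opt_help.add_vault_options(parser)

args = parser.parse_args(['-u', 'deploy', '--ask-vault-password', '-vv'])
print(args.remote_user)     # deploy
print(args.ask_vault_pass)  # True
print(args.verbosity)       # 2
```
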
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
meta reset_connection ignores the connection variables specified within host or group vars. Therefore connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables the way ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently other issues as well that prevent reset_connection from working with the vmware_tools connector, but they are addressed in separate issues and are out of scope for this one. This issue is about the required variables not being passed to connectors by the meta helper; this may affect other connectors, too.
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world-readable temporary files
deprecated:
why: moved to a per plugin approach that is more flexible
version: "2.14"
alternatives: mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
default: False
description:
- This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task.
- It is useful when becoming an unprivileged user.
env: []
ini:
- {key: allow_world_readable_tmpfiles, section: defaults}
type: boolean
yaml: {key: defaults.allow_world_readable_tmpfiles}
version_added: "2.1"
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
  description: List of cowsay templates that are 'safe' to use; set to an empty list to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
ANSIBLE_SSH_ARGS:
# TODO: move to ssh plugin
default: -C -o ControlMaster=auto -o ControlPersist=60s
description:
- If set, this will override the Ansible default ssh arguments.
- In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate.
- Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used.
env: [{name: ANSIBLE_SSH_ARGS}]
ini:
- {key: ssh_args, section: ssh_connection}
yaml: {key: ssh_connection.ssh_args}
ANSIBLE_SSH_CONTROL_PATH:
# TODO: move to ssh plugin
default: null
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env: [{name: ANSIBLE_SSH_CONTROL_PATH}]
ini:
- {key: control_path, section: ssh_connection}
yaml: {key: ssh_connection.control_path}
ANSIBLE_SSH_CONTROL_PATH_DIR:
# TODO: move to ssh plugin
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: ssh_connection}
yaml: {key: ssh_connection.control_path_dir}
ANSIBLE_SSH_EXECUTABLE:
# TODO: move to ssh plugin, note that ssh_utils refs this and needs to be updated if removed
default: ssh
description:
- This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
yaml: {key: ssh_connection.ssh_executable}
version_added: "2.2"
ANSIBLE_SSH_RETRIES:
# TODO: move to ssh plugin
default: 0
description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE'
env: [{name: ANSIBLE_SSH_RETRIES}]
ini:
- {key: retries, section: ssh_connection}
type: integer
yaml: {key: ssh_connection.retries}
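# Illustrative note (added, not part of the original file): the ini keys above
# map to an ansible.cfg stanza such as the following, which enables the retry
# behavior exercised by the TestSSHConnectionRetries tests earlier in this
# document:
#
#   [ssh_connection]
#   retries = 3
#   ssh_args = -C -o ControlMaster=auto -o ControlPersist=30m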
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
  description: This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: enable/disable scanning sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (via the collection metadata key
`requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore`
skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution.
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: [error, warning, ignore]
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
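# Note (added for clarity, not part of the original file): ``&color`` above
# defines a YAML anchor; every COLOR_* entry below pulls it in via the merge
# key (``<<: *color``), so the shared ``choices`` list is written once and
# inherited, with each entry then overriding name/default/description.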
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
name: Allow bare variable evaluation in conditionals
default: False
type: boolean
description:
- With this setting on (True), running conditional evaluation 'var' is treated differently than 'var.subkey' as the first is evaluated
directly while the second goes through the Jinja2 parser. But 'false' strings in 'var' get evaluated as booleans.
- With this setting off they both evaluate the same but in cases in which 'var' was 'false' (a string) it won't get evaluated as a boolean anymore.
- Currently this setting defaults to 'True' but will soon change to 'False' and the setting itself will be removed in the future.
- Expect that this setting eventually will be deprecated after 2.12
env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
ini:
- {key: conditional_bare_variables, section: defaults}
version_added: "2.8"
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will show a warning when one is received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: False
description:
- Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
- As of version 2.11, this is disabled by default.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
deprecated:
why: the command warnings feature is being removed
version: "2.14"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
      as this could represent a security risk. This option is provided to allow for backwards-compatibility;
      however, users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
      through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
      If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
CALLABLE_ACCEPT_LIST:
name: Template 'callable' accept list
default: []
  description: Accept list of callable methods to be made available to template evaluation.
env:
- name: ANSIBLE_CALLABLE_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLABLE_ENABLED'
- name: ANSIBLE_CALLABLE_ENABLED
version_added: '2.11'
ini:
- key: callable_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callable_enabled'
- key: callable_enabled
section: defaults
version_added: '2.11'
type: list
CONTROLLER_PYTHON_WARNING:
name: Running Older than Python 3.8 Warning
default: True
description: Toggle to control showing warnings related to running a Python version
older than Python 3.8 on the controller
env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}]
ini:
- {key: controller_python_warning, section: defaults}
type: boolean
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callback_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
default: ~
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering."
- "If not set, it will fallback to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module."
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
- "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
- "The 'smart' value means each new host that has no facts discovered will be scanned,
but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices: ['smart', 'explicit', 'implicit']
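# Illustrative ini sketch of switching the fact gathering policy for a project:
#   [defaults]
#   gathering = explicit
# With 'explicit', plays must opt in via 'gather_facts: true' to collect facts.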
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
default: ['all']
description:
- Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
default: 10
description:
- Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
name: Make handler M(ansible.builtin.include) static
default: False
description:
- "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'."
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
ini:
- {key: handler_includes_static, section: defaults}
type: boolean
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
    alternatives: none, as it's already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
New projects should **avoid 'merge'**.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
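# Illustrative sketch of the recommended alternative to 'merge' (the variable
# names below are hypothetical): merge dictionaries explicitly with the
# combine filter instead of changing this setting, e.g.
#   merged: "{{ base_settings | combine(env_settings, recursive=True) }}"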
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
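# Illustrative example of the behavioural difference: with jinja2_native=True a
# template like "{{ [1, 2] + [3] }}" yields a real list, [1, 2, 3]; without it
# the same expression is returned as the string "[1, 2, 3]".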
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ''
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
      style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
    - For connections using a certificate or key file to authenticate rather than an agent or passwords,
      you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
default:
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SCP_IF_SSH:
# TODO: move to ssh plugin
default: smart
description:
- "Preferred method to use when transferring files over ssh."
- When set to smart, Ansible will try them until one succeeds or they all fail.
    - If set to True, it will force 'scp'; if False it will use 'sftp'.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_SFTP_BATCH_MODE:
# TODO: move to ssh plugin
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: boolean
yaml: {key: ssh_connection.sftp_batch_mode}
DEFAULT_SSH_TRANSFER_METHOD:
# TODO: move to ssh plugin
default:
description: 'unused?'
# - "Preferred method to use when transferring files over ssh"
# - Setting to smart will try them until one succeeds or they all fail
#choices: ['sftp', 'scp', 'dd', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output, you can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger, this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
name: Task include static
default: False
description:
    - The `include` tasks can be static or dynamic; this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task.
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
ini:
- {key: task_includes_static, section: defaults}
type: boolean
version_added: "2.1"
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
    alternatives: None, as it's already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
default:
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
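# Illustrative ini sketch (labels and password file paths are hypothetical):
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@~/.vault_pass_prod
# Each entry uses the same label@source form as the --vault-id CLI option.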
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
      made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
      See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by setting this option to 'ignore'; setting it to 'error' makes a duplicate key a fatal error instead.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: ['warn', 'error', 'ignore']
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}]
ini:
- {key: connection_facts_modules, section: defaults}
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role or collection skeleton directory
default:
description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
    - Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
    - This config option controls whether the progress wheel is shown or not.
    - The default is to show the progress wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
HOST_KEY_CHECKING:
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
  description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, issue a warning, or just ignore the mismatch.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices: ['warning', 'error', 'ignore']
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto_legacy
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The
fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed
later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default
value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases
that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the
      default behavior will change to that of ``auto`` in a future Ansible release).
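# Illustrative ways to pin the interpreter instead of relying on discovery
# (the hostname is hypothetical):
#   in inventory:   myhost ansible_python_interpreter=/usr/bin/python3
#   in ansible.cfg: [defaults]
#                   interpreter_python = /usr/bin/python3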
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
centos: &rhelish
'6': /usr/bin/python
'8': /usr/libexec/platform-python
debian:
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
oracle: *rhelish
redhat: *rhelish
rhel: *rhelish
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- /usr/bin/python
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- python2.7
- python2.6
- /usr/libexec/platform-python
- /usr/bin/python3
- python
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
    - When 'never' it will allow the group name but warn about the issue.
    - When 'ignore', it does the same as 'never', without issuing a warning.
    - When 'always' it will replace any invalid characters with '_' (underscore) and warn the user.
- When 'silently', it does the same as 'always', without issuing a warning.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices: ['always', 'never', 'ignore', 'silently']
version_added: '2.8'
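# Illustrative example: an inventory group named 'web-servers' contains '-',
# which is not valid in a variable name; with 'always' the group becomes
# 'web_servers', while 'never' keeps the original name and shows a warning.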
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description: Toggle to turn on inventory caching
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_facts
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls whether ansible-inventory accurately reflects Ansible's view into inventory, or is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
  description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
- The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory.
- The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory.
- The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices: [ top, bottom, all ]
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
    - This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
- Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
- Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices: ['demand', 'start']
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
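# Illustrative example: without this list, templating "{{ data | to_json }}"
# could be re-parsed back into a dict/list during templating; listing to_json
# here keeps the filter's string output intact.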
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
  description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
  description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Whitelist for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
  description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
  description: This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump
    host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
    to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
meta reset_connection ignores the connection variables specified within host or group vars. Therefore connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables the way Ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently other issues that also prevent reset_connection from working with the vmware_tools connector, but they are addressed in separate issues and are out of scope here. This issue is about the required variables not being passed to connectors by the meta helper; this may affect other connectors, too.
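A hedged sketch (placement and surrounding names assumed; not the actual patch) of what the meta handler would need to do before calling `connection.reset()`: resolve the connection plugin's options from the host's effective variables, the same way a regular task execution does. `set_options(var_options=...)` and `VariableManager.get_vars()` are existing public APIs.
```python
# hypothetical placement inside the strategy's reset_connection handler:
# fetch the host's effective variables and hand them to the connection plugin
# before resetting, so required options such as vmware_host can be resolved
# from host/group vars instead of failing as unset.
all_vars = variable_manager.get_vars(play=play, host=target_host, task=task)
connection.set_options(var_options=all_vars)
connection.reset()
```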
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/config/manager.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import io
import os
import os.path
import sys
import stat
import tempfile
import traceback
from collections import namedtuple
from yaml import load as yaml_load
try:
# use C version if possible for speedup
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeLoader
from ansible.config.data import ConfigData
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils._text import to_text, to_bytes, to_native
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.six.moves import configparser
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.quoting import unquote
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils import py3compat
from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath
Plugin = namedtuple('Plugin', 'name type')
Setting = namedtuple('Setting', 'name value origin type')
INTERNAL_DEFS = {'lookup': ('_terms',)}
def _get_entry(plugin_type, plugin_name, config):
''' construct entry for requested config '''
entry = ''
if plugin_type:
entry += 'plugin_type: %s ' % plugin_type
if plugin_name:
entry += 'plugin: %s ' % plugin_name
entry += 'setting: %s ' % config
return entry
# FIXME: see if we can unify in module_utils with similar function used by argspec
def ensure_type(value, value_type, origin=None):
''' return a configuration variable with casting
:arg value: The value to ensure correct typing of
:kwarg value_type: The type of the value. This can be any of the following strings:
:boolean: sets the value to a True or False value
:bool: Same as 'boolean'
:integer: Sets the value to an integer or raises a ValueType error
:int: Same as 'integer'
:float: Sets the value to a float or raises a ValueType error
:list: Treats the value as a comma separated list. Split the value
and return it as a python list.
:none: Sets the value to None
:path: Expands any environment variables and tildes in the value.
:tmppath: Create a unique temporary directory inside of the directory
specified by value and return its path.
:temppath: Same as 'tmppath'
:tmp: Same as 'tmppath'
:pathlist: Treat the value as a typical PATH string. (On POSIX, this
means colon separated strings.) Split the value and then expand
each part for environment variables and tildes.
:pathspec: Treat the value as a PATH string. Expands any environment variables
and tildes in the value.
:str: Sets the value to string types.
:string: Same as 'str'
'''
errmsg = ''
basedir = None
if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)):
basedir = origin
if value_type:
value_type = value_type.lower()
if value is not None:
if value_type in ('boolean', 'bool'):
value = boolean(value, strict=False)
elif value_type in ('integer', 'int'):
value = int(value)
elif value_type == 'float':
value = float(value)
elif value_type == 'list':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
elif not isinstance(value, Sequence):
errmsg = 'list'
elif value_type == 'none':
if value == "None":
value = None
if value is not None:
errmsg = 'None'
elif value_type == 'path':
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
else:
errmsg = 'path'
elif value_type in ('tmp', 'temppath', 'tmppath'):
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
if not os.path.exists(value):
makedirs_safe(value, 0o700)
prefix = 'ansible-local-%s' % os.getpid()
value = tempfile.mkdtemp(prefix=prefix, dir=value)
atexit.register(cleanup_tmp_file, value, warn=True)
else:
errmsg = 'temppath'
elif value_type == 'pathspec':
if isinstance(value, string_types):
value = value.split(os.pathsep)
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathspec'
elif value_type == 'pathlist':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathlist'
elif value_type in ('dict', 'dictionary'):
if not isinstance(value, Mapping):
errmsg = 'dictionary'
elif value_type in ('str', 'string'):
if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode, bool, int, float, complex)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
else:
errmsg = 'string'
# defaults to string type
elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
if errmsg:
raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value)))
return to_text(value, errors='surrogate_or_strict', nonstring='passthru')
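# Illustrative sketch (hypothetical helper, not part of the original module):
# a quick demonstration of the coercions documented in the docstring above.
def _demo_ensure_type():
    assert ensure_type('yes', 'bool') is True
    assert ensure_type('42', 'int') == 42
    assert ensure_type('a, b , c', 'list') == ['a', 'b', 'c']
    # 'none' only converts the literal string "None"
    assert ensure_type('None', 'none') is None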
# FIXME: see if this can live in utils/path
def resolve_path(path, basedir=None):
''' resolve relative or 'variable' paths '''
if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}}
path = path.replace('{{CWD}}', os.getcwd())
return unfrackpath(path, follow=False, basedir=basedir)
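# Illustrative note (hypothetical values, not in the original source):
# resolve_path('{{CWD}}/hosts') expands the magic token to os.getcwd() before
# unfrackpath() normalizes the result, while a plain '~/hosts' is
# tilde-expanded relative to basedir when one is given.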
# FIXME: generic file type?
def get_config_type(cfile):
ftype = None
if cfile is not None:
ext = os.path.splitext(cfile)[-1]
if ext in ('.ini', '.cfg'):
ftype = 'ini'
elif ext in ('.yaml', '.yml'):
ftype = 'yaml'
else:
raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext)))
return ftype
# FIXME: can move to module_utils for use for ini plugins also?
def get_ini_config_value(p, entry):
''' returns the value of last ini entry found '''
value = None
if p is not None:
try:
value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
except Exception: # FIXME: actually report issues here
pass
return value
def find_ini_config_file(warnings=None):
''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''
# FIXME: eventually deprecate ini configs
if warnings is None:
# Note: In this case, warnings does nothing
warnings = set()
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
SENTINEL = object
potential_paths = []
# Environment setting
path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL)
if path_from_env is not SENTINEL:
path_from_env = unfrackpath(path_from_env, follow=False)
if os.path.isdir(to_bytes(path_from_env)):
path_from_env = os.path.join(path_from_env, "ansible.cfg")
potential_paths.append(path_from_env)
# Current working directory
warn_cmd_public = False
try:
cwd = os.getcwd()
perms = os.stat(cwd)
cwd_cfg = os.path.join(cwd, "ansible.cfg")
if perms.st_mode & stat.S_IWOTH:
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
if os.path.exists(cwd_cfg):
warn_cmd_public = True
else:
potential_paths.append(to_text(cwd_cfg, errors='surrogate_or_strict'))
except OSError:
# If we can't access cwd, we'll simply skip it as a possible config source
pass
# Per user location
potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False))
# System location
potential_paths.append("/etc/ansible/ansible.cfg")
for path in potential_paths:
b_path = to_bytes(path)
if os.path.exists(b_path) and os.access(b_path, os.R_OK):
break
else:
path = None
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
if path_from_env != path and warn_cmd_public:
warnings.add(u"Ansible is being run in a world writable directory (%s),"
u" ignoring it as an ansible.cfg source."
u" For more information see"
u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir"
% to_text(cwd))
return path
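# Illustrative sketch (hypothetical helper): the search order implemented above
# is $ANSIBLE_CONFIG, then ./ansible.cfg (skipped when the CWD is world
# writable), then ~/.ansible.cfg, then /etc/ansible/ansible.cfg.
def _demo_find_ini_config_file():
    warnings = set()
    path = find_ini_config_file(warnings)
    for warning in warnings:
        print(warning)
    return path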
def _add_base_defs_deprecations(base_defs):
'''Add deprecation source 'ansible.builtin' to deprecations in base.yml'''
def process(entry):
if 'deprecated' in entry:
entry['deprecated']['collection_name'] = 'ansible.builtin'
for dummy, data in base_defs.items():
process(data)
for section in ('ini', 'env', 'vars'):
if section in data:
for entry in data[section]:
process(entry)
class ConfigManager(object):
DEPRECATED = []
WARNINGS = set()
def __init__(self, conf_file=None, defs_file=None):
self._base_defs = {}
self._plugins = {}
self._parsers = {}
self._config_file = conf_file
self.data = ConfigData()
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
_add_base_defs_deprecations(self._base_defs)
if self._config_file is None:
# set config using ini
self._config_file = find_ini_config_file(self.WARNINGS)
# consume configuration
if self._config_file:
# initialize parser and read config
self._parse_config_file()
# update constants
self.update_config_data()
def _read_config_yaml_file(self, yml_file):
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
yml_file = to_bytes(yml_file)
if os.path.exists(yml_file):
with open(yml_file, 'rb') as config_def:
return yaml_load(config_def, Loader=SafeLoader) or {}
raise AnsibleError(
"Missing base YAML definition file (bad install?): %s" % to_native(yml_file))
def _parse_config_file(self, cfile=None):
''' return flat configuration settings from file(s) '''
# TODO: take list of files with merge/nomerge
if cfile is None:
cfile = self._config_file
ftype = get_config_type(cfile)
if cfile is not None:
if ftype == 'ini':
kwargs = {}
if PY3:
kwargs['inline_comment_prefixes'] = (';',)
self._parsers[cfile] = configparser.ConfigParser(**kwargs)
with open(to_bytes(cfile), 'rb') as f:
try:
cfg_text = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError as e:
raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e)))
try:
if PY3:
self._parsers[cfile].read_string(cfg_text)
else:
cfg_file = io.StringIO(cfg_text)
self._parsers[cfile].readfp(cfg_file)
except configparser.Error as e:
raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e)))
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# with open(cfile, 'rb') as config_stream:
# self._parsers[cfile] = yaml.safe_load(config_stream)
else:
raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype))
def _find_yaml_config_files(self):
''' Load YAML Config Files in order, check merge flags, keep origin of settings'''
pass
def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None):
options = {}
defs = self.get_configuration_definitions(plugin_type, name)
for option in defs:
options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct)
return options
def get_plugin_vars(self, plugin_type, name):
pvars = []
for pdef in self.get_configuration_definitions(plugin_type, name).values():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
pvars.append(var_entry['name'])
return pvars
def get_configuration_definition(self, name, plugin_type=None, plugin_name=None):
ret = {}
if plugin_type is None:
ret = self._base_defs.get(name, None)
elif plugin_name is None:
ret = self._plugins.get(plugin_type, {}).get(name, None)
else:
ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None)
return ret
def get_configuration_definitions(self, plugin_type=None, name=None, ignore_private=False):
''' Just list the possible settings, either the base settings or those for a specific plugin type or plugin '''
ret = {}
if plugin_type is None:
ret = self._base_defs
elif name is None:
ret = self._plugins.get(plugin_type, {})
else:
ret = self._plugins.get(plugin_type, {}).get(name, {})
if ignore_private:
for cdef in list(ret.keys()):
if cdef.startswith('_'):
del ret[cdef]
return ret
def _loop_entries(self, container, entry_list):
''' repeat code for value entry assignment '''
value = None
origin = None
for entry in entry_list:
name = entry.get('name')
try:
temp_value = container.get(name, None)
except UnicodeEncodeError:
self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name)))
continue
if temp_value is not None: # only set if entry is defined in container
# inline vault variables should be converted to a text string
if isinstance(temp_value, AnsibleVaultEncryptedUnicode):
temp_value = to_text(temp_value, errors='surrogate_or_strict')
value = temp_value
origin = name
# deal with deprecation of setting source, if used
if 'deprecated' in entry:
self.DEPRECATED.append((entry['name'], entry['deprecated']))
return value, origin
def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' wrapper '''
try:
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
keys=keys, variables=variables, direct=direct)
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e)
return value
def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' Given a config key figure out the actual value and report on the origin of the settings '''
if cfile is None:
# use default config
cfile = self._config_file
# Note: sources that are lists listed in low to high precedence (last one wins)
value = None
origin = None
defs = self.get_configuration_definitions(plugin_type, plugin_name)
if config in defs:
aliases = defs[config].get('aliases', [])
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
direct_aliases = []
if direct:
direct_aliases = [direct[alias] for alias in aliases if alias in direct]
if direct and config in direct:
value = direct[config]
origin = 'Direct'
elif direct and direct_aliases:
value = direct_aliases[0]
origin = 'Direct'
else:
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
if variables and defs[config].get('vars'):
value, origin = self._loop_entries(variables, defs[config]['vars'])
origin = 'var: %s' % origin
# use playbook keywords if you have em
if value is None and keys:
if config in keys:
value = keys[config]
keyword = config
elif aliases:
for alias in aliases:
if alias in keys:
value = keys[alias]
keyword = alias
break
if value is not None:
origin = 'keyword: %s' % keyword
# env vars are next precedence
if value is None and defs[config].get('env'):
value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
origin = 'env: %s' % origin
# try config file entries next, if we have one
if self._parsers.get(cfile, None) is None:
self._parse_config_file(cfile)
if value is None and cfile is not None:
ftype = get_config_type(cfile)
if ftype and defs[config].get(ftype):
if ftype == 'ini':
# load from ini config
try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe
for ini_entry in defs[config]['ini']:
temp_value = get_ini_config_value(self._parsers[cfile], ini_entry)
if temp_value is not None:
value = temp_value
origin = cfile
if 'deprecated' in ini_entry:
self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated']))
except Exception as e:
sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e)))
elif ftype == 'yaml':
# FIXME: implement, also , break down key from defs (. notation???)
origin = cfile
# set default if we got here w/o a value
if value is None:
if defs[config].get('required', False):
if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}):
raise AnsibleError("No setting was provided for required configuration %s" %
to_native(_get_entry(plugin_type, plugin_name, config)))
else:
value = defs[config].get('default')
origin = 'default'
# skip typing as this is a templated default that will be resolved later in constants, which has needed vars
if plugin_type is None and isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')):
return value, origin
# ensure correct type, can raise exceptions on mismatched types
try:
value = ensure_type(value, defs[config].get('type'), origin=origin)
except ValueError as e:
if origin.startswith('env:') and value == '':
# this is empty env var for non string so we can set to default
origin = 'default'
value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin)
else:
raise AnsibleOptionsError('Invalid type for configuration option %s: %s' %
(to_native(_get_entry(plugin_type, plugin_name, config)), to_native(e)))
# deal with restricted values
if value is not None and 'choices' in defs[config] and defs[config]['choices'] is not None:
if value not in defs[config]['choices']:
raise AnsibleOptionsError('Invalid value "%s" for configuration option "%s", valid values are: %s' %
(value, to_native(_get_entry(plugin_type, plugin_name, config)), defs[config]['choices']))
# deal with deprecation of the setting
if 'deprecated' in defs[config] and origin != 'default':
self.DEPRECATED.append((config, defs[config].get('deprecated')))
else:
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
return value, origin
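# Illustrative sketch (hypothetical method; assumes the ssh plugin's
# definitions have already been loaded). Precedence resolved above, lowest to
# highest: default < ini file < environment < playbook keyword < 'vars:' entry
# < direct plugin argument. Passing host variables is what lets plugin options
# such as the ssh plugin's 'host' resolve from inventory.
def _demo_get_option_origin(self, hostvars):
    value, origin = self.get_config_value_and_origin(
        'host', plugin_type='connection', plugin_name='ssh', variables=hostvars)
    return value, origin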
def initialize_plugin_configuration_definitions(self, plugin_type, name, defs):
if plugin_type not in self._plugins:
self._plugins[plugin_type] = {}
self._plugins[plugin_type][name] = defs
def update_config_data(self, defs=None, configfile=None):
''' really: update constants '''
if defs is None:
defs = self._base_defs
if configfile is None:
configfile = self._config_file
if not isinstance(defs, dict):
raise AnsibleOptionsError("Invalid configuration definition type: %s for %s" % (type(defs), defs))
# update the constant for config file
self.data.update_setting(Setting('CONFIG_FILE', configfile, '', 'string'))
origin = None
# env and config defs can have several entries, ordered in list from lowest to highest precedence
for config in defs:
if not isinstance(defs[config], dict):
raise AnsibleOptionsError("Invalid configuration definition '%s': type is %s" % (to_native(config), type(defs[config])))
# get value and origin
try:
value, origin = self.get_config_value_and_origin(config, configfile)
except Exception as e:
# Printing the problem here because, in the current code:
# (1) we can't reach the error handler for AnsibleError before we
# hit a different error due to lack of working config.
# (2) We don't have access to display yet because display depends on config
# being properly loaded.
#
# If we start getting double errors printed from this section of code, then the
# above problem #1 has been fixed. Revamp this to be more like the try: except
# in get_config_value() at that time.
sys.stderr.write("Unhandled error:\n %s\n\n" % traceback.format_exc())
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
# set the constant
self.data.update_setting(Setting(config, value, origin, defs[config].get('type', 'string')))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
meta reset_connection ignores the connection variables specified within host or group vars. Therefore connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables the way Ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently other issues that also prevent reset_connection from working with the vmware_tools connector, but they are addressed in separate issues and are out of scope here. This issue is about the required variables not being passed to connectors by the meta helper; this may affect other connectors, too.
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/playbook/play_context.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.module_utils.compat.paramiko import paramiko
from ansible.module_utils.six import iteritems
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.plugins import get_plugin_class
from ansible.utils.display import Display
from ansible.plugins.loader import get_shell_plugin
from ansible.utils.ssh_functions import check_for_controlpersist
display = Display()
__all__ = ['PlayContext']
TASK_ATTRIBUTE_OVERRIDES = (
'become',
'become_user',
'become_pass',
'become_method',
'become_flags',
'connection',
'docker_extra_args', # TODO: remove
'delegate_to',
'no_log',
'remote_user',
)
RESET_VARS = (
'ansible_connection',
'ansible_user',
'ansible_host',
'ansible_port',
# TODO: ???
'ansible_docker_extra_args',
'ansible_ssh_host',
'ansible_ssh_pass',
'ansible_ssh_port',
'ansible_ssh_user',
'ansible_ssh_private_key_file',
'ansible_ssh_pipelining',
'ansible_ssh_executable',
)
class PlayContext(Base):
'''
This class is used to consolidate the connection information for
hosts in a play and child tasks, where the task may override some
connection/authentication information.
'''
# base
_module_compression = FieldAttribute(isa='string', default=C.DEFAULT_MODULE_COMPRESSION)
_shell = FieldAttribute(isa='string')
_executable = FieldAttribute(isa='string', default=C.DEFAULT_EXECUTABLE)
# connection fields, some are inherited from Base:
# (connection, port, remote_user, environment, no_log)
_remote_addr = FieldAttribute(isa='string')
_password = FieldAttribute(isa='string')
_timeout = FieldAttribute(isa='int', default=C.DEFAULT_TIMEOUT)
_connection_user = FieldAttribute(isa='string')
_private_key_file = FieldAttribute(isa='string', default=C.DEFAULT_PRIVATE_KEY_FILE)
_pipelining = FieldAttribute(isa='bool', default=C.ANSIBLE_PIPELINING)
# networking modules
_network_os = FieldAttribute(isa='string')
# docker FIXME: remove these
_docker_extra_args = FieldAttribute(isa='string')
# ssh # FIXME: remove these
_ssh_executable = FieldAttribute(isa='string', default=C.ANSIBLE_SSH_EXECUTABLE)
_ssh_args = FieldAttribute(isa='string', default=C.ANSIBLE_SSH_ARGS)
_ssh_common_args = FieldAttribute(isa='string')
_sftp_extra_args = FieldAttribute(isa='string')
_scp_extra_args = FieldAttribute(isa='string')
_ssh_extra_args = FieldAttribute(isa='string')
_ssh_transfer_method = FieldAttribute(isa='string', default=C.DEFAULT_SSH_TRANSFER_METHOD)
# ???
_connection_lockfd = FieldAttribute(isa='int')
# privilege escalation fields
_become = FieldAttribute(isa='bool')
_become_method = FieldAttribute(isa='string')
_become_user = FieldAttribute(isa='string')
_become_pass = FieldAttribute(isa='string')
_become_exe = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_EXE)
_become_flags = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_FLAGS)
_prompt = FieldAttribute(isa='string')
# general flags
_verbosity = FieldAttribute(isa='int', default=0)
_only_tags = FieldAttribute(isa='set', default=set)
_skip_tags = FieldAttribute(isa='set', default=set)
_start_at_task = FieldAttribute(isa='string')
_step = FieldAttribute(isa='bool', default=False)
# "PlayContext.force_handlers should not be used, the calling code should be using play itself instead"
_force_handlers = FieldAttribute(isa='bool', default=False)
def __init__(self, play=None, passwords=None, connection_lockfd=None):
# Note: play is really not optional. The only time it could be omitted is when we create
# a PlayContext just so we can invoke its deserialize method to load it from a serialized
# data source.
super(PlayContext, self).__init__()
if passwords is None:
passwords = {}
self.password = passwords.get('conn_pass', '')
self.become_pass = passwords.get('become_pass', '')
self._become_plugin = None
self.prompt = ''
self.success_key = ''
# a file descriptor to be used during locking operations
self.connection_lockfd = connection_lockfd
# set options before play to allow play to override them
if context.CLIARGS:
self.set_attributes_from_cli()
if play:
self.set_attributes_from_play(play)
def set_attributes_from_plugin(self, plugin):
# generic derived from connection plugin, temporary for backwards compat, in the end we should not set play_context properties
# get options for plugins
options = C.config.get_configuration_definitions(get_plugin_class(plugin), plugin._load_name)
for option in options:
if option:
flag = options[option].get('name')
if flag:
setattr(self, flag, self.connection.get_option(flag))
def set_attributes_from_play(self, play):
self.force_handlers = play.force_handlers
def set_attributes_from_cli(self):
'''
Configures this connection information instance with data from
options specified by the user on the command line. These have a
lower precedence than those set on the play or host.
'''
if context.CLIARGS.get('timeout', False):
self.timeout = int(context.CLIARGS['timeout'])
# From the command line. These should probably be used directly by plugins instead
# For now, they are likely to be moved to FieldAttribute defaults
self.private_key_file = context.CLIARGS.get('private_key_file') # Else default
self.verbosity = context.CLIARGS.get('verbosity') # Else default
self.ssh_common_args = context.CLIARGS.get('ssh_common_args') # Else default
self.ssh_extra_args = context.CLIARGS.get('ssh_extra_args') # Else default
self.sftp_extra_args = context.CLIARGS.get('sftp_extra_args') # Else default
self.scp_extra_args = context.CLIARGS.get('scp_extra_args') # Else default
# Not every cli that uses PlayContext has these command line args so have a default
self.start_at_task = context.CLIARGS.get('start_at_task', None) # Else default
def set_task_and_variable_override(self, task, variables, templar):
'''
Sets attributes from the task if they are set, which will override
those from the play.
:arg task: the task object with the parameters that were set on it
:arg variables: variables from inventory
:arg templar: templar instance if templating variables is needed
'''
new_info = self.copy()
# loop through a subset of attributes on the task object and set
# connection fields based on their values
for attr in TASK_ATTRIBUTE_OVERRIDES:
if hasattr(task, attr):
attr_val = getattr(task, attr)
if attr_val is not None:
setattr(new_info, attr, attr_val)
# next, use the MAGIC_VARIABLE_MAPPING dictionary to update this
# connection info object with 'magic' variables from the variable list.
# If the value 'ansible_delegated_vars' is in the variables, it means
# we have a delegated-to host, so we check there first before looking
# at the variables in general
if task.delegate_to is not None:
# In the case of a loop, the delegated_to host may have been
# templated based on the loop variable, so we try and locate
# the host name in the delegated variable dictionary here
delegated_host_name = templar.template(task.delegate_to)
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(delegated_host_name, dict())
delegated_transport = C.DEFAULT_TRANSPORT
for transport_var in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if transport_var in delegated_vars:
delegated_transport = delegated_vars[transport_var]
break
# make sure this delegated_to host has something set for its remote
# address, otherwise we default to connecting to it by name. This
# may happen when users put an IP entry into their inventory, or if
# they rely on DNS for a non-inventory hostname
for address_var in ('ansible_%s_host' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_addr'):
if address_var in delegated_vars:
break
else:
display.debug("no remote address found for delegated host %s\nusing its name, so success depends on DNS resolution" % delegated_host_name)
delegated_vars['ansible_host'] = delegated_host_name
# reset the port back to the default if none was specified, to prevent
# the delegated host from inheriting the original host's setting
for port_var in ('ansible_%s_port' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('port'):
if port_var in delegated_vars:
break
else:
if delegated_transport == 'winrm':
delegated_vars['ansible_port'] = 5986
else:
delegated_vars['ansible_port'] = C.DEFAULT_REMOTE_PORT
# and likewise for the remote user
for user_var in ('ansible_%s_user' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_user'):
if user_var in delegated_vars and delegated_vars[user_var]:
break
else:
delegated_vars['ansible_user'] = task.remote_user or self.remote_user
else:
delegated_vars = dict()
# setup shell
for exe_var in C.MAGIC_VARIABLE_MAPPING.get('executable'):
if exe_var in variables:
setattr(new_info, 'executable', variables.get(exe_var))
attrs_considered = []
for (attr, variable_names) in iteritems(C.MAGIC_VARIABLE_MAPPING):
for variable_name in variable_names:
if attr in attrs_considered:
continue
# if this is a delegated task, ONLY use the delegated host's vars; avoid the vars of the host the task is delegated for
if task.delegate_to is not None:
if isinstance(delegated_vars, dict) and variable_name in delegated_vars:
setattr(new_info, attr, delegated_vars[variable_name])
attrs_considered.append(attr)
elif variable_name in variables:
setattr(new_info, attr, variables[variable_name])
attrs_considered.append(attr)
# no else, as no other vars should be considered
# become legacy updates -- from inventory file (inventory overrides
# commandline)
for become_pass_name in C.MAGIC_VARIABLE_MAPPING.get('become_pass'):
if become_pass_name in variables:
break
# make sure we get port defaults if needed
if new_info.port is None and C.DEFAULT_REMOTE_PORT is not None:
new_info.port = int(C.DEFAULT_REMOTE_PORT)
# special overrides for the connection setting
if len(delegated_vars) > 0:
# in the event that we were using local before make sure to reset the
# connection type to the default transport for the delegated-to host,
# if not otherwise specified
for connection_type in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if connection_type in delegated_vars:
break
else:
remote_addr_local = new_info.remote_addr in C.LOCALHOST
inv_hostname_local = delegated_vars.get('inventory_hostname') in C.LOCALHOST
if remote_addr_local and inv_hostname_local:
setattr(new_info, 'connection', 'local')
elif getattr(new_info, 'connection', None) == 'local' and (not remote_addr_local or not inv_hostname_local):
setattr(new_info, 'connection', C.DEFAULT_TRANSPORT)
# we store original in 'connection_user' for use of network/other modules that fallback to it as login user
# connection_user is to be deprecated once connection=local is removed, as local resets remote_user
if new_info.connection == 'local':
if not new_info.connection_user:
new_info.connection_user = new_info.remote_user
# set no_log to default if it was not previously set
if new_info.no_log is None:
new_info.no_log = C.DEFAULT_NO_LOG
if task.check_mode is not None:
new_info.check_mode = task.check_mode
if task.diff is not None:
new_info.diff = task.diff
return new_info
def set_become_plugin(self, plugin):
self._become_plugin = plugin
def make_become_cmd(self, cmd, executable=None):
""" helper function to create privilege escalation commands """
display.deprecated(
"PlayContext.make_become_cmd should not be used, the calling code should be using become plugins instead",
version="2.12", collection_name='ansible.builtin'
)
if not cmd or not self.become:
return cmd
become_method = self.become_method
# load/call become plugins here
plugin = self._become_plugin
if plugin:
options = {
'become_exe': self.become_exe or become_method,
'become_flags': self.become_flags or '',
'become_user': self.become_user,
'become_pass': self.become_pass
}
plugin.set_options(direct=options)
if not executable:
executable = self.executable
shell = get_shell_plugin(executable=executable)
cmd = plugin.build_become_command(cmd, shell)
# for backwards compat:
if self.become_pass:
self.prompt = plugin.prompt
else:
raise AnsibleError("Privilege escalation method not found: %s" % become_method)
return cmd
def update_vars(self, variables):
'''
Adds 'magic' variables relating to connections to the variable dictionary provided.
This is a legacy from runner, kept in case users need to access these values from the play.
'''
for prop, var_list in C.MAGIC_VARIABLE_MAPPING.items():
try:
if 'become' in prop:
continue
var_val = getattr(self, prop)
for var_opt in var_list:
if var_opt not in variables and var_val is not None:
variables[var_opt] = var_val
except AttributeError:
continue
def _get_attr_connection(self):
''' connections are special, this takes care of responding correctly '''
conn_type = None
if self._attributes['connection'] == 'smart':
conn_type = 'ssh'
# see if SSH can support ControlPersist if not use paramiko
if not check_for_controlpersist(self.ssh_executable) and paramiko is not None:
conn_type = "paramiko"
# if someone did `connection: persistent`, default it to using a persistent paramiko connection to avoid problems
elif self._attributes['connection'] == 'persistent' and paramiko is not None:
conn_type = 'paramiko'
if conn_type:
self.connection = conn_type
return self._attributes['connection']
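# Illustrative sketch (hypothetical helper, not part of the original module):
# how a strategy derives a per-task context. The play-level PlayContext is
# copied, task keywords override it, and 'magic' inventory variables
# (ansible_host, ansible_user, ...) are layered on top via MAGIC_VARIABLE_MAPPING.
def _demo_task_play_context(play_context, task, task_vars, templar):
    task_pc = play_context.set_task_and_variable_override(task, task_vars, templar)
    # push the resolved values back so later lookups can see them as variables
    task_pc.update_vars(task_vars)
    return task_pc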
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
meta reset_connection ignores the connection variables specified within host or group vars. Therefore connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables the way Ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently other issues that also prevent reset_connection from working with the vmware_tools connector, but they are addressed in separate issues and are out of scope here. This issue is about the required variables not being passed to connectors by the meta helper; this may affect other connectors, too.
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via ssh client binary
description:
- This connection plugin allows ansible to communicate with the target machines via normal ssh command line.
- Ansible does not expose a channel to allow communication between the user and the ssh process to accept
a password manually to decrypt an ssh key when using this connection plugin (which is the default). The
use of ``ssh-agent`` is highly recommended.
author: ansible (@core)
extends_documentation_fragment:
- connection_pipelining
version_added: historical
options:
host:
description: Hostname/ip to connect to.
default: inventory_hostname
vars:
- name: ansible_host
- name: ansible_ssh_host
host_key_checking:
description: Determines if ssh should check host keys
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description: Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
default: ''
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all ssh cli tools
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all ssh CLI tools
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
ssh_executable:
default: ssh
description:
- This defines the location of the ssh binary. It defaults to ``ssh`` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to ``sftp`` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to `scp` which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
description: Extra args exclusive to the ``scp`` CLI
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
sftp_extra_args:
description: Extra args exclusive to the ``sftp`` CLI
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
ssh_extra_args:
description: Extra args exclusive to the 'ssh' CLI
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
retries:
# constant: ANSIBLE_SSH_RETRIES
description: Number of attempts to connect.
default: 3
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
default: 22
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the ssh client binary choose the user as it normally does.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
pipelining:
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
control_path:
description:
- This is the location to save ssh's ControlPath sockets; it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: 'yes'
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
scp_if_ssh:
default: smart
description:
- "Preferred method to use when transfering files over ssh"
- When set to smart, Ansible will try them until one succeeds or they all fail
- If set to True, it will force 'scp', if False it will use 'sftp'
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: 'yes'
description: add -tt to ssh commands to force tty allocation
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
'''
import errno
import fcntl
import hashlib
import os
import pty
import re
import subprocess
import time
from functools import wraps
from ansible import constants as C
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # Php always returns error 255
)
SSHPASS_AVAILABLE = None
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display):
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
# sshpass returns codes are 1-6. We handle 5 previously, so this catches other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
if signature in return_tuple[1]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
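# Summary sketch (not in the original source): _handle_error() maps process
# results to outcomes -- sshpass rc 5 raises AnsibleAuthenticationFailure
# (never retried), an ssh rc of 255 raises AnsibleConnectionFailure (retried
# by _ssh_retry), and rcs 1-254 are logged and treated as the remote command's
# own return code.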
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(C.ANSIBLE_SSH_RETRIES) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = args[0]
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = args[0]
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
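# A minimal standalone sketch (illustrative only, not part of the plugin) of the
# backoff schedule the decorator above applies between attempts: 2**attempt - 1
# seconds, capped at 30, i.e. 0, 1, 3, 7, 15, 30, 30, ... for successive retries.
def _example_retry_pause(attempt):
    """Return the pause (seconds) _ssh_retry would sleep after `attempt` (0-based)."""
    return min(2 ** attempt - 1, 30)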
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = C.ANSIBLE_SSH_CONTROL_PATH
self.control_path_dir = C.ANSIBLE_SSH_CONTROL_PATH_DIR
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
@staticmethod
def _create_control_path(host, port, user, connection=None, pid=None):
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
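    # For example (illustrative): _create_control_path('example.com', 22, 'root')
    # returns '%(directory)s/' followed by the first ten hex digits of
    # sha1(b'example.com-22-root'); callers later substitute the control path
    # directory for the '%(directory)s' placeholder.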
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
        # distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
        but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
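    # For example (illustrative): a command such as
    # [b'ssh', b'-o', b'ControlPersist=60s', b'-o', b'ControlPath=/tmp/cp-%C']
    # yields (True, True), while a plain [b'ssh', b'host'] yields (False, False).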
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
            more than once so it must be persistent (i.e. a list is okay but a
            StringIO would not be)
        :arg explanation: A text string explaining why the arguments
            were added. It will be displayed at a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self._play_context.remote_addr)
b_command += b_args
def _build_command(self, binary, subsystem, *other_args):
'''
        Takes an executable (ssh, scp, sftp or a wrapper) and optional extra arguments and returns the remote command
        wrapped in local ssh shell commands and ready for execution.
        :arg binary: actual executable to use to execute command.
        :arg subsystem: type of executable provided, ssh/sftp/scp; needed because wrappers for ssh might have different names.
        :arg other_args: any additional arguments to pass to the ssh binary
'''
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
if conn_password:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords, you must install the sshpass program")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if subsystem == 'sftp' and C.DEFAULT_SFTP_BATCH_MODE:
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if self._play_context.verbosity > 3:
b_command.append(b'-vvv')
#
# Next, we add [ssh_connection]ssh_args from ansible.cfg.
#
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments controlled by configuration file settings
# (e.g. host_key_checking) or inventory variables (ansible_ssh_port) or
# a combination thereof.
if not C.HOST_KEY_CHECKING:
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
if self._play_context.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self._play_context.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self._play_context.private_key_file
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
user = self._play_context.remote_user
if user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self._play_context.remote_user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(self._play_context.timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = getattr(self._play_context, opt, None)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"PlayContext set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b"ControlPath=" + to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
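    # A representative result (illustrative; the exact arguments depend on
    # configuration, inventory variables and the caller-supplied extras):
    #   [b'sshpass', b'-d12', b'ssh', b'-o', b'StrictHostKeyChecking=no',
    #    b'-o', b'Port=22', b'-o', b'User="root"', b'-o', b'ConnectTimeout=10',
    #    b'-o', b'ControlPath=/home/user/.ansible/cp/0123456789', b'example.com']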
def _send_initial_data(self, fh, in_data, ssh_process):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
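    # Example (illustrative): if a chunk contains a become password prompt line,
    # a become success marker, b'hi\n' and a trailing b'part' without a newline,
    # the prompt and success lines are suppressed (setting the corresponding
    # self._flags entries), b'hi\n' is returned as processed output, and
    # b'part' is returned as the remainder for the next chunk.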
def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
        # We don't use _shell.quote as this is run on the controller and is independent of the shell plugin chosen
display_cmd = u' '.join(shlex_quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
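        # Rough state progression (summary of the machine below):
        #   awaiting_prompt     -> write the become password once the prompt appears
        #   awaiting_escalation -> wait for a become success or error marker
        #   ready_to_send       -> write any pipelined in_data and close stdin
        #   awaiting_exit       -> drain stdout/stderr until the process exits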
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self._play_context.timeout
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
# TODO: bcoca would like to use SelectSelector() when open
# filehandles is low, then switch to more efficient ones when higher.
# select is faster when filehandles is low.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
                            # that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
                    # We should not see further writes to the stdout/stderr file
                    # descriptors after the process has closed, so lower the select
                    # timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if C.HOST_KEY_CHECKING:
if cmd[0] == b"sshpass" and p.returncode == 6:
                raise AnsibleError('Using an SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self._play_context.ssh_transfer_method
if ssh_transfer_method is not None:
if not (ssh_transfer_method in ('smart', 'sftp', 'scp', 'piped')):
raise AnsibleOptionsError('transfer_method needs to be one of [smart|sftp|scp|piped]')
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
scp_if_ssh = C.DEFAULT_SCP_IF_SSH
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
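        # Net effect (illustrative): 'smart' tries sftp, then scp, then piped
        # (piped is removed for Windows targets); any other valid setting pins
        # a single transfer method.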
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex_quote(in_path), shlex_quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path):
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w{1}:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self._play_context.remote_addr)
if getattr(self._shell, "_IS_WINDOWS", False):
# Become method 'runas' is done in the wrapper that is executed,
# need to disable sudoable so the bare_run is not waiting for a
# prompt that will not occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable') or self._play_context.ssh_executable
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
cmd = self._build_command(self.get_option('ssh_executable') or self._play_context.ssh_executable, 'ssh', '-O', 'stop', self.host)
controlpersist, controlpath = self._persistence_controls(cmd)
cp_arg = [a for a in cmd if a.startswith(b"ControlPath=")]
# only run the reset if the ControlPath already exists or if it isn't
# configured and ControlPersist is set
run_reset = False
if controlpersist and len(cp_arg) > 0:
cp_path = cp_arg[0].split(b"=", 1)[-1]
if os.path.exists(cp_path):
run_reset = True
elif controlpersist:
run_reset = True
if run_reset:
display.vvv(u'sending stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
def close(self):
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
meta reset_connection ignores the connection variables specified in host or group vars. As a result, connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables the way Ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
Successfully reconnecting to the VM (e.g. for credential switching).
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently other issues that also prevent reset_connection from working with the vmware_tools connector, but they are addressed in separate issues and are out of scope for this one. This issue is about the required variables not being passed to connectors by the meta helper; this may be an issue for other connectors, too.
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries in this list are matched exactly or as a start-of-string prefix;
# regexes are not accepted.
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar):
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
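# Net effect (illustrative): a task result such as {'changed': True} has its
# 'changed' key overwritten by the templated outcome of changed_when, and gains
# 'failed'/'failed_when_result' keys from failed_when when one is set.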
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
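    # Example (illustrative): for hosts: 'webservers', _hosts_cache_all ignores
    # inventory restrictions while _hosts_cache honours the play's pattern and
    # ordering; a templated hosts pattern falls back to 'all' for the former.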
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
        # return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
            # Determine the "rewind point" of the worker list. This is the index
            # at which we wrap back around to the start of the worker list.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
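    # Example (illustrative): with delegate_to set, a returned fact such as
    # 'discovered_interpreter_python' is popped from result['ansible_facts']
    # and stored only against the delegated host, never the inventory host.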
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def get_original_host(host_name):
# FIXME: this should not need x2 _inventory
host_name = to_text(host_name)
if host_name in self._inventory.hosts:
return self._inventory.hosts[host_name]
else:
return self._inventory.get_host(host_name)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
# get the original host and task. We then assign them to the TaskResult for use in callbacks/etc.
original_host = get_original_host(task_result._host)
queue_cache_entry = (original_host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._host = original_host
task_result._task = original_task
# send callbacks for 'non final' results
if '_ansible_retry' in task_result._result:
self._tqm.send_callback('v2_runner_retry', task_result)
continue
elif '_ansible_item_result' in task_result._result:
if task_result.is_failed() or task_result.is_unreachable():
self._tqm.send_callback('v2_runner_item_on_failed', task_result)
elif task_result.is_skipped():
self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
else:
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
self._tqm.send_callback('v2_runner_item_on_ok', task_result)
continue
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
state, _ = iterator.get_next_task_for_host(h, peek=True)
iterator.mark_host_failed(h)
state, new_task = iterator.get_next_task_for_host(h, peek=True)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=original_task.serialize(),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
                                # find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
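    # Both wait helpers above follow the same poll/sleep loop. A minimal,
    # generic sketch of that pattern (illustrative only; `pending`, `poll`
    # and `interval` are assumed names, not attributes of this class, and
    # the termination check is omitted for brevity):
    #
    #     def wait_until_drained(pending, poll, interval):
    #         results = []
    #         while pending() > 0:
    #             results.extend(poll())
    #             if pending() > 0:
    #                 time.sleep(interval)
    #         return results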
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
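    # Illustrative shape of the `add_host` payload consumed above (the keys
    # mirror the .get() calls in this method; the values are made up):
    #
    #     result_item['add_host'] = {
    #         'host_name': 'web01.example.com',
    #         'host_vars': {'ansible_host': '10.0.0.5'},
    #         'groups': ['webservers'],
    #     }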
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
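    # Illustrative shape of the `group_by` payload consumed above (the keys
    # mirror the .get() calls in this method; the values are made up):
    #
    #     result_item = {
    #         'add_group': 'rhel_hosts',
    #         'parent_groups': ['linux'],
    #     }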
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
# pop tags out of the include args, if they were specified there, and assign
# them to the include. If the include already had tags specified, we raise an
# error so that users know not to specify them both ways
tags = included_file._task.vars.pop('tags', [])
if isinstance(tags, string_types):
tags = tags.split(',')
if len(tags) > 0:
if len(included_file._task.tags) > 0:
raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). "
"Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement",
obj=included_file._task._ds)
display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
version='2.12', collection_name='ansible.builtin')
included_file._task.tags = tags
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
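    # Illustrative include that exercises the deprecated tag handling above;
    # the comma-separated string form is split into a list before the length
    # checks (this example is an assumption, not taken from the test suite):
    #
    #     - include_tasks: deploy.yml
    #       vars:
    #         tags: "web,db"   # deprecated in favor of task-level `tags:`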
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
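    # Illustrative input for the dispatch above: for a task written as
    # `- meta: reset_connection`, the parser stores the action name in
    # _raw_params, so task.args.get('_raw_params') == 'reset_connection'
    # and the 'reset_connection' branch runs (that branch also warns if a
    # `when:` conditional is attached, as conditionals are unsupported there).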
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
        # cmd.Cmd is an old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
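# Illustrative debugger session built from the prompt format and the commands
# defined above (host/task names and the printed output are assumptions):
#
#     [web01] TASK: install package (debug)> p result._result
#     {'failed': True, 'msg': '...'}
#     [web01] TASK: install package (debug)> task_vars['pkg_name'] = 'bash'
#     [web01] TASK: install package (debug)> u
#     [web01] TASK: install package (debug)> r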
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
`meta: reset_connection` ignores the connection variables specified within host or group vars. As a result, connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables like Ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently also other issues that prevent reset_connection from working with the vmware_tools connector, but they are addressed within separate issues and are out of scope for this one. This issue is about the required variables not being passed to connectors by the meta helper; this may be an issue for other connectors, too.
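For illustration, connection variables like the following (names assumed from the `vmware_tools` plugin options visible in the traceback) are honored when a normal task opens a connection, but are not forwarded by the meta helper:
```yaml
ansible_connection: vmware_tools
vmware_host: vcenter.example.com
```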
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
lib/ansible/utils/ssh_functions.py
|
# (c) 2016, James Tanner
# (c) 2016, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import subprocess
from ansible import constants as C
from ansible.module_utils._text import to_bytes
from ansible.module_utils.compat.paramiko import paramiko
_HAS_CONTROLPERSIST = {}
def check_for_controlpersist(ssh_executable):
try:
# If we've already checked this executable
return _HAS_CONTROLPERSIST[ssh_executable]
except KeyError:
pass
b_ssh_exec = to_bytes(ssh_executable, errors='surrogate_or_strict')
has_cp = True
try:
cmd = subprocess.Popen([b_ssh_exec, '-o', 'ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if b"Bad configuration option" in err or b"Usage:" in err:
has_cp = False
except OSError:
has_cp = False
_HAS_CONTROLPERSIST[ssh_executable] = has_cp
return has_cp
def set_default_transport():
# deal with 'smart' connection .. one time ..
if C.DEFAULT_TRANSPORT == 'smart':
# TODO: check if we can deprecate this as ssh w/o control persist should
# not be as common anymore.
# see if SSH can support ControlPersist if not use paramiko
if not check_for_controlpersist(C.ANSIBLE_SSH_EXECUTABLE) and paramiko is not None:
C.DEFAULT_TRANSPORT = "paramiko"
else:
C.DEFAULT_TRANSPORT = "ssh"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
`meta: reset_connection` ignores the connection variables specified within host or group vars. As a result, connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables like Ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently also other issues that prevent reset_connection from working with the vmware_tools connector, but they are addressed within separate issues and are out of scope for this one. This issue is about the required variables not being passed to connectors by the meta helper; this may be an issue for other connectors, too.
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
test/integration/targets/connection_windows_ssh/runme.sh
|
#!/usr/bin/env bash
set -eux
# We need to run these tests with both the powershell and cmd shell types
### cmd tests - no DefaultShell set ###
ansible -i ../../inventory.winrm localhost \
-m template \
-a "src=test_connection.inventory.j2 dest=${OUTPUT_DIR}/test_connection.inventory" \
-e "test_shell_type=cmd" \
"$@"
# https://github.com/PowerShell/Win32-OpenSSH/wiki/DefaultShell
ansible -i ../../inventory.winrm windows \
-m win_regedit \
-a "path=HKLM:\\\\SOFTWARE\\\\OpenSSH name=DefaultShell state=absent" \
"$@"
# Need to flush the connection to ensure we get a new shell for the next tests
ansible -i "${OUTPUT_DIR}/test_connection.inventory" windows \
-m meta -a "reset_connection" \
"$@"
# sftp
./windows.sh "$@"
# scp
ANSIBLE_SCP_IF_SSH=true ./windows.sh "$@"
# other tests not part of the generic connection test framework
ansible-playbook -i "${OUTPUT_DIR}/test_connection.inventory" tests.yml \
"$@"
### powershell tests - explicit DefaultShell set ###
# we do this last as the default shell on our CI instances is set to PowerShell
ansible -i ../../inventory.winrm localhost \
-m template \
-a "src=test_connection.inventory.j2 dest=${OUTPUT_DIR}/test_connection.inventory" \
-e "test_shell_type=powershell" \
"$@"
# ensure the default shell is set to PowerShell
ansible -i ../../inventory.winrm windows \
-m win_regedit \
-a "path=HKLM:\\\\SOFTWARE\\\\OpenSSH name=DefaultShell data=C:\\\\Windows\\\\System32\\\\WindowsPowerShell\\\\v1.0\\\\powershell.exe" \
"$@"
ansible -i "${OUTPUT_DIR}/test_connection.inventory" windows \
-m meta -a "reset_connection" \
"$@"
./windows.sh "$@"
ANSIBLE_SCP_IF_SSH=true ./windows.sh "$@"
ansible-playbook -i "${OUTPUT_DIR}/test_connection.inventory" tests.yml \
"$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,184 |
`meta: reset_connection` ignoring variables
|
##### SUMMARY
`meta: reset_connection` ignores the connection variables specified within host or group vars. As a result, connections fail with some connectors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/utilities/helper/meta.py`
`lib/ansible/plugins/connection/vmware_tools.py`
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/PycharmProjects/ansible-2/lib/ansible
executable location = /home/user/PycharmProjects/ansible-2/bin/ansible
python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ArchLinux
##### STEPS TO REPRODUCE
Trying to use reset_connection with e.g. the vmware_tools connector fails, as meta does not forward the required variables like Ansible does for normal connections.
```yaml
- name: reset connection
meta: reset_connection
```
##### EXPECTED RESULTS
successfully reconnecting to the vm (e.g. for credential switching)
##### ACTUAL RESULTS
```paste below
ERROR! Unexpected Exception, this is probably a bug: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
the full traceback was:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 381, in get_config_value
keys=keys, variables=variables, direct=direct)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/config/manager.py", line 456, in get_config_value_and_origin
to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/ansible-2/bin/ansible-playbook", line 111, in <module>
exit_code = cli.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/cli/playbook.py", line 121, in run
results = pbex.run()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/executor/task_queue_manager.py", line 239, in run
play_return = strategy.run(iterator, play_context)
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/linear.py", line 263, in run
results.extend(self._execute_meta(task, play_context, iterator, host))
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/strategy/__init__.py", line 1068, in _execute_meta
connection.reset()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 361, in reset
self._connect()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 344, in _connect
self._establish_connection()
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 286, in _establish_connection
"host": self.vmware_host,
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/connection/vmware_tools.py", line 246, in vmware_host
return self.get_option("vmware_host")
File "/home/user/PycharmProjects/ansible-2/lib/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'No setting was provided for required configuration plugin_type: connection plugin: vmware_tools setting: vmware_host '
```
##### Notes
There are currently also other issues that prevent reset_connection from working with the vmware_tools connector, but they are addressed within separate issues and are out of scope for this one. This issue is about the required variables not being passed to connectors by the meta helper; this may be an issue for other connectors, too.
|
https://github.com/ansible/ansible/issues/58184
|
https://github.com/ansible/ansible/pull/73708
|
43300e22798e4c9bd8ec2e321d28c5e8d2018aeb
|
935528e22e5283ee3f63a8772830d3d01f55ed8c
| 2019-06-21T12:06:45Z |
python
| 2021-03-03T20:25:16Z |
test/units/plugins/connection/test_ssh.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from io import StringIO
import pytest
from ansible import constants as C
from ansible.errors import AnsibleAuthenticationFailure
from units.compat import unittest
from units.compat.mock import patch, MagicMock, PropertyMock
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.module_utils.compat.selectors import SelectorKey, EVENT_READ
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.connection import ssh
from ansible.plugins.loader import connection_loader, become_loader
class TestConnectionBaseClass(unittest.TestCase):
def test_plugins_connection_ssh_module(self):
play_context = PlayContext()
play_context.prompt = (
'[sudo via ansible, key=ouzmdnewuhucvuaabtjmweasarviygqq] password: '
)
in_stream = StringIO()
self.assertIsInstance(ssh.Connection(play_context, in_stream), ssh.Connection)
def test_plugins_connection_ssh_basic(self):
pc = PlayContext()
new_stdin = StringIO()
conn = ssh.Connection(pc, new_stdin)
# connect just returns self, so assert that
res = conn._connect()
self.assertEqual(conn, res)
ssh.SSHPASS_AVAILABLE = False
self.assertFalse(conn._sshpass_available())
ssh.SSHPASS_AVAILABLE = True
self.assertTrue(conn._sshpass_available())
with patch('subprocess.Popen') as p:
ssh.SSHPASS_AVAILABLE = None
p.return_value = MagicMock()
self.assertTrue(conn._sshpass_available())
ssh.SSHPASS_AVAILABLE = None
p.return_value = None
p.side_effect = OSError()
self.assertFalse(conn._sshpass_available())
conn.close()
self.assertFalse(conn._connected)
def test_plugins_connection_ssh__build_command(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command('ssh', 'ssh')
def test_plugins_connection_ssh_exec_command(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._build_command.return_value = 'ssh something something'
conn._run = MagicMock()
conn._run.return_value = (0, 'stdout', 'stderr')
conn.get_option = MagicMock()
conn.get_option.return_value = True
res, stdout, stderr = conn.exec_command('ssh')
res, stdout, stderr = conn.exec_command('ssh', 'this is some data')
def test_plugins_connection_ssh__examine_output(self):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.set_become_plugin(become_loader.get('sudo'))
conn.check_password_prompt = MagicMock()
conn.check_become_success = MagicMock()
conn.check_incorrect_password = MagicMock()
conn.check_missing_password = MagicMock()
def _check_password_prompt(line):
if b'foo' in line:
return True
return False
def _check_become_success(line):
if b'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz' in line:
return True
return False
def _check_incorrect_password(line):
if b'incorrect password' in line:
return True
return False
def _check_missing_password(line):
if b'bad password' in line:
return True
return False
conn.become.check_password_prompt = MagicMock(side_effect=_check_password_prompt)
conn.become.check_become_success = MagicMock(side_effect=_check_become_success)
conn.become.check_incorrect_password = MagicMock(side_effect=_check_incorrect_password)
conn.become.check_missing_password = MagicMock(side_effect=_check_missing_password)
# test examining output for prompt
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = True
conn.become.prompt = True
def get_option(option):
if option == 'become_pass':
return 'password'
return None
conn.become.get_option = get_option
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nfoo\nline 3\nthis should be the remainder', False)
self.assertEqual(output, b'line 1\nline 2\nline 3\n')
self.assertEqual(unprocessed, b'this should be the remainder')
self.assertTrue(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for become prompt
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = u'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz'
conn.become.success = u'BECOME-SUCCESS-abcdefghijklmnopqrstuvxyz'
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nBECOME-SUCCESS-abcdefghijklmnopqrstuvxyz\nline 3\n', False)
self.assertEqual(output, b'line 1\nline 2\nline 3\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertTrue(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for become failure
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = None
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nline 2\nincorrect password\n', True)
self.assertEqual(output, b'line 1\nline 2\nincorrect password\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertTrue(conn._flags['become_error'])
self.assertFalse(conn._flags['become_nopasswd_error'])
# test examining output for missing password
conn._flags = dict(
become_prompt=False,
become_success=False,
become_error=False,
become_nopasswd_error=False,
)
pc.prompt = False
conn.become.prompt = False
pc.success_key = None
output, unprocessed = conn._examine_output(u'source', u'state', b'line 1\nbad password\n', True)
self.assertEqual(output, b'line 1\nbad password\n')
self.assertEqual(unprocessed, b'')
self.assertFalse(conn._flags['become_prompt'])
self.assertFalse(conn._flags['become_success'])
self.assertFalse(conn._flags['become_error'])
self.assertTrue(conn._flags['become_nopasswd_error'])
@patch('time.sleep')
@patch('os.path.exists')
def test_plugins_connection_ssh_put_file(self, mock_ospe, mock_sleep):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._bare_run = MagicMock()
mock_ospe.return_value = True
conn._build_command.return_value = 'some command to run'
conn._bare_run.return_value = (0, '', '')
conn.host = "some_host"
C.ANSIBLE_SSH_RETRIES = 9
# Test with C.DEFAULT_SCP_IF_SSH set to smart
# Test when SFTP works
C.DEFAULT_SCP_IF_SSH = 'smart'
expected_in_data = b' '.join((b'put', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# Test when SFTP doesn't work but SCP does
conn._bare_run.side_effect = [(1, 'stdout', 'some errors'), (0, '', '')]
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn._bare_run.side_effect = None
# test with C.DEFAULT_SCP_IF_SSH enabled
C.DEFAULT_SCP_IF_SSH = True
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn.put_file(u'/path/to/in/file/with/unicode-fΓΆγ©', u'/path/to/dest/file/with/unicode-fΓΆγ©')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
# test with C.DEFAULT_SCP_IF_SSH disabled
C.DEFAULT_SCP_IF_SSH = False
expected_in_data = b' '.join((b'put', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.put_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
expected_in_data = b' '.join((b'put',
to_bytes(shlex_quote('/path/to/in/file/with/unicode-fΓΆγ©')),
to_bytes(shlex_quote('/path/to/dest/file/with/unicode-fΓΆγ©')))) + b'\n'
conn.put_file(u'/path/to/in/file/with/unicode-fΓΆγ©', u'/path/to/dest/file/with/unicode-fΓΆγ©')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# test that a non-zero rc raises an error
conn._bare_run.return_value = (1, 'stdout', 'some errors')
self.assertRaises(AnsibleError, conn.put_file, '/path/to/bad/file', '/remote/path/to/file')
# test that a not-found path raises an error
mock_ospe.return_value = False
conn._bare_run.return_value = (0, 'stdout', '')
self.assertRaises(AnsibleFileNotFound, conn.put_file, '/path/to/bad/file', '/remote/path/to/file')
@patch('time.sleep')
def test_plugins_connection_ssh_fetch_file(self, mock_sleep):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn._build_command = MagicMock()
conn._bare_run = MagicMock()
conn._load_name = 'ssh'
conn._build_command.return_value = 'some command to run'
conn._bare_run.return_value = (0, '', '')
conn.host = "some_host"
C.ANSIBLE_SSH_RETRIES = 9
# Test with C.DEFAULT_SCP_IF_SSH set to smart
# Test when SFTP works
C.DEFAULT_SCP_IF_SSH = 'smart'
expected_in_data = b' '.join((b'get', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.set_options({})
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# Test when SFTP doesn't work but SCP does
conn._bare_run.side_effect = [(1, 'stdout', 'some errors'), (0, '', '')]
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn._bare_run.side_effect = None
# test with C.DEFAULT_SCP_IF_SSH enabled
C.DEFAULT_SCP_IF_SSH = True
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
conn.fetch_file(u'/path/to/in/file/with/unicode-fΓΆγ©', u'/path/to/dest/file/with/unicode-fΓΆγ©')
conn._bare_run.assert_called_with('some command to run', None, checkrc=False)
# test with C.DEFAULT_SCP_IF_SSH disabled
C.DEFAULT_SCP_IF_SSH = False
expected_in_data = b' '.join((b'get', to_bytes(shlex_quote('/path/to/in/file')), to_bytes(shlex_quote('/path/to/dest/file')))) + b'\n'
conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
expected_in_data = b' '.join((b'get',
to_bytes(shlex_quote('/path/to/in/file/with/unicode-fΓΆγ©')),
to_bytes(shlex_quote('/path/to/dest/file/with/unicode-fΓΆγ©')))) + b'\n'
conn.fetch_file(u'/path/to/in/file/with/unicode-fΓΆγ©', u'/path/to/dest/file/with/unicode-fΓΆγ©')
conn._bare_run.assert_called_with('some command to run', expected_in_data, checkrc=False)
# test that a non-zero rc raises an error
conn._bare_run.return_value = (1, 'stdout', 'some errors')
self.assertRaises(AnsibleError, conn.fetch_file, '/path/to/bad/file', '/remote/path/to/file')
class MockSelector(object):
def __init__(self):
self.files_watched = 0
self.register = MagicMock(side_effect=self._register)
self.unregister = MagicMock(side_effect=self._unregister)
self.close = MagicMock()
self.get_map = MagicMock(side_effect=self._get_map)
self.select = MagicMock()
def _register(self, *args, **kwargs):
self.files_watched += 1
def _unregister(self, *args, **kwargs):
self.files_watched -= 1
def _get_map(self, *args, **kwargs):
return self.files_watched
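# Illustrative standalone use of the selector double above (EVENT_READ is
# already imported at the top of this file; the file object is a stand-in):
#
#     sel = MockSelector()
#     sel.register('fake-fileobj', EVENT_READ)
#     assert sel.get_map() == 1
#     sel.unregister('fake-fileobj')
#     assert sel.get_map() == 0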
@pytest.fixture
def mock_run_env(request, mocker):
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('ssh', pc, new_stdin)
conn.set_become_plugin(become_loader.get('sudo'))
conn._send_initial_data = MagicMock()
conn._examine_output = MagicMock()
conn._terminate_process = MagicMock()
conn._load_name = 'ssh'
conn.sshpass_pipe = [MagicMock(), MagicMock()]
request.cls.pc = pc
request.cls.conn = conn
mock_popen_res = MagicMock()
mock_popen_res.poll = MagicMock()
mock_popen_res.wait = MagicMock()
mock_popen_res.stdin = MagicMock()
mock_popen_res.stdin.fileno.return_value = 1000
mock_popen_res.stdout = MagicMock()
mock_popen_res.stdout.fileno.return_value = 1001
mock_popen_res.stderr = MagicMock()
mock_popen_res.stderr.fileno.return_value = 1002
mock_popen_res.returncode = 0
request.cls.mock_popen_res = mock_popen_res
mock_popen = mocker.patch('subprocess.Popen', return_value=mock_popen_res)
request.cls.mock_popen = mock_popen
request.cls.mock_selector = MockSelector()
mocker.patch('ansible.module_utils.compat.selectors.DefaultSelector', lambda: request.cls.mock_selector)
request.cls.mock_openpty = mocker.patch('pty.openpty')
mocker.patch('fcntl.fcntl')
mocker.patch('os.write')
mocker.patch('os.close')
@pytest.mark.usefixtures('mock_run_env')
class TestSSHConnectionRun(object):
    # FIXME:
    # These tests are little more than a smoke test. They need to be enhanced
    # a bit to check that they're calling the relevant functions and to ensure
    # complete coverage of the code paths
def test_no_escalation(self):
self.mock_popen_res.stdout.read.side_effect = [b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"my_stderr"]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
assert return_code == 0
assert b_stdout == b'my_stdout\nsecond_line'
assert b_stderr == b'my_stderr'
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_with_password(self):
# test with a password set to trigger the sshpass write
self.pc.password = '12345'
self.mock_popen_res.stdout.read.side_effect = [b"some data", b"", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run(["ssh", "is", "a", "cmd"], "this is more data")
assert return_code == 0
assert b_stdout == b'some data'
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is more data'
    def _password_with_prompt_examine_output(self, source, state, b_chunk, sudoable):
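        """Stand-in side effect for Connection._examine_output that flips the
        become flags so the _run() state machine advances past the prompt and
        escalation states."""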
if state == 'awaiting_prompt':
self.conn._flags['become_prompt'] = True
elif state == 'awaiting_escalation':
self.conn._flags['become_success'] = True
return (b'', b'')
def test_password_with_prompt(self):
# test with password prompting enabled
self.pc.password = None
self.conn.become.prompt = b'Password:'
self.conn._examine_output.side_effect = self._password_with_prompt_examine_output
self.mock_popen_res.stdout.read.side_effect = [b"Password:", b"Success", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ),
(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
assert return_code == 0
assert b_stdout == b''
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
def test_password_with_become(self):
# test with some become settings
self.pc.prompt = b'Password:'
self.conn.become.prompt = b'Password:'
self.pc.become = True
self.pc.success_key = 'BECOME-SUCCESS-abcdefg'
self.conn.become._id = 'abcdefg'
self.conn._examine_output.side_effect = self._password_with_prompt_examine_output
self.mock_popen_res.stdout.read.side_effect = [b"Password:", b"BECOME-SUCCESS-abcdefg", b"abc"]
self.mock_popen_res.stderr.read.side_effect = [b"123"]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "this is input data")
self.mock_popen_res.stdin.flush.assert_called_once_with()
assert return_code == 0
assert b_stdout == b'abc'
assert b_stderr == b'123'
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is True
assert self.conn._send_initial_data.call_count == 1
assert self.conn._send_initial_data.call_args[0][1] == 'this is input data'
    def test_password_without_data(self):
# simulate no data input but Popen using new pty's fails
self.mock_popen.return_value = None
self.mock_popen.side_effect = [OSError(), self.mock_popen_res]
# simulate no data input
self.mock_openpty.return_value = (98, 99)
self.mock_popen_res.stdout.read.side_effect = [b"some data", b"", b""]
self.mock_popen_res.stderr.read.side_effect = [b""]
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[]]
self.mock_selector.get_map.side_effect = lambda: True
return_code, b_stdout, b_stderr = self.conn._run("ssh", "")
assert return_code == 0
assert b_stdout == b'some data'
assert b_stderr == b''
assert self.mock_selector.register.called is True
assert self.mock_selector.register.call_count == 2
assert self.conn._send_initial_data.called is False
@pytest.mark.usefixtures('mock_run_env')
class TestSSHConnectionRetries(object):
def test_incorrect_password(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 5)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b'']
self.mock_popen_res.stderr.read.side_effect = [b'Permission denied, please try again.\r\n']
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[5] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = [b'sshpass', b'-d41', b'ssh', b'-C']
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
exception_info = pytest.raises(AnsibleAuthenticationFailure, self.conn.exec_command, 'sshpass', 'some data')
assert exception_info.value.message == ('Invalid/incorrect username/password. Skipping remaining 5 retries to prevent account lockout: '
'Permission denied, please try again.')
assert self.mock_popen.call_count == 1
def test_retry_then_success(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 3 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
return_code, b_stdout, b_stderr = self.conn.exec_command('ssh', 'some data')
assert return_code == 0
assert b_stdout == b'my_stdout\nsecond_line'
assert b_stderr == b'my_stderr'
def test_multiple_failures(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 9)
monkeypatch.setattr('time.sleep', lambda x: None)
self.mock_popen_res.stdout.read.side_effect = [b""] * 10
self.mock_popen_res.stderr.read.side_effect = [b""] * 10
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 30)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
] * 10
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
pytest.raises(AnsibleConnectionFailure, self.conn.exec_command, 'ssh', 'some data')
assert self.mock_popen.call_count == 10
    def test_arbitrary_exceptions(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 9)
monkeypatch.setattr('time.sleep', lambda x: None)
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'ssh'
self.conn.get_option = MagicMock()
self.conn.get_option.return_value = True
self.mock_popen.side_effect = [Exception('bad')] * 10
pytest.raises(Exception, self.conn.exec_command, 'ssh', 'some data')
assert self.mock_popen.call_count == 10
def test_put_file_retries(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
monkeypatch.setattr('ansible.plugins.connection.ssh.os.path.exists', lambda x: True)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 4 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'sftp'
return_code, b_stdout, b_stderr = self.conn.put_file('/path/to/in/file', '/path/to/dest/file')
assert return_code == 0
assert b_stdout == b"my_stdout\nsecond_line"
assert b_stderr == b"my_stderr"
assert self.mock_popen.call_count == 2
def test_fetch_file_retries(self, monkeypatch):
monkeypatch.setattr(C, 'HOST_KEY_CHECKING', False)
monkeypatch.setattr(C, 'ANSIBLE_SSH_RETRIES', 3)
monkeypatch.setattr('time.sleep', lambda x: None)
monkeypatch.setattr('ansible.plugins.connection.ssh.os.path.exists', lambda x: True)
self.mock_popen_res.stdout.read.side_effect = [b"", b"my_stdout\n", b"second_line"]
self.mock_popen_res.stderr.read.side_effect = [b"", b"my_stderr"]
type(self.mock_popen_res).returncode = PropertyMock(side_effect=[255] * 4 + [0] * 4)
self.mock_selector.select.side_effect = [
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stdout, 1001, [EVENT_READ], None), EVENT_READ)],
[(SelectorKey(self.mock_popen_res.stderr, 1002, [EVENT_READ], None), EVENT_READ)],
[]
]
self.mock_selector.get_map.side_effect = lambda: True
self.conn._build_command = MagicMock()
self.conn._build_command.return_value = 'sftp'
return_code, b_stdout, b_stderr = self.conn.fetch_file('/path/to/in/file', '/path/to/dest/file')
assert return_code == 0
assert b_stdout == b"my_stdout\nsecond_line"
assert b_stderr == b"my_stderr"
assert self.mock_popen.call_count == 2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,206 |
Plays with Async leave files
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We have a playbook that uses `ansible_ssh_common_args` to proxy the command through a jumphost using ssh keys. The play we are running uses `command` to copy files between hosts and also has `async: 600` on it.
The issue is that the `/home/ansible/.ansible_async` folder is left with files that contain the output of each single async file copy, and these files are _never_ cleaned up. Over time this leads to the system running out of inodes, as for every file that was copied a status file is created, saved, and never deleted after the play.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`async` task flag
##### ANSIBLE VERSION
```paste below
ansible-playbook 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /tmp/awx_124793_637oraq9/tmpu7_ol0_8 as it did not pass its verify_file() method
Parsed /tmp/awx_124793_637oraq9/tmpu7_ol0_8 inventory source with script plugin
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Defaults
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Jumphost: CentOS 7
Target OS: AIX 7.1
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Playbook
```yaml
- name: copy files
hosts: all
gather_facts: yes
vars:
source_system_paths:
- "/test"
source_host: ""
tasks:
- name: Sync Folders cmd
become: true
become_user: ansible
command: time sudo rsync -av -e "/usr/bin/ssh -o StrictHostKeyChecking=no -i /home/ansible/.ssh/id_rsa" ansible@"{{ source_host }}":"{{ item }}" "{{ item }}"
with_items:
- "{{ source_system_paths }}"
loop_control:
extended: yes
async: 600
poll: 5
```
Run ansible with the following flags:
```
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p {{ jh_ssh_user }}@{{ jh_ip }} -i $JH_SSH_PRIVATE_KEY -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"'
```
In the home directory of the ansible user on the target host you will find `~/.ansible_async`, where a file is created for each file that was copied.
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `~/.ansible_async` folder (or its contents from the run) is deleted completely, or at least the files that were created by the playbook are deleted.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Files are never deleted, and over time the system will run out of space, inodes, and so on, as files are left behind from _all_ previous runs.
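A possible workaround until this is fixed (a minimal sketch, not our production play: it registers the job ids and assumes `async_status` with `mode: cleanup` can delete each job's status file):
```yaml
- name: Sync folders in the background, keeping the job ids
  command: rsync -av {{ item }} {{ item }}    # simplified stand-in for the real rsync command
  async: 600
  poll: 0
  loop: "{{ source_system_paths }}"
  register: sync_jobs

- name: Wait for every sync to finish
  async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ sync_jobs.results }}"
  register: sync_poll
  until: sync_poll.finished
  retries: 120
  delay: 5

- name: Delete the leftover files under ~/.ansible_async
  async_status:
    jid: "{{ item.ansible_job_id }}"
    mode: cleanup
  loop: "{{ sync_jobs.results }}"
```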
|
https://github.com/ansible/ansible/issues/73206
|
https://github.com/ansible/ansible/pull/73760
|
a165c720731243a1d26c070f532447283bb16e25
|
78d3810fdf7c579be5d9be8412844ae79d3f313b
| 2021-01-12T23:03:53Z |
python
| 2021-03-04T19:06:27Z |
changelogs/fragments/73760-async-cleanup.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,206 |
Plays with Async leave files
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We have a playbook that uses `ansible_ssh_common_args` to proxy the command through a jumphost using ssh keys. The play we are running uses `command` to copy files between hosts and also has `async: 600` on it.
The issue is that the `/home/ansible/.ansible_async` folder is left with files that contain the output of each single async file copy, and these files are _never_ cleaned up. Over time this leads to the system running out of inodes, as for every file that was copied a status file is created, saved, and never deleted after the play.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`async` task flag
##### ANSIBLE VERSION
```paste below
ansible-playbook 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /tmp/awx_124793_637oraq9/tmpu7_ol0_8 as it did not pass its verify_file() method
Parsed /tmp/awx_124793_637oraq9/tmpu7_ol0_8 inventory source with script plugin
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Defaults
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Jumphost: CentOS 7
Target OS: AIX 7.1
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Playbook
```yaml
- name: copy files
hosts: all
gather_facts: yes
vars:
source_system_paths:
- "/test"
source_host: ""
tasks:
- name: Sync Folders cmd
become: true
become_user: ansible
command: time sudo rsync -av -e "/usr/bin/ssh -o StrictHostKeyChecking=no -i /home/ansible/.ssh/id_rsa" ansible@"{{ source_host }}":"{{ item }}" "{{ item }}"
with_items:
- "{{ source_system_paths }}"
loop_control:
extended: yes
async: 600
poll: 5
```
Run ansible with the following flags:
```
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p {{ jh_ssh_user }}@{{ jh_ip }} -i $JH_SSH_PRIVATE_KEY -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"'
```
In the home directory of the ansible user on the target host you will find `~/.ansible_async`, where a file is created for each file that was copied.
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `~/.ansible_async` folder (or its contents from the run) is deleted completely, or at least the files that were created by the playbook are deleted.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Files are never deleted, and over time the system will run out of space, inodes, and so on, as files are left behind from _all_ previous runs.
|
https://github.com/ansible/ansible/issues/73206
|
https://github.com/ansible/ansible/pull/73760
|
a165c720731243a1d26c070f532447283bb16e25
|
78d3810fdf7c579be5d9be8412844ae79d3f313b
| 2021-01-12T23:03:53Z |
python
| 2021-03-04T19:06:27Z |
docs/docsite/rst/user_guide/playbooks_async.rst
|
.. _playbooks_async:
Asynchronous actions and polling
================================
By default Ansible runs tasks synchronously, holding the connection to the remote node open until the action is completed. This means that, within a playbook, each task blocks the next by default: subsequent tasks will not run until the current task completes. This behavior can create challenges. For example, a task may take longer to complete than the SSH session allows for, causing a timeout. Or you may want a long-running process to execute in the background while you perform other tasks concurrently. Asynchronous mode lets you control how long-running tasks execute.
.. contents::
:local:
Asynchronous ad hoc tasks
-------------------------
You can execute long-running operations in the background with :ref:`ad hoc tasks <intro_adhoc>`. For example, to execute ``long_running_operation`` asynchronously in the background, with a timeout (``-B``) of 3600 seconds, and without polling (``-P``)::
$ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"
To check on the job status later, use the ``async_status`` module, passing it the job ID that was returned when you ran the original job in the background::
$ ansible web1.example.com -m async_status -a "jid=488359678239.2844"
Ansible can also check on the status of your long-running job automatically with polling. In most cases, Ansible will keep the connection to your remote node open between polls. To run for 30 minutes and poll for status every 60 seconds::
$ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"
Poll mode is smart so all jobs will be started before polling begins on any machine. Be sure to use a high enough ``--forks`` value if you want to get all of your jobs started very quickly. After the time limit (in seconds) runs out (``-B``), the process on the remote nodes will be terminated.
Asynchronous mode is best suited to long-running shell commands or software upgrades. Running the copy module asynchronously, for example, does not do a background file transfer.
Asynchronous playbook tasks
---------------------------
:ref:`Playbooks <working_with_playbooks>` also support asynchronous mode and polling, with a simplified syntax. You can use asynchronous mode in playbooks to avoid connection timeouts or to avoid blocking subsequent tasks. The behavior of asynchronous mode in a playbook depends on the value of ``poll``.
Avoid connection timeouts: poll > 0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to set a longer timeout limit for a certain task in your playbook, use ``async`` with ``poll`` set to a positive value. Ansible will still block the next task in your playbook, waiting until the async task either completes, fails or times out. However, the task will only time out if it exceeds the timeout limit you set with the ``async`` parameter.
To avoid timeouts on a task, specify its maximum runtime and how frequently you would like to poll for status::
---
- hosts: all
remote_user: root
tasks:
- name: Simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
ansible.builtin.command: /bin/sleep 15
async: 45
poll: 5
.. note::
The default poll value is set by the :ref:`DEFAULT_POLL_INTERVAL` setting.
There is no default for the async time limit. If you leave off the
'async' keyword, the task runs synchronously, which is Ansible's
default.
.. note::
As of Ansible 2.3, async does not support check mode and will fail the
task when run in check mode. See :ref:`check_mode_dry` on how to
skip a task in check mode.
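For example, you can let the rest of a playbook run in check mode and skip only the async task, using the ``ansible_check_mode`` magic variable (a short sketch)::

    - name: Simulate long running op, skipped in check mode
      ansible.builtin.command: /bin/sleep 15
      async: 45
      poll: 5
      when: not ansible_check_mode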
Run tasks concurrently: poll = 0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to run multiple tasks in a playbook concurrently, use ``async`` with ``poll`` set to 0. When you set ``poll: 0``, Ansible starts the task and immediately moves on to the next task without waiting for a result. Each async task runs until it either completes, fails or times out (runs longer than its ``async`` value). The playbook run ends without checking back on async tasks.
To run a playbook task asynchronously::
---
- hosts: all
remote_user: root
tasks:
- name: Simulate long running op, allow to run for 45 sec, fire and forget
ansible.builtin.command: /bin/sleep 15
async: 45
poll: 0
.. note::
Do not specify a poll value of 0 with operations that require exclusive locks (such as yum transactions) if you expect to run other commands later in the playbook against those same resources.
.. note::
Using a higher value for ``--forks`` will result in kicking off asynchronous tasks even faster. This also increases the efficiency of polling.
If you need a synchronization point with an async task, you can register it to obtain its job ID and use the :ref:`async_status <async_status_module>` module to observe it in a later task. For example::
- name: Run an async task
ansible.builtin.yum:
name: docker-io
state: present
async: 1000
poll: 0
register: yum_sleeper
- name: Check on an async task
async_status:
jid: "{{ yum_sleeper.ansible_job_id }}"
register: job_result
until: job_result.finished
retries: 100
delay: 10
.. note::
   If the value of ``async:`` is not high enough, this will cause the
   "check on it later" task to fail because the temporary status file that
   ``async_status:`` is looking for will not have been written yet or will no longer exist.
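The status file lives under ``~/.ansible_async`` on the remote node and is not removed automatically for fire-and-forget jobs. Once you are done with a job, you can delete its status file by calling ``async_status`` with ``mode: cleanup`` (a short sketch reusing the ``yum_sleeper`` job registered above)::

    - name: Clean up the async job cache file
      async_status:
        jid: "{{ yum_sleeper.ansible_job_id }}"
        mode: cleanup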
To run multiple asynchronous tasks while limiting the number of tasks running concurrently::
#####################
# main.yml
#####################
- name: Run items asynchronously in batch of two items
vars:
sleep_durations:
- 1
- 2
- 3
- 4
- 5
durations: "{{ item }}"
include_tasks: execute_batch.yml
loop: "{{ sleep_durations | batch(2) | list }}"
#####################
# execute_batch.yml
#####################
- name: Async sleeping for batched_items
ansible.builtin.command: sleep {{ async_item }}
async: 45
poll: 0
loop: "{{ durations }}"
loop_control:
loop_var: "async_item"
register: async_results
    - name: Check async status
async_status:
jid: "{{ async_result_item.ansible_job_id }}"
loop: "{{ async_results.results }}"
loop_control:
loop_var: "async_result_item"
register: async_poll_results
until: async_poll_results.finished
retries: 30
.. seealso::
:ref:`playbooks_strategies`
Options for controlling playbook execution
:ref:`playbooks_intro`
An introduction to playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,206 |
Plays with Async leave files
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We have a playbook that uses `ansible_ssh_common_args` to proxy the command through a jumphost using ssh keys. The play we are running uses `command` to copy files between hosts and also has `async: 600` on it.
The issue is that the `/home/ansible/.ansible_async` folder is left with files that contain the output of each single async file copy, and these files are _never_ cleaned up. Over time this leads to the system running out of inodes, as for every file that was copied a status file is created, saved, and never deleted after the play.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`async` task flag
##### ANSIBLE VERSION
```paste below
ansible-playbook 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /tmp/awx_124793_637oraq9/tmpu7_ol0_8 as it did not pass its verify_file() method
Parsed /tmp/awx_124793_637oraq9/tmpu7_ol0_8 inventory source with script plugin
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Defaults
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Jumphost: CentOS 7
Target OS: AIX 7.1
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Playbook
```yaml
- name: copy files
hosts: all
gather_facts: yes
vars:
source_system_paths:
- "/test"
source_host: ""
tasks:
- name: Sync Folders cmd
become: true
become_user: ansible
command: time sudo rsync -av -e "/usr/bin/ssh -o StrictHostKeyChecking=no -i /home/ansible/.ssh/id_rsa" ansible@"{{ source_host }}":"{{ item }}" "{{ item }}"
with_items:
- "{{ source_system_paths }}"
loop_control:
extended: yes
async: 600
poll: 5
```
Run ansible with the following flags:
```
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p {{ jh_ssh_user }}@{{ jh_ip }} -i $JH_SSH_PRIVATE_KEY -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"'
```
In the home directory of the ansible user on the target host you will find `~/.ansible_async`, where a file is created for each file that was copied.
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `~/.ansible_async` folder (or its contents from the run) is deleted completely, or at least the files that were created by the playbook are deleted.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Files are never deleted, and over time the system will run out of space, inodes, and so on, as files are left behind from _all_ previous runs.
|
https://github.com/ansible/ansible/issues/73206
|
https://github.com/ansible/ansible/pull/73760
|
a165c720731243a1d26c070f532447283bb16e25
|
78d3810fdf7c579be5d9be8412844ae79d3f313b
| 2021-01-12T23:03:53Z |
python
| 2021-03-04T19:06:27Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]
__all__ = ['TaskExecutor']
class TaskTimeoutError(BaseException):
pass
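# task_timeout below is installed as the SIGALRM handler in _execute() when a
# task sets 'timeout'; TaskTimeoutError derives from BaseException so broad
# 'except Exception' blocks in the action path cannot swallow the alarm.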
def task_timeout(signum, frame):
raise TaskTimeoutError
def remove_omit(task_args, omit_token):
'''
    Recursively remove args whose value equals the ``omit_token``,
    needed now that suboptions can appear in the argument_spec
'''
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in iteritems(task_args):
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
res['skipped'] = True
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])):
res['skipped'] = False
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
if res['skipped']:
res['msg'] = 'All items skipped'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
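            """Recursively convert bytes in the result to (unsafe) text; only
            'diff' values tolerate undecodable bytes (they get replacement
            characters), anything else re-raises the UnicodeError."""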
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
            # This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % loop_var)
ran_once = False
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
self._final_q.send_task_result(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in iteritems(clear_plugins):
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, variables=variables)
context_validation_error = None
try:
# TODO: remove play_context as this does not take delegation into account, task itself should hold values
# for connection/shell/become/terminal plugin options to finalize.
# Kept for now for backwards compatibility and a few functions that are still exclusive to it.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError as e:
# loop error takes precedence
if self._loop_eval_error is not None:
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
display.v(to_text(e))
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now
if context_validation_error is not None:
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action in C._ACTION_INCLUDE_ROLE:
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
try:
self._task.post_validate(templar=templar)
except AnsibleError:
raise
except Exception:
return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc()))
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
orig_vars = templar.available_variables
else:
# just use normal host vars
cvars = orig_vars = variables
templar.available_variables = cvars
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(cvars, templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
plugin_vars = self._set_connection_options(cvars, templar)
templar.available_variables = orig_vars
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(
self._task.action, self._task.args, self._task.module_defaults, templar, self._task._ansible_internal_redirect_list
)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
delay = self._task.delay
if delay < 0:
delay = 1
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in xrange(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=variables)
except AnsibleActionSkip as e:
return dict(skipped=True, msg=to_text(e))
except AnsibleActionFail as e:
return dict(failed=True, msg=to_text(e))
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result = wrap_var(result)
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result = wrap_var(result)
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
_evaluate_changed_when_result(result)
_evaluate_failed_when_result(result)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.send_task_result(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result = wrap_var(result)
if 'ansible_facts' in result:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
        # also now add connection vars results when delegating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = cvars.get(k)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
self._final_q.send_callback(
'v2_runner_on_async_poll',
TaskResult(
self._host,
async_task,
async_result,
task_fields=self._task.dump_attrs(),
),
)
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
async_handler.cleanup(force=True)
return async_result
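# A hedged aside (illustrative, not part of the original file): this polling
# loop services tasks that combine the async and poll keywords, for example:
#
#   - command: /usr/bin/long_running_job   # illustrative task
#     async: 600   # overall time budget, consumed as time_left above
#     poll: 5      # seconds slept between async_status checks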
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, cvars, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
# use magic var if it exists, if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None:
self._play_context.connection = templar.template(cvars['ansible_connection'])
else:
self._play_context.connection = self._task.connection
# TODO: play context has logic to update the connection for 'smart'
# (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
connection_name = self._play_context.connection
# load connection
conn_type = connection_name
connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become') is not None:
become = boolean(templar.template(cvars['ansible_become']))
else:
become = self._task.become
if become:
if cvars.get('ansible_become_method'):
become_plugin = self._get_become(templar.template(cvars['ansible_become_method']))
else:
become_plugin = self._get_become(self._task.become_method)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, cvars, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
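# A hedged aside (illustrative, not part of the original file): the
# ansible_connection magic variable consulted above can come from inventory,
# play or task vars, for example:
#
#   - hosts: web
#     vars:
#       ansible_connection: ssh   # any installed connection plugin name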
def _get_persistent_connection_options(self, connection, final_vars, templar):
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
task_keys['timeout'] = self._play_context.timeout
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
return varnames
def _get_action_handler(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,206 |
Plays with Async leave files
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We have a playbook that uses `ansible_ssh_common_args`, which proxies the command through a jumphost using ssh keys. The play we are running uses `command` to copy files between hosts, and the play also has `async: 600` on it.
The issue is that the `/home/ansible/.ansible_async` folder is left with files that contain the output of each single async file copy, and these files are _never_ cleaned up. Over time this leads to the system running out of inodes, as for every file that was copied a result file is created, saved, and never deleted after the play.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`async` task flag
##### ANSIBLE VERSION
```paste below
ansible-playbook 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /tmp/awx_124793_637oraq9/tmpu7_ol0_8 as it did not pass its verify_file() method
Parsed /tmp/awx_124793_637oraq9/tmpu7_ol0_8 inventory source with script plugin
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Defaults
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Jumphost: CentOS 7
Target OS: AIX 7.1
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Playbook
```yaml
- name: copy files
hosts: all
gather_facts: yes
vars:
source_system_paths:
- "/test"
source_host: ""
tasks:
- name: Sync Folders cmd
become: true
become_user: ansible
command: time sudo rsync -av -e "/usr/bin/ssh -o StrictHostKeyChecking=no -i /home/ansible/.ssh/id_rsa" ansible@"{{ source_host }}":"{{ item }}" "{{ item }}"
with_items:
- "{{ source_system_paths }}"
loop_control:
extended: yes
async: 600
poll: 5
```
Run ansible with following flags:
```
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p {{ jh_ssh_user }}@{{ jh_ip }} -i $JH_SSH_PRIVATE_KEY -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"'
```
In the home directory of the ansible user on the target host you will find `~/.ansible_async`, where a file is created for each file that was copied.
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `~/.ansible_async` folder is deleted completely after the run, or at least the files that were created by the playbook are deleted.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Files are never deleted, and over time the system will run out of space, inodes, etc., as there are files left behind from _all_ previous runs.
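A minimal sketch of explicit cleanup for fire-and-forget jobs, assuming the `mode: cleanup` option of `async_status` behaves as documented (the `rsync` command below is illustrative):

```yaml
- name: start the copy without polling
  command: rsync -av /test remote:/test
  async: 600
  poll: 0
  register: copy_job

- name: wait for the job to finish
  async_status:
    jid: "{{ copy_job.ansible_job_id }}"
  register: job_result
  until: job_result is finished
  retries: 120
  delay: 5

- name: remove the job file from ~/.ansible_async
  async_status:
    jid: "{{ copy_job.ansible_job_id }}"
    mode: cleanup
```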
|
https://github.com/ansible/ansible/issues/73206
|
https://github.com/ansible/ansible/pull/73760
|
a165c720731243a1d26c070f532447283bb16e25
|
78d3810fdf7c579be5d9be8412844ae79d3f313b
| 2021-01-12T23:03:53Z |
python
| 2021-03-04T19:06:27Z |
test/integration/targets/async/tasks/main.yml
|
# test code for the async keyword
# (c) 2014, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- name: run a 2 second loop
shell: for i in $(seq 1 2); do echo $i ; sleep 1; done;
async: 10
poll: 1
register: async_result
- debug: var=async_result
- name: validate async returns
assert:
that:
- "'ansible_job_id' in async_result"
- "'changed' in async_result"
- "'cmd' in async_result"
- "'delta' in async_result"
- "'end' in async_result"
- "'rc' in async_result"
- "'start' in async_result"
- "'stderr' in async_result"
- "'stdout' in async_result"
- "'stdout_lines' in async_result"
- async_result.rc == 0
- async_result.finished == 1
- async_result is finished
- name: test async without polling
command: sleep 5
async: 30
poll: 0
register: async_result
- debug: var=async_result
- name: validate async without polling returns
assert:
that:
- "'ansible_job_id' in async_result"
- "'started' in async_result"
- async_result.finished == 0
- async_result is not finished
- name: test skipped task handling
command: /bin/true
async: 15
poll: 0
when: False
# test async "fire and forget, but check later"
- name: 'start a task with "fire-and-forget"'
command: sleep 3
async: 30
poll: 0
register: fnf_task
- name: assert task was successfully started
assert:
that:
- fnf_task.started == 1
- fnf_task is started
- "'ansible_job_id' in fnf_task"
- name: 'check on task started as a "fire-and-forget"'
async_status: jid={{ fnf_task.ansible_job_id }}
register: fnf_result
until: fnf_result is finished
retries: 10
delay: 1
- name: assert task was successfully checked
assert:
that:
- fnf_result.finished
- fnf_result is finished
- name: test graceful module failure
async_test:
fail_mode: graceful
async: 30
poll: 1
register: async_result
ignore_errors: true
- name: assert task failed correctly
assert:
that:
- async_result.ansible_job_id is match('\d+\.\d+')
- async_result.finished == 1
- async_result is finished
- async_result is not changed
- async_result is failed
- async_result.msg == 'failed gracefully'
- name: test exception module failure
async_test:
fail_mode: exception
async: 5
poll: 1
register: async_result
ignore_errors: true
- name: validate response
assert:
that:
- async_result.ansible_job_id is match('\d+\.\d+')
- async_result.finished == 1
- async_result is finished
- async_result.changed == false
- async_result is not changed
- async_result.failed == true
- async_result is failed
- async_result.stderr is search('failing via exception', multiline=True)
- name: test leading junk before JSON
async_test:
fail_mode: leading_junk
async: 5
poll: 1
register: async_result
- name: validate response
assert:
that:
- async_result.ansible_job_id is match('\d+\.\d+')
- async_result.finished == 1
- async_result is finished
- async_result.changed == true
- async_result is changed
- async_result is successful
- name: test trailing junk after JSON
async_test:
fail_mode: trailing_junk
async: 5
poll: 1
register: async_result
- name: validate response
assert:
that:
- async_result.ansible_job_id is match('\d+\.\d+')
- async_result.finished == 1
- async_result is finished
- async_result.changed == true
- async_result is changed
- async_result is successful
- async_result.warnings[0] is search('trailing junk after module output')
- name: test stderr handling
async_test:
fail_mode: stderr
async: 30
poll: 1
register: async_result
ignore_errors: true
- assert:
that:
- async_result.stderr == "printed to stderr\n"
# NOTE: This should report a warning that cannot be tested
- name: test async properties on non-async task
command: sleep 1
register: non_async_result
- name: validate response
assert:
that:
- non_async_result is successful
- non_async_result is changed
- non_async_result is finished
- "'ansible_job_id' not in non_async_result"
- name: set fact of custom tmp dir
set_fact:
custom_async_tmp: ~/.ansible_async_test
- name: ensure custom async tmp dir is absent
file:
path: '{{ custom_async_tmp }}'
state: absent
- block:
- name: run async task with custom dir
command: sleep 1
register: async_custom_dir
async: 5
poll: 1
vars:
ansible_async_dir: '{{ custom_async_tmp }}'
- name: check if the async temp dir is created
stat:
path: '{{ custom_async_tmp }}'
register: async_custom_dir_result
- name: assert run async task with custom dir
assert:
that:
- async_custom_dir is successful
- async_custom_dir is finished
- async_custom_dir_result.stat.exists
- name: remove custom async dir again
file:
path: '{{ custom_async_tmp }}'
state: absent
- name: run async task with custom dir - deprecated format
command: sleep 1
register: async_custom_dir_dep
async: 5
poll: 1
environment:
ANSIBLE_ASYNC_DIR: '{{ custom_async_tmp }}'
- name: check if the async temp dir is created - deprecated format
stat:
path: '{{ custom_async_tmp }}'
register: async_custom_dir_dep_result
- name: assert run async task with custom dir - deprecated format
assert:
that:
- async_custom_dir_dep is successful
- async_custom_dir_dep is finished
- async_custom_dir_dep_result.stat.exists
- name: remove custom async dir after deprecation test
file:
path: '{{ custom_async_tmp }}'
state: absent
- name: run fire and forget async task with custom dir
command: echo moo
register: async_fandf_custom_dir
async: 5
poll: 0
vars:
ansible_async_dir: '{{ custom_async_tmp }}'
- name: fail to get async status with custom dir with defaults
async_status:
jid: '{{ async_fandf_custom_dir.ansible_job_id }}'
register: async_fandf_custom_dir_fail
ignore_errors: yes
- name: get async status with custom dir using newer format
async_status:
jid: '{{ async_fandf_custom_dir.ansible_job_id }}'
register: async_fandf_custom_dir_result
vars:
ansible_async_dir: '{{ custom_async_tmp }}'
- name: get async status with custom dir - deprecated format
async_status:
jid: '{{ async_fandf_custom_dir.ansible_job_id }}'
register: async_fandf_custom_dir_dep_result
environment:
ANSIBLE_ASYNC_DIR: '{{ custom_async_tmp }}'
- name: assert run fire and forget async task with custom dir
assert:
that:
- async_fandf_custom_dir is successful
- async_fandf_custom_dir_fail is failed
- async_fandf_custom_dir_fail.msg == "could not find job"
- async_fandf_custom_dir_result is successful
- async_fandf_custom_dir_dep_result is successful
always:
- name: remove custom tmp dir after test
file:
path: '{{ custom_async_tmp }}'
state: absent
- name: Test that async has stdin
command: >
{{ ansible_python_interpreter|default('/usr/bin/python') }} -c 'import os; os.fdopen(os.dup(0), "r")'
async: 1
poll: 1
- name: run async poll callback test playbook
command: ansible-playbook {{ role_path }}/callback_test.yml
register: callback_output
- assert:
that:
- '"ASYNC POLL on localhost" in callback_output.stdout'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,883 |
[4] Make sure we document collections version questions/answers
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We listed likely questions about collection versioning in https://github.com/ansible-collections/overview/issues/37. Make sure these questions are answered in the documentation.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71883
|
https://github.com/ansible/ansible/pull/73781
|
1ac2858b5a3bf41747e6b54ea147059f85aae918
|
474f46ea565910a0009a261efbee0c2326938a9b
| 2020-09-23T15:30:02Z |
python
| 2021-03-11T19:25:18Z |
docs/docsite/rst/reference_appendices/release_and_maintenance.rst
|
.. _release_and_maintenance:
Release and maintenance
=======================
This section describes the Ansible and ``ansible-core`` releases.
Ansible is the package that most users install. ``ansible-core`` is
primarily for developers.
.. contents::
:local:
.. _release_cycle:
Ansible release cycle
-----------------------
Ansible is developed and released on a flexible release cycle.
This cycle can be extended in order to allow for larger changes to be properly
implemented and tested before a new release is made available. See :ref:`roadmaps` for upcoming release details.
For Ansible version 2.10 or later, the major release is maintained for one release cycle. When the next release comes out (for example, 2.11), the older release (2.10 in this example) is no longer maintained.
If you are using a release of Ansible that is no longer maintained, we strongly
encourage you to upgrade as soon as possible in order to benefit from the
latest features and security fixes.
Older, unmaintained versions of Ansible can contain unfixed security
vulnerabilities (*CVE*).
You can refer to the :ref:`porting guides<porting_guides>` for tips on updating your Ansible
playbooks to run on newer versions. For Ansible 2.10 and later releases, you can install the Ansible package with ``pip``. See :ref:`intro_installation_guide` for details. For older releases, you can download the Ansible release from `<https://releases.ansible.com/ansible/>`_.
This table links to the release notes for each major Ansible release. These release notes (changelogs) contain the dates and significant changes in each minor release.
================================== =================================================
Ansible Release Status
================================== =================================================
devel In development (2.11 unreleased, trunk)
`2.10 Release Notes`_ In development (2.10 alpha/beta)
`2.9 Release Notes`_ Maintained (security **and** general bug fixes)
`2.8 Release Notes`_ Maintained (security fixes)
`2.7 Release Notes`_ Unmaintained (end of life)
`2.6 Release Notes`_ Unmaintained (end of life)
`2.5 Release Notes`_ Unmaintained (end of life)
<2.5 Unmaintained (end of life)
================================== =================================================
.. Comment: devel used to point here but we're currently revamping our changelog process and have no
link to a static changelog for devel _2.6: https://github.com/ansible/ansible/blob/devel/CHANGELOG.md
.. _2.10 Release Notes:
.. _2.10: https://github.com/ansible-community/ansible-build-data/blob/main/2.10/CHANGELOG-v2.10.rst
.. _2.9 Release Notes:
.. _2.9: https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v2.9.rst
.. _2.8 Release Notes:
.. _2.8: https://github.com/ansible/ansible/blob/stable-2.8/changelogs/CHANGELOG-v2.8.rst
.. _2.7 Release Notes: https://github.com/ansible/ansible/blob/stable-2.7/changelogs/CHANGELOG-v2.7.rst
.. _2.6 Release Notes:
.. _2.6: https://github.com/ansible/ansible/blob/stable-2.6/changelogs/CHANGELOG-v2.6.rst
.. _2.5 Release Notes: https://github.com/ansible/ansible/blob/stable-2.5/changelogs/CHANGELOG-v2.5.rst
ansible-core release cycle
--------------------------
``ansible-core`` is developed and released on a flexible release cycle.
This cycle can be extended in order to allow for larger changes to be properly
implemented and tested before a new release is made available. See :ref:`roadmaps` for upcoming release details.
``ansible-core`` has a graduated maintenance structure that extends to three major releases.
For more information, read about the :ref:`development_and_stable_version_maintenance_workflow` or
see the chart in :ref:`release_schedule` for the degrees to which current releases are maintained.
If you are using a release of ``ansible-core`` that is no longer maintained, we strongly
encourage you to upgrade as soon as possible in order to benefit from the
latest features and security fixes.
Older, unmaintained versions of ``ansible-core`` can contain unfixed security
vulnerabilities (*CVE*).
You can refer to the :ref:`porting guides<porting_guides>` for tips on updating your Ansible
playbooks to run on newer versions.
You can install ``ansible-core`` with ``pip``. See :ref:`intro_installation_guide` for details.
.. note:: ``ansible-core`` maintenance continues for 3 releases. Thus the latest release receives
security and general bug fixes when it is first released, security and critical bug fixes when
the next ``ansible-core`` version is released, and **only** security fixes once the follow on
to that version is released.
.. _release_schedule:
This table links to the release notes for each major ``ansible-core``
release. These release notes (changelogs) contain the dates and
significant changes in each minor release.
============================================= ======================================================
``ansible-core`` / ``ansible-base`` Release Status
============================================= ======================================================
devel In development (ansible-core 2.11 unreleased, trunk)
`2.10 ansible-base Release Notes`_ Maintained (security **and** general bug fixes)
============================================= ======================================================
.. _2.10 ansible-base Release Notes:
.. _2.10-base: https://github.com/ansible/ansible/blob/stable-2.10/changelogs/CHANGELOG-v2.10.rst
.. _support_life:
.. _methods:
.. _development_and_stable_version_maintenance_workflow:
Development and stable version maintenance workflow
-----------------------------------------------------
The Ansible community develops and maintains Ansible and ``ansible-core`` on GitHub_.
Collection updates (new modules, plugins, features and bugfixes) will always be integrated in what will become the next version of Ansible. This work is tracked within the individual collection repositories.
Ansible and ``ansible-core`` provide bugfixes and security improvements
for the most recent major release. The previous major release of
``ansible-core`` will only receive fixes for security issues and
critical bugs. ``ansible-core`` only applies security fixes to releases
which are two releases old. This work is tracked on the
``stable-<version>`` git branches.
The fixes that land in maintained stable branches will eventually be released
as a new version when necessary.
Note that while there are no guarantees for providing fixes for unmaintained
releases of Ansible, there can sometimes be exceptions for critical issues.
.. _GitHub: https://github.com/ansible/ansible
.. _release_changelogs:
Changelogs
^^^^^^^^^^^^
We generate changelogs based on fragments. Here is the generated changelog for 2.9_ as an example. When creating new features or fixing bugs, create a changelog fragment describing the change. A changelog entry is not needed for new modules or plugins. Details for those items will be generated from the module documentation.
We've got :ref:`examples and instructions on creating changelog fragments <changelogs_how_to>` in the Community Guide.
Release candidates
^^^^^^^^^^^^^^^^^^^
Before a new release or version of Ansible or ``ansible-core`` can be
done, it will typically go through a release candidate process.
This provides the Ansible community the opportunity to test these releases and report
bugs or issues they might come across.
Ansible and ``ansible-core`` tag the first release candidate (``RC1``)
which is usually scheduled to last five business days. The final release
is done if no major bugs or issues are identified during this period.
If there are major problems with the first candidate, a second candidate will
be tagged (``RC2``) once the necessary fixes have landed.
This second candidate lasts for a shorter duration than the first.
If no problems have been reported after two business days, the final release is
done.
More release candidates can be tagged as required, so long as there are
bugs that the Ansible or ``ansible-core`` core maintainers consider
should be fixed before the final release.
.. _release_freezing:
Feature freeze
^^^^^^^^^^^^^^^
While there is a pending release candidate, the focus of core developers and
maintainers will be on fixes for the release candidate.
Merging new features or fixes that are not related to the release candidate may
be delayed in order to allow the new release to be shipped as soon as possible.
Deprecation Cycle
------------------
Sometimes we need to remove a feature, normally in favor of a reimplementation that we hope does a better job.
To do this we have a deprecation cycle. First we mark a feature as 'deprecated'. This is normally accompanied by warnings
to the user as to why we deprecated it, what alternatives they should switch to, and when (in which version) we are scheduled
to remove the feature permanently.
Ansible deprecation cycle
^^^^^^^^^^^^^^^^^^^^^^^^^
Since Ansible is a package of individual collections, the deprecation cycle depends on the collection maintainers. We recommend the collection maintainers deprecate a feature in one Ansible major version and do not remove that feature for one year, or at least until the next major Ansible version. For example, deprecate the feature in 2.10.2, and do not remove the feature until 2.12.0. Collections should use semantic versioning, such that the major collection version cannot be changed within an Ansible major version. Thus the removal should not happen before the next major Ansible release. This is up to each collection maintainer and cannot be guaranteed.
ansible-core deprecation cycle
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The cycle is normally across 4 feature releases (2.x.y, where the x marks a feature release and the y a bugfix release),
so the feature is normally removed in the 4th release after we announce the deprecation.
For example, something deprecated in 2.9 will be removed in 2.13, assuming we don't jump to 3.x before that point.
The tracking is tied to the number of releases, not the release numbering.
For modules/plugins, we keep the documentation after the removal for users of older versions.
.. seealso::
:ref:`community_committer_guidelines`
Guidelines for Ansible core contributors and maintainers
:ref:`testing_strategies`
Testing strategies
:ref:`ansible_community_guide`
Community information and contributing
`Development Mailing List <https://groups.google.com/group/ansible-devel>`_
Mailing list for development topics
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,264 |
Document actions required by the ALLOW_WORLD_READABLE_TMPFILES setting deprecation
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
End user changes required by the deprecation of `ALLOW_WORLD_READABLE_TMPFILES` in 2.10 and its future removal in 2.14 are not documented.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs/docsite/rst/user_guide/become.rst
docs/docsite/rst/reference_appendices/config.rst
##### ANSIBLE VERSION
```
ansible 2.10.2
config file = /redacted/path/to/ansible.cfg
configured module search path = ['/Users/jklaiho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.6 (default, Oct 15 2020, 14:38:39) [Clang 12.0.0 (clang-1200.0.32.2)]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### ADDITIONAL INFORMATION
`become.rst` has no information regarding the deprecation yet. `config.rst` has the following bits that serve only to confuse the end user:
> Deprecated detail
> moved to a per plugin approach that is more flexible.
>
> Deprecated alternatives
> mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
In addition, the deprecation warning itself has poor grammar and typos:
> [DEPRECATION WARNING]: ALLOW_WORLD_READABLE_TMPFILES option, moved to a per plugin approach that is more flexible. , use mostly the same config will work, but now controlled from the plugin itself and not using the general constant. instead. This feature will be removed in version 2.14. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
The docs need a clear explanation of what actions are actually required by end users before ALLOW_WORLD_READABLE_TMPFILES goes away in 2.14. What configuration needs to be put where? What is this "per plugin approach" and what plugins is this in reference to? And so on.
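A minimal sketch of the per-plugin form, assuming the `ansible_shell_allow_world_readable_temp` shell-plugin variable is the documented replacement (verify against the shell plugin docs for your version):

```yaml
- hosts: all
  become: yes
  become_user: nobody   # unprivileged become user on a host without setfacl
  vars:
    ansible_shell_allow_world_readable_temp: yes   # assumed replacement for the global constant
  tasks:
    - name: run something as the become user
      command: whoami
```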
|
https://github.com/ansible/ansible/issues/72264
|
https://github.com/ansible/ansible/pull/73825
|
27eaab310beff3f22ad56f3a8524e9e18dba63e8
|
8ef54759ec80b6bdecb58fb0d0262bc47c963f3d
| 2020-10-20T12:28:35Z |
python
| 2021-03-11T19:43:27Z |
changelogs/fragments/allow_world_readable_move.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,264 |
Document actions required by the ALLOW_WORLD_READABLE_TMPFILES setting deprecation
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
End user changes required by the deprecation of `ALLOW_WORLD_READABLE_TMPFILES` in 2.10 and its future removal in 2.14 are not documented.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs/docsite/rst/user_guide/become.rst
docs/docsite/rst/reference_appendices/config.rst
##### ANSIBLE VERSION
```
ansible 2.10.2
config file = /redacted/path/to/ansible.cfg
configured module search path = ['/Users/jklaiho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.6 (default, Oct 15 2020, 14:38:39) [Clang 12.0.0 (clang-1200.0.32.2)]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### ADDITIONAL INFORMATION
`become.rst` has no information regarding the deprecation yet. `config.rst` has the following bits that serve only to confuse the end user:
> Deprecated detail
> moved to a per plugin approach that is more flexible.
>
> Deprecated alternatives
> mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
In addition, the deprecation warning itself has poor grammar and typos:
> [DEPRECATION WARNING]: ALLOW_WORLD_READABLE_TMPFILES option, moved to a per plugin approach that is more flexible. , use mostly the same config will work, but now controlled from the plugin itself and not using the general constant. instead. This feature will be removed in version 2.14. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
The docs need a clear explanation of what actions are actually required by end users before ALLOW_WORLD_READABLE_TMPFILES goes away in 2.14. What configuration needs to be put where? What is this "per plugin approach" and what plugins is this in reference to? And so on.
|
https://github.com/ansible/ansible/issues/72264
|
https://github.com/ansible/ansible/pull/73825
|
27eaab310beff3f22ad56f3a8524e9e18dba63e8
|
8ef54759ec80b6bdecb58fb0d0262bc47c963f3d
| 2020-10-20T12:28:35Z |
python
| 2021-03-11T19:43:27Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world-readable temporary files
deprecated:
why: moved to a per plugin approach that is more flexible
version: "2.14"
alternatives: mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
default: False
description:
- This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task.
- It is useful when becoming an unprivileged user.
env: []
ini:
- {key: allow_world_readable_tmpfiles, section: defaults}
type: boolean
yaml: {key: defaults.allow_world_readable_tmpfiles}
version_added: "2.1"
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Acceptance list of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
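# A hedged aside (illustrative, not part of the original file): in practice
# pipelining is enabled either through the ini keys above, e.g. in ansible.cfg:
#
#   [connection]
#   pipelining = True
#
# or per play/host with a variable understood by connection plugins such as
# ssh (name assumed): ansible_pipelining: true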
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description: This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: enable/disable scanning sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (via the collection metadata key
`requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore`
skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution.
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: [error, warning, ignore]
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
name: Allow bare variable evaluation in conditionals
default: False
type: boolean
description:
- With this setting on (True), running conditional evaluation 'var' is treated differently than 'var.subkey' as the first is evaluated
directly while the second goes through the Jinja2 parser. But 'false' strings in 'var' get evaluated as booleans.
- With this setting off they both evaluate the same but in cases in which 'var' was 'false' (a string) it won't get evaluated as a boolean anymore.
- Currently this setting defaults to 'True' but will soon change to 'False' and the setting itself will be removed in the future.
- Expect that this setting eventually will be deprecated after 2.12
env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
ini:
- {key: conditional_bare_variables, section: defaults}
version_added: "2.8"
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
- By default Ansible will issue a warning when received from a task action (module or action plugin)
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: False
description:
- Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
- As of version 2.11, this is disabled by default.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
deprecated:
why: the command warnings feature is being removed
version: "2.14"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backwards-compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
CALLABLE_ACCEPT_LIST:
name: Template 'callable' accept list
default: []
description: Whitelist of callable methods to be made available to template evaluation
env:
- name: ANSIBLE_CALLABLE_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLABLE_ENABLED'
- name: ANSIBLE_CALLABLE_ENABLED
version_added: '2.11'
ini:
- key: callable_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callable_enabled'
- key: callable_enabled
section: defaults
version_added: '2.11'
type: list
CONTROLLER_PYTHON_WARNING:
name: Running Older than Python 3.8 Warning
default: True
description: Toggle to control showing warnings related to running a Python version
older than Python 3.8 on the controller
env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}]
ini:
- {key: controller_python_warning, section: defaults}
type: boolean
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callback_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
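# A minimal sketch of enabling extra callbacks in ansible.cfg; 'timer' and
# 'profile_tasks' are examples of callbacks shipped with Ansible that are not
# active by default:
#   [defaults]
#   callbacks_enabled = timer, profile_tasks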
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
default: ~
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering."
- "If not set, it will fallback to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module."
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
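# Example (the value 20 is illustrative, not a recommendation):
#   [defaults]
#   forks = 20
# or equivalently via the environment:
#   export ANSIBLE_FORKS=20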
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
- "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
- "The 'smart' value means each new host that has no facts discovered will be scanned,
but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices: ['smart', 'explicit', 'implicit']
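# A minimal sketch of opting into explicit gathering (values illustrative);
# with this set, plays must request facts themselves:
#   [defaults]
#   gathering = explicit
# and in a play that still needs facts:
#   - hosts: all
#     gather_facts: true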
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
default: ['all']
description:
- Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
default: 10
description:
- Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
name: Make handler M(ansible.builtin.include) static
default: False
description:
- "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'."
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
ini:
- {key: handler_includes_static, section: defaults}
type: boolean
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: none, as it's already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
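# Rather than switching to 'merge', the guidance above suggests the ``combine``
# filter; a minimal sketch (the variable names are hypothetical):
#   merged_conf: "{{ base_conf | combine(host_conf, recursive=True) }}"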
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ''
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer-style
PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- For connections that use a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
default:
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output, you can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger; this was previously done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
name: Task include static
default: False
description:
- The `include` tasks can be static or dynamic; this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task.
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
ini:
- {key: task_includes_static, section: defaults}
type: boolean
version_added: "2.1"
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: None, as it's already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
default:
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
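# Illustrative vault-id list (labels and paths are assumptions); each entry is
# label@source, where the source can be a password file, a client script, or
# the literal 'prompt':
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev.txt, prod@prompt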
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't, then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by setting this option to 'ignore'; setting it to 'error' makes a duplicate key a fatal error.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: ['warn', 'error', 'ignore']
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}]
ini:
- {key: connection_facts_modules, section: defaults}
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role or collection skeleton directory
default:
description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices: ['warning', 'error', 'ignore']
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto_legacy
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The
fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed
later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default
value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases
that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the
default behavior will change to that of ``auto`` in a future Ansible release).
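# Sketch of overriding interpreter discovery (paths and hostname are
# illustrative): globally via ansible.cfg, or per host with the inventory
# variable listed above:
#   [defaults]
#   interpreter_python = auto_silent
#   # or, in an INI inventory:
#   web1.example.com ansible_python_interpreter=/usr/bin/python3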
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
centos: &rhelish
'6': /usr/bin/python
'8': /usr/libexec/platform-python
debian:
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
oracle: *rhelish
redhat: *rhelish
rhel: *rhelish
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- /usr/bin/python
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- python2.7
- python2.6
- /usr/libexec/platform-python
- /usr/bin/python3
- python
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
- When 'never' (the default), it will allow the group name but warn about the issue.
- When 'ignore', it does the same as 'never', without issuing a warning.
- When 'always', it will replace any invalid characters with '_' (underscore) and warn the user.
- When 'silently', it does the same as 'always', without issuing a warning.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices: ['always', 'never', 'ignore', 'silently']
version_added: '2.8'
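# Illustration of the transformation (the group name is hypothetical): an INI
# inventory group such as
#   [web-servers]
#   host1.example.com
# would be rewritten to 'web_servers' under 'always' (with a warning) or
# 'silently' (without one); 'never' and 'ignore' keep the name as-is.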
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description: Toggle to turn on inventory caching
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_facts
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
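# Sketch of restricting inventory parsing to two plugins (order matters, as
# noted above):
#   [inventory]
#   enable_plugins = yaml, ini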
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or a version optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles, this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'Automatically add host keys to the known_hosts file when using the paramiko connection plugin.'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'Toggle whether paramiko should automatically search for discoverable private key files in ~/.ssh when authenticating.'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for the persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
- The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory.
- The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory.
- The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices: [ top, bottom, all ]
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
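# A minimal sketch of a filter file in the format described by
# :ref:`plugin_filtering_config` (the rejected module name is an example only):
#   ---
#   filter_version: '1.0'
#   module_blacklist:
#     - docker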
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
- Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
- Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices: ['demand', 'start']
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
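# Illustration (the variable name 'config' is hypothetical): because
# to_nice_json appears in the list above, the templated result stays a JSON
# string instead of being converted back into a dict:
#   msg: "{{ config | to_nice_json }}"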
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Whitelist for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump
host SSH settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom SSH configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,876 |
ansible-test fails to find unit tests for inventory and connection plugins when running with --changed in collections
|
### Summary
ansible-test isn't triggering unit tests for connection or inventory plugins:
```
[β mchappel@mchappel aws] $ find tests/unit/plugins/connection/
tests/unit/plugins/connection/
tests/unit/plugins/connection/__init__.py
tests/unit/plugins/connection/test_aws_ssm.py
[β mchappel@mchappel aws] $ ansible-test units --color -v --coverage-check --changed --remote-terminate always --remote-stage prod --docker --python 3.7 --base-branch main
Run command: git symbolic-ref --short HEAD
Run command: git for-each-ref refs/heads/ --format '%(refname:strip=2)'
Run command: git merge-base --fork-point main
Run command: git ls-files -z --cached
Run command: git ls-files -z --others --exclude-standard
Run command: git diff --name-only --no-renames -z 111f4308966e8cd0c66f75b787fbb6f75477d595 HEAD
Run command: git diff --name-only --no-renames -z --cached
Run command: git diff --name-only --no-renames -z
Run command: git -c core.quotePath= diff 111f4308966e8cd0c66f75b787fbb6f75477d595
Detected branch units/2021-03-12 forked from main at commit 111f4308966e8cd0c66f75b787fbb6f75477d595
Detected changes in 1 file(s).
plugins/connection/aws_ssm.py
Mapping 1 changed file(s) to tests.
NOTICE: Omitted 1 file(s) that triggered no tests.
WARNING: No tests found for detected changes.
```
```
[β mchappel@mchappel aws] $ find tests/unit/plugins/inventory/
tests/unit/plugins/inventory/
tests/unit/plugins/inventory/__init__.py
tests/unit/plugins/inventory/test_aws_ec2.py
[β mchappel@mchappel aws] $ ansible-test units --color -v --coverage-check --changed --remote-terminate always --remote-stage prod --docker --python 3.7 --base-branch unit-tests/2021-03-12
Run command: git symbolic-ref --short HEAD
Run command: git for-each-ref refs/heads/ --format '%(refname:strip=2)'
Run command: git merge-base --fork-point unit-tests/2021-03-12
Run command: git ls-files -z --cached
Run command: git ls-files -z --others --exclude-standard
Run command: git diff --name-only --no-renames -z dc70967e80b2cbbc8f6dccf4985e434cd04a9fc1 HEAD
Run command: git diff --name-only --no-renames -z --cached
Run command: git diff --name-only --no-renames -z
Run command: git -c core.quotePath= diff dc70967e80b2cbbc8f6dccf4985e434cd04a9fc1
Detected branch unit-tests/2021-03-12-tmp forked from unit-tests/2021-03-12 at commit dc70967e80b2cbbc8f6dccf4985e434cd04a9fc1
WARNING: Ignored 1 untracked file(s). Use --untracked to include them.
Detected changes in 1 file(s).
plugins/inventory/aws_ec2.py
Mapping 1 changed file(s) to tests.
NOTICE: Omitted 1 file(s) that triggered no tests.
WARNING: No tests found for detected changes.
```
This looks like it's because the ansible-core layout `test/units/` is still hard-coded in `test/lib/ansible_test/_internal/classification.py`, while collections keep their unit tests under `tests/unit/`.
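A minimal sketch of the kind of layout-aware lookup that would address this (the helper and its signature are illustrative, not the actual change in `classification.py`):
```python
import os

def plugin_units_path(unit_path, plugin_type, name, known_units_paths):
    """Map a plugin to its unit test file using the active content layout.

    unit_path is 'test/units' for ansible-core but 'tests/unit' inside a
    collection, so it must come from the layout rather than a literal.
    """
    candidate = os.path.join(unit_path, 'plugins', plugin_type, 'test_%s.py' % name)
    return candidate if candidate in known_units_paths else None
```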
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.11.0.dev0 (devel 4c5ce5a1a9) last updated 2021/02/11 13:33:25 (GMT +200)
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL8
### Steps to Reproduce
- Make a change to a connection or inventory plugin in a collection (which has unit tests)
- Open a PR
### Expected Results
Unit tests should run for the plugin
### Actual Results
Unit tests do not run
|
https://github.com/ansible/ansible/issues/73876
|
https://github.com/ansible/ansible/pull/73877
|
3a8c9242e1a9175101fb813d785f7329b6882eea
|
ed18fcac3b9d4108595d7f99ac0a765e8fdb22cf
| 2021-03-12T13:52:32Z |
python
| 2021-03-12T20:46:40Z |
changelogs/fragments/73876-ansible_test-units.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,876 |
ansible-test fails to find unit tests for inventory and connection plugins when running with --changed in collections
|
### Summary
ansible-test isn't triggering unit tests for connection or inventory plugins:
```
[β mchappel@mchappel aws] $ find tests/unit/plugins/connection/
tests/unit/plugins/connection/
tests/unit/plugins/connection/__init__.py
tests/unit/plugins/connection/test_aws_ssm.py
[β mchappel@mchappel aws] $ ansible-test units --color -v --coverage-check --changed --remote-terminate always --remote-stage prod --docker --python 3.7 --base-branch main
Run command: git symbolic-ref --short HEAD
Run command: git for-each-ref refs/heads/ --format '%(refname:strip=2)'
Run command: git merge-base --fork-point main
Run command: git ls-files -z --cached
Run command: git ls-files -z --others --exclude-standard
Run command: git diff --name-only --no-renames -z 111f4308966e8cd0c66f75b787fbb6f75477d595 HEAD
Run command: git diff --name-only --no-renames -z --cached
Run command: git diff --name-only --no-renames -z
Run command: git -c core.quotePath= diff 111f4308966e8cd0c66f75b787fbb6f75477d595
Detected branch units/2021-03-12 forked from main at commit 111f4308966e8cd0c66f75b787fbb6f75477d595
Detected changes in 1 file(s).
plugins/connection/aws_ssm.py
Mapping 1 changed file(s) to tests.
NOTICE: Omitted 1 file(s) that triggered no tests.
WARNING: No tests found for detected changes.
```
```
[β mchappel@mchappel aws] $ find tests/unit/plugins/inventory/
tests/unit/plugins/inventory/
tests/unit/plugins/inventory/__init__.py
tests/unit/plugins/inventory/test_aws_ec2.py
[β mchappel@mchappel aws] $ ansible-test units --color -v --coverage-check --changed --remote-terminate always --remote-stage prod --docker --python 3.7 --base-branch unit-tests/2021-03-12
Run command: git symbolic-ref --short HEAD
Run command: git for-each-ref refs/heads/ --format '%(refname:strip=2)'
Run command: git merge-base --fork-point unit-tests/2021-03-12
Run command: git ls-files -z --cached
Run command: git ls-files -z --others --exclude-standard
Run command: git diff --name-only --no-renames -z dc70967e80b2cbbc8f6dccf4985e434cd04a9fc1 HEAD
Run command: git diff --name-only --no-renames -z --cached
Run command: git diff --name-only --no-renames -z
Run command: git -c core.quotePath= diff dc70967e80b2cbbc8f6dccf4985e434cd04a9fc1
Detected branch unit-tests/2021-03-12-tmp forked from unit-tests/2021-03-12 at commit dc70967e80b2cbbc8f6dccf4985e434cd04a9fc1
WARNING: Ignored 1 untracked file(s). Use --untracked to include them.
Detected changes in 1 file(s).
plugins/inventory/aws_ec2.py
Mapping 1 changed file(s) to tests.
NOTICE: Omitted 1 file(s) that triggered no tests.
WARNING: No tests found for detected changes.
```
This looks like it's because the ansible-core layout `test/units/` is still hard-coded in `test/lib/ansible_test/_internal/classification.py`, while collections keep their unit tests under `tests/unit/`.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.11.0.dev0 (devel 4c5ce5a1a9) last updated 2021/02/11 13:33:25 (GMT +200)
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL8
### Steps to Reproduce
- Make a change to a connection or inventory plugin in a collection (which has unit tests)
- Open a PR
### Expected Results
Unit tests should run for the plugin
### Actual Results
Unit tests do not run
|
https://github.com/ansible/ansible/issues/73876
|
https://github.com/ansible/ansible/pull/73877
|
3a8c9242e1a9175101fb813d785f7329b6882eea
|
ed18fcac3b9d4108595d7f99ac0a765e8fdb22cf
| 2021-03-12T13:52:32Z |
python
| 2021-03-12T20:46:40Z |
test/lib/ansible_test/_internal/classification.py
|
"""Classify changes in Ansible code."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import os
import re
import time
from . import types as t
from .target import (
walk_module_targets,
walk_integration_targets,
walk_units_targets,
walk_compile_targets,
walk_sanity_targets,
load_integration_prefixes,
analyze_integration_target_dependencies,
)
from .util import (
display,
is_subdir,
)
from .import_analysis import (
get_python_module_utils_imports,
get_python_module_utils_name,
)
from .csharp_import_analysis import (
get_csharp_module_utils_imports,
get_csharp_module_utils_name,
)
from .powershell_import_analysis import (
get_powershell_module_utils_imports,
get_powershell_module_utils_name,
)
from .config import (
TestConfig,
IntegrationConfig,
)
from .metadata import (
ChangeDescription,
)
from .data import (
data_context,
)
FOCUSED_TARGET = '__focused__'
def categorize_changes(args, paths, verbose_command=None):
"""
:type args: TestConfig
:type paths: list[str]
:type verbose_command: str
:rtype: ChangeDescription
"""
mapper = PathMapper(args)
commands = {
'sanity': set(),
'units': set(),
'integration': set(),
'windows-integration': set(),
'network-integration': set(),
}
focused_commands = collections.defaultdict(set)
deleted_paths = set()
original_paths = set()
additional_paths = set()
no_integration_paths = set()
for path in paths:
if not os.path.exists(path):
deleted_paths.add(path)
continue
original_paths.add(path)
dependent_paths = mapper.get_dependent_paths(path)
if not dependent_paths:
continue
display.info('Expanded "%s" to %d dependent file(s):' % (path, len(dependent_paths)), verbosity=2)
for dependent_path in dependent_paths:
display.info(dependent_path, verbosity=2)
additional_paths.add(dependent_path)
additional_paths -= set(paths) # don't count changed paths as additional paths
if additional_paths:
display.info('Expanded %d changed file(s) into %d additional dependent file(s).' % (len(paths), len(additional_paths)))
paths = sorted(set(paths) | additional_paths)
display.info('Mapping %d changed file(s) to tests.' % len(paths))
none_count = 0
for path in paths:
tests = mapper.classify(path)
if tests is None:
focused_target = False
display.info('%s -> all' % path, verbosity=1)
tests = all_tests(args) # not categorized, run all tests
display.warning('Path not categorized: %s' % path)
else:
focused_target = tests.pop(FOCUSED_TARGET, False) and path in original_paths
tests = dict((key, value) for key, value in tests.items() if value)
if focused_target and not any('integration' in command for command in tests):
no_integration_paths.add(path) # path triggers no integration tests
if verbose_command:
result = '%s: %s' % (verbose_command, tests.get(verbose_command) or 'none')
# identify targeted integration tests (those which only target a single integration command)
if 'integration' in verbose_command and tests.get(verbose_command):
if not any('integration' in command for command in tests if command != verbose_command):
if focused_target:
result += ' (focused)'
result += ' (targeted)'
else:
result = '%s' % tests
if not tests.get(verbose_command):
# minimize excessive output from potentially thousands of files which do not trigger tests
none_count += 1
verbosity = 2
else:
verbosity = 1
if args.verbosity >= verbosity:
display.info('%s -> %s' % (path, result), verbosity=1)
for command, target in tests.items():
commands[command].add(target)
if focused_target:
focused_commands[command].add(target)
if none_count > 0 and args.verbosity < 2:
display.notice('Omitted %d file(s) that triggered no tests.' % none_count)
for command in commands:
commands[command].discard('none')
if any(target == 'all' for target in commands[command]):
commands[command] = set(['all'])
commands = dict((c, sorted(commands[c])) for c in commands if commands[c])
focused_commands = dict((c, sorted(focused_commands[c])) for c in focused_commands)
for command in commands:
if commands[command] == ['all']:
commands[command] = [] # changes require testing all targets, do not filter targets
changes = ChangeDescription()
changes.command = verbose_command
changes.changed_paths = sorted(original_paths)
changes.deleted_paths = sorted(deleted_paths)
changes.regular_command_targets = commands
changes.focused_command_targets = focused_commands
changes.no_integration_paths = sorted(no_integration_paths)
return changes
class PathMapper:
"""Map file paths to test commands and targets."""
def __init__(self, args):
"""
:type args: TestConfig
"""
self.args = args
self.integration_all_target = get_integration_all_target(self.args)
self.integration_targets = list(walk_integration_targets())
self.module_targets = list(walk_module_targets())
self.compile_targets = list(walk_compile_targets())
self.units_targets = list(walk_units_targets())
self.sanity_targets = list(walk_sanity_targets())
self.powershell_targets = [target for target in self.sanity_targets if os.path.splitext(target.path)[1] in ('.ps1', '.psm1')]
self.csharp_targets = [target for target in self.sanity_targets if os.path.splitext(target.path)[1] == '.cs']
self.units_modules = set(target.module for target in self.units_targets if target.module)
self.units_paths = set(a for target in self.units_targets for a in target.aliases)
self.sanity_paths = set(target.path for target in self.sanity_targets)
self.module_names_by_path = dict((target.path, target.module) for target in self.module_targets)
self.integration_targets_by_name = dict((target.name, target) for target in self.integration_targets)
self.integration_targets_by_alias = dict((a, target) for target in self.integration_targets for a in target.aliases)
self.posix_integration_by_module = dict((m, target.name) for target in self.integration_targets
if 'posix/' in target.aliases for m in target.modules)
self.windows_integration_by_module = dict((m, target.name) for target in self.integration_targets
if 'windows/' in target.aliases for m in target.modules)
self.network_integration_by_module = dict((m, target.name) for target in self.integration_targets
if 'network/' in target.aliases for m in target.modules)
self.prefixes = load_integration_prefixes()
self.integration_dependencies = analyze_integration_target_dependencies(self.integration_targets)
self.python_module_utils_imports = {} # populated on first use to reduce overhead when not needed
self.powershell_module_utils_imports = {} # populated on first use to reduce overhead when not needed
self.csharp_module_utils_imports = {} # populated on first use to reduce overhead when not needed
self.paths_to_dependent_targets = {}
for target in self.integration_targets:
for path in target.needs_file:
if path not in self.paths_to_dependent_targets:
self.paths_to_dependent_targets[path] = set()
self.paths_to_dependent_targets[path].add(target)
def get_dependent_paths(self, path):
"""
:type path: str
:rtype: list[str]
"""
unprocessed_paths = set(self.get_dependent_paths_non_recursive(path))
paths = set()
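# Breadth-first expansion: keep resolving dependents of newly discovered
# paths until a pass adds nothing new.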
while unprocessed_paths:
queued_paths = list(unprocessed_paths)
paths |= unprocessed_paths
unprocessed_paths = set()
for queued_path in queued_paths:
new_paths = self.get_dependent_paths_non_recursive(queued_path)
for new_path in new_paths:
if new_path not in paths:
unprocessed_paths.add(new_path)
return sorted(paths)
def get_dependent_paths_non_recursive(self, path):
"""
:type path: str
:rtype: list[str]
"""
paths = self.get_dependent_paths_internal(path)
paths += [target.path + '/' for target in self.paths_to_dependent_targets.get(path, set())]
paths = sorted(set(paths))
return paths
def get_dependent_paths_internal(self, path):
"""
:type path: str
:rtype: list[str]
"""
ext = os.path.splitext(os.path.split(path)[1])[1]
if is_subdir(path, data_context().content.module_utils_path):
if ext == '.py':
return self.get_python_module_utils_usage(path)
if ext == '.psm1':
return self.get_powershell_module_utils_usage(path)
if ext == '.cs':
return self.get_csharp_module_utils_usage(path)
if is_subdir(path, data_context().content.integration_targets_path):
return self.get_integration_target_usage(path)
return []
def get_python_module_utils_usage(self, path):
"""
:type path: str
:rtype: list[str]
"""
if not self.python_module_utils_imports:
display.info('Analyzing python module_utils imports...')
before = time.time()
self.python_module_utils_imports = get_python_module_utils_imports(self.compile_targets)
after = time.time()
display.info('Processed %d python module_utils in %d second(s).' % (len(self.python_module_utils_imports), after - before))
name = get_python_module_utils_name(path)
return sorted(self.python_module_utils_imports[name])
def get_powershell_module_utils_usage(self, path):
"""
:type path: str
:rtype: list[str]
"""
if not self.powershell_module_utils_imports:
display.info('Analyzing powershell module_utils imports...')
before = time.time()
self.powershell_module_utils_imports = get_powershell_module_utils_imports(self.powershell_targets)
after = time.time()
display.info('Processed %d powershell module_utils in %d second(s).' % (len(self.powershell_module_utils_imports), after - before))
name = get_powershell_module_utils_name(path)
return sorted(self.powershell_module_utils_imports[name])
def get_csharp_module_utils_usage(self, path):
"""
:type path: str
:rtype: list[str]
"""
if not self.csharp_module_utils_imports:
display.info('Analyzing C# module_utils imports...')
before = time.time()
self.csharp_module_utils_imports = get_csharp_module_utils_imports(self.powershell_targets, self.csharp_targets)
after = time.time()
display.info('Processed %d C# module_utils in %d second(s).' % (len(self.csharp_module_utils_imports), after - before))
name = get_csharp_module_utils_name(path)
return sorted(self.csharp_module_utils_imports[name])
def get_integration_target_usage(self, path):
"""
:type path: str
:rtype: list[str]
"""
target_name = path.split('/')[3]
dependents = [os.path.join(data_context().content.integration_targets_path, target) + os.path.sep
for target in sorted(self.integration_dependencies.get(target_name, set()))]
return dependents
def classify(self, path):
"""
:type path: str
:rtype: dict[str, str] | None
"""
result = self._classify(path)
# run all tests when no result given
if result is None:
return None
# run sanity on path unless result specified otherwise
if path in self.sanity_paths and 'sanity' not in result:
result['sanity'] = path
return result
def _classify(self, path): # type: (str) -> t.Optional[t.Dict[str, str]]
"""Return the classification for the given path."""
if data_context().content.is_ansible:
return self._classify_ansible(path)
if data_context().content.collection:
return self._classify_collection(path)
return None
def _classify_common(self, path): # type: (str) -> t.Optional[t.Dict[str, str]]
"""Return the classification for the given path using rules common to all layouts."""
dirname = os.path.dirname(path)
filename = os.path.basename(path)
name, ext = os.path.splitext(filename)
minimal = {}
if os.path.sep not in path:
if filename in (
'azure-pipelines.yml',
'shippable.yml',
):
return all_tests(self.args) # test infrastructure, run all tests
if is_subdir(path, '.azure-pipelines'):
return all_tests(self.args) # test infrastructure, run all tests
if is_subdir(path, '.github'):
return minimal
if is_subdir(path, data_context().content.integration_targets_path):
if not os.path.exists(path):
return minimal
target = self.integration_targets_by_name.get(path.split('/')[3])
if not target:
display.warning('Unexpected non-target found: %s' % path)
return minimal
if 'hidden/' in target.aliases:
return minimal # already expanded using get_dependent_paths
return {
'integration': target.name if 'posix/' in target.aliases else None,
'windows-integration': target.name if 'windows/' in target.aliases else None,
'network-integration': target.name if 'network/' in target.aliases else None,
FOCUSED_TARGET: True,
}
if is_subdir(path, data_context().content.integration_path):
if dirname == data_context().content.integration_path:
for command in (
'integration',
'windows-integration',
'network-integration',
):
if name == command and ext == '.cfg':
return {
command: self.integration_all_target,
}
if name == command + '.requirements' and ext == '.txt':
return {
command: self.integration_all_target,
}
return {
'integration': self.integration_all_target,
'windows-integration': self.integration_all_target,
'network-integration': self.integration_all_target,
}
if is_subdir(path, data_context().content.sanity_path):
return {
'sanity': 'all', # test infrastructure, run all sanity checks
}
if is_subdir(path, data_context().content.unit_path):
if path in self.units_paths:
return {
'units': path,
}
# changes to files which are not unit tests should trigger tests from the nearest parent directory
test_path = os.path.dirname(path)
while test_path:
if test_path + '/' in self.units_paths:
return {
'units': test_path + '/',
}
test_path = os.path.dirname(test_path)
if is_subdir(path, data_context().content.module_path):
module_name = self.module_names_by_path.get(path)
if module_name:
return {
'units': module_name if module_name in self.units_modules else None,
'integration': self.posix_integration_by_module.get(module_name) if ext == '.py' else None,
'windows-integration': self.windows_integration_by_module.get(module_name) if ext in ['.cs', '.ps1'] else None,
'network-integration': self.network_integration_by_module.get(module_name),
FOCUSED_TARGET: True,
}
return minimal
if is_subdir(path, data_context().content.module_utils_path):
if ext == '.cs':
return minimal # already expanded using get_dependent_paths
if ext == '.psm1':
return minimal # already expanded using get_dependent_paths
if ext == '.py':
return minimal # already expanded using get_dependent_paths
if is_subdir(path, data_context().content.plugin_paths['action']):
if ext == '.py':
if name.startswith('net_'):
network_target = 'network/.*_%s' % name[4:]
if any(re.search(r'^%s$' % network_target, alias) for alias in self.integration_targets_by_alias):
return {
'network-integration': network_target,
'units': 'all',
}
return {
'network-integration': self.integration_all_target,
'units': 'all',
}
if self.prefixes.get(name) == 'network':
network_platform = name
elif name.endswith('_config') and self.prefixes.get(name[:-7]) == 'network':
network_platform = name[:-7]
elif name.endswith('_template') and self.prefixes.get(name[:-9]) == 'network':
network_platform = name[:-9]
else:
network_platform = None
if network_platform:
network_target = 'network/%s/' % network_platform
if network_target in self.integration_targets_by_alias:
return {
'network-integration': network_target,
'units': 'all',
}
display.warning('Integration tests for "%s" not found.' % network_target, unique=True)
return {
'units': 'all',
}
if is_subdir(path, data_context().content.plugin_paths['connection']):
if name == '__init__':
return {
'integration': self.integration_all_target,
'windows-integration': self.integration_all_target,
'network-integration': self.integration_all_target,
'units': 'test/units/plugins/connection/',
}
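# NOTE: the unit test paths above and below use the hard-coded ansible-core
# layout ('test/units/...'); collections keep unit tests under 'tests/unit/',
# so their connection plugin tests are never matched (see issue #73876).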
units_path = 'test/units/plugins/connection/test_%s.py' % name
if units_path not in self.units_paths:
units_path = None
integration_name = 'connection_%s' % name
if integration_name not in self.integration_targets_by_name:
integration_name = None
windows_integration_name = 'connection_windows_%s' % name
if windows_integration_name not in self.integration_targets_by_name:
windows_integration_name = None
# entire integration test commands depend on these connection plugins
if name in ['winrm', 'psrp']:
return {
'windows-integration': self.integration_all_target,
'units': units_path,
}
if name == 'local':
return {
'integration': self.integration_all_target,
'network-integration': self.integration_all_target,
'units': units_path,
}
if name == 'network_cli':
return {
'network-integration': self.integration_all_target,
'units': units_path,
}
if name == 'paramiko_ssh':
return {
'integration': integration_name,
'network-integration': self.integration_all_target,
'units': units_path,
}
# other connection plugins have isolated integration and unit tests
return {
'integration': integration_name,
'windows-integration': windows_integration_name,
'units': units_path,
}
if is_subdir(path, data_context().content.plugin_paths['doc_fragments']):
return {
'sanity': 'all',
}
if is_subdir(path, data_context().content.plugin_paths['inventory']):
if name == '__init__':
return all_tests(self.args) # broad impact, run all tests
# These inventory plugins are enabled by default (see INVENTORY_ENABLED).
# Without dedicated integration tests for these we must rely on the incidental coverage from other tests.
test_all = [
'host_list',
'script',
'yaml',
'ini',
'auto',
]
if name in test_all:
posix_integration_fallback = get_integration_all_target(self.args)
else:
posix_integration_fallback = None
target = self.integration_targets_by_name.get('inventory_%s' % name)
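# NOTE: same hard-coded 'test/units/' prefix as the connection branch above,
# which likewise misses collection layouts ('tests/unit/').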
units_path = 'test/units/plugins/inventory/test_%s.py' % name
if units_path not in self.units_paths:
units_path = None
return {
'integration': target.name if target and 'posix/' in target.aliases else posix_integration_fallback,
'windows-integration': target.name if target and 'windows/' in target.aliases else None,
'network-integration': target.name if target and 'network/' in target.aliases else None,
'units': units_path,
FOCUSED_TARGET: target is not None,
}
if is_subdir(path, data_context().content.plugin_paths['filter']):
return self._simple_plugin_tests('filter', name)
if is_subdir(path, data_context().content.plugin_paths['lookup']):
return self._simple_plugin_tests('lookup', name)
if (is_subdir(path, data_context().content.plugin_paths['terminal']) or
is_subdir(path, data_context().content.plugin_paths['cliconf']) or
is_subdir(path, data_context().content.plugin_paths['netconf'])):
if ext == '.py':
if name in self.prefixes and self.prefixes[name] == 'network':
network_target = 'network/%s/' % name
if network_target in self.integration_targets_by_alias:
return {
'network-integration': network_target,
'units': 'all',
}
display.warning('Integration tests for "%s" not found.' % network_target, unique=True)
return {
'units': 'all',
}
return {
'network-integration': self.integration_all_target,
'units': 'all',
}
if is_subdir(path, data_context().content.plugin_paths['test']):
return self._simple_plugin_tests('test', name)
return None
def _classify_collection(self, path): # type: (str) -> t.Optional[t.Dict[str, str]]
"""Return the classification for the given path using rules specific to collections."""
result = self._classify_common(path)
if result is not None:
return result
filename = os.path.basename(path)
dummy, ext = os.path.splitext(filename)
minimal = {}
if path.startswith('changelogs/'):
return minimal
if path.startswith('docs/'):
return minimal
if '/' not in path:
if path in (
'.gitignore',
'COPYING',
'LICENSE',
'Makefile',
):
return minimal
if ext in (
'.in',
'.md',
'.rst',
'.toml',
'.txt',
):
return minimal
return None
def _classify_ansible(self, path): # type: (str) -> t.Optional[t.Dict[str, str]]
"""Return the classification for the given path using rules specific to Ansible."""
if path.startswith('test/units/compat/'):
return {
'units': 'test/units/',
}
result = self._classify_common(path)
if result is not None:
return result
dirname = os.path.dirname(path)
filename = os.path.basename(path)
name, ext = os.path.splitext(filename)
minimal = {}
if path.startswith('bin/'):
return all_tests(self.args) # broad impact, run all tests
if path.startswith('changelogs/'):
return minimal
if path.startswith('contrib/'):
return {
'units': 'test/units/contrib/'
}
if path.startswith('docs/'):
return minimal
if path.startswith('examples/'):
if path == 'examples/scripts/ConfigureRemotingForAnsible.ps1':
return {
'windows-integration': 'connection_winrm',
}
return minimal
if path.startswith('hacking/'):
return minimal
if path.startswith('lib/ansible/executor/powershell/'):
units_path = 'test/units/executor/powershell/'
if units_path not in self.units_paths:
units_path = None
return {
'windows-integration': self.integration_all_target,
'units': units_path,
}
if path.startswith('lib/ansible/'):
return all_tests(self.args) # broad impact, run all tests
if path.startswith('licenses/'):
return minimal
if path.startswith('packaging/'):
if path.startswith('packaging/requirements/'):
if name.startswith('requirements-') and ext == '.txt':
component = name.split('-', 1)[1]
candidates = (
'cloud/%s/' % component,
)
for candidate in candidates:
if candidate in self.integration_targets_by_alias:
return {
'integration': candidate,
}
return all_tests(self.args) # broad impact, run all tests
return minimal
if path.startswith('test/ansible_test/'):
return minimal # these tests are not invoked from ansible-test
if path.startswith('test/lib/ansible_test/config/'):
if name.startswith('cloud-config-'):
# noinspection PyTypeChecker
cloud_target = 'cloud/%s/' % name.split('-')[2].split('.')[0]
if cloud_target in self.integration_targets_by_alias:
return {
'integration': cloud_target,
}
if path.startswith('test/lib/ansible_test/_data/completion/'):
if path == 'test/lib/ansible_test/_data/completion/docker.txt':
return all_tests(self.args, force=True) # force all tests due to risk of breaking changes in new test environment
if path.startswith('test/lib/ansible_test/_internal/cloud/'):
cloud_target = 'cloud/%s/' % name
if cloud_target in self.integration_targets_by_alias:
return {
'integration': cloud_target,
}
return all_tests(self.args) # test infrastructure, run all tests
if path.startswith('test/lib/ansible_test/_internal/sanity/'):
return {
'sanity': 'all', # test infrastructure, run all sanity checks
'integration': 'ansible-test', # run ansible-test self tests
}
if path.startswith('test/lib/ansible_test/_data/sanity/'):
return {
'sanity': 'all', # test infrastructure, run all sanity checks
'integration': 'ansible-test', # run ansible-test self tests
}
if path.startswith('test/lib/ansible_test/_internal/units/'):
return {
'units': 'all', # test infrastructure, run all unit tests
'integration': 'ansible-test', # run ansible-test self tests
}
if path.startswith('test/lib/ansible_test/_data/units/'):
return {
'units': 'all', # test infrastructure, run all unit tests
'integration': 'ansible-test', # run ansible-test self tests
}
if path.startswith('test/lib/ansible_test/_data/pytest/'):
return {
'units': 'all', # test infrastructure, run all unit tests
'integration': 'ansible-test', # run ansible-test self tests
}
if path.startswith('test/lib/ansible_test/_data/requirements/'):
if name in (
'integration',
'network-integration',
'windows-integration',
):
return {
name: self.integration_all_target,
}
if name in (
'sanity',
'units',
):
return {
name: 'all',
}
if name.startswith('integration.cloud.'):
cloud_target = 'cloud/%s/' % name.split('.')[2]
if cloud_target in self.integration_targets_by_alias:
return {
'integration': cloud_target,
}
if path.startswith('test/lib/'):
return all_tests(self.args) # test infrastructure, run all tests
if path.startswith('test/support/'):
return all_tests(self.args) # test infrastructure, run all tests
if path.startswith('test/utils/shippable/'):
if dirname == 'test/utils/shippable':
test_map = {
'cloud.sh': 'integration:cloud/',
'linux.sh': 'integration:all',
'network.sh': 'network-integration:all',
'remote.sh': 'integration:all',
'sanity.sh': 'sanity:all',
'units.sh': 'units:all',
'windows.sh': 'windows-integration:all',
}
test_match = test_map.get(filename)
if test_match:
test_command, test_target = test_match.split(':')
return {
test_command: test_target,
}
cloud_target = 'cloud/%s/' % name
if cloud_target in self.integration_targets_by_alias:
return {
'integration': cloud_target,
}
return all_tests(self.args) # test infrastructure, run all tests
if path.startswith('test/utils/'):
return minimal
if '/' not in path:
if path in (
'.gitattributes',
'.gitignore',
'.mailmap',
'COPYING',
'Makefile',
):
return minimal
if path in (
'setup.py',
):
return all_tests(self.args) # broad impact, run all tests
if ext in (
'.in',
'.md',
'.rst',
'.toml',
'.txt',
):
return minimal
return None # unknown, will result in fall-back to run all tests
def _simple_plugin_tests(self, plugin_type, plugin_name): # type: (str, str) -> t.Dict[str, t.Optional[str]]
"""
Return tests for the given plugin type and plugin name.
This function is useful for plugin types which do not require special processing.
"""
if plugin_name == '__init__':
return all_tests(self.args, True)
integration_target = self.integration_targets_by_name.get('%s_%s' % (plugin_type, plugin_name))
if integration_target:
integration_name = integration_target.name
else:
integration_name = None
units_path = os.path.join(data_context().content.unit_path, 'plugins', plugin_type, 'test_%s.py' % plugin_name)
if units_path not in self.units_paths:
units_path = None
return dict(
integration=integration_name,
units=units_path,
)
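# Unlike the connection/inventory branches in _classify_common, this helper
# builds the unit test path from the active layout via
# data_context().content.unit_path, so it works for ansible-core and collections.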
def all_tests(args, force=False):
"""
:type args: TestConfig
:type force: bool
:rtype: dict[str, str]
"""
if force:
integration_all_target = 'all'
else:
integration_all_target = get_integration_all_target(args)
return {
'sanity': 'all',
'units': 'all',
'integration': integration_all_target,
'windows-integration': integration_all_target,
'network-integration': integration_all_target,
}
def get_integration_all_target(args):
"""
:type args: TestConfig
:rtype: str
"""
if isinstance(args, IntegrationConfig):
return args.changed_all_target
return 'all'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,875 |
Setup facts - RHV 4.4.3 virtualization
|
##### SUMMARY
<!--- Explain the problem briefly below -->
Changes in the RHV engine altered the reported sys_vendor, so the setup virtualization facts are no longer correct.
(https://bugzilla.redhat.com/show_bug.cgi?id=1904085)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
setup - virtualization facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mnecas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
You need RHV 4.4.3 and a VM inside it, on which you will run:
`ansible -m setup host -k -u root | grep virtualization`
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
`ansible -m setup host -k -u root | grep virtualization`
```
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "RHV",
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
"ansible_virtualization_role": "NA",
"ansible_virtualization_type": "NA",
```
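A minimal sketch of the kind of check needed (illustrative only; the exact vendor/product strings are assumptions based on this report and the linked Bugzilla):
```python
def classify_rhv_guest(sys_vendor, product_name):
    """Return guest facts for RHV 4.4.3+ guests, which report sys_vendor
    'Red Hat' instead of the values currently matched."""
    if sys_vendor == 'Red Hat' and product_name in ('RHV', 'RHEV Hypervisor'):
        return {'virtualization_type': 'RHV', 'virtualization_role': 'guest'}
    return None
```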
|
https://github.com/ansible/ansible/issues/72875
|
https://github.com/ansible/ansible/pull/72876
|
9ec4e08534209a2076023489c4a3f50f464d3b7a
|
7099a5f4483a3bccd67bee63e9deac242f560486
| 2020-12-07T11:39:09Z |
python
| 2021-03-15T09:50:44Z |
changelogs/fragments/72876-setup-facts-add-redhat-vendor.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,875 |
Setup facts - RHV 4.4.3 virtualization
|
##### SUMMARY
<!--- Explain the problem briefly below -->
Changes in the RHV engine altered the reported sys_vendor, so the setup virtualization facts are no longer correct.
(https://bugzilla.redhat.com/show_bug.cgi?id=1904085)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
setup - virtualization facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mnecas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
You need RHV 4.4.3 and a VM inside it, on which you will run:
`ansible -m setup host -k -u root | grep virtualization`
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
`ansible -m setup host -k -u root | grep virtualization`
```
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "RHV",
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
"ansible_virtualization_role": "NA",
"ansible_virtualization_type": "NA",
```
|
https://github.com/ansible/ansible/issues/72875
|
https://github.com/ansible/ansible/pull/72876
|
9ec4e08534209a2076023489c4a3f50f464d3b7a
|
7099a5f4483a3bccd67bee63e9deac242f560486
| 2020-12-07T11:39:09Z |
python
| 2021-03-15T09:50:44Z |
lib/ansible/module_utils/facts/virtual/linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import re
from ansible.module_utils.facts.virtual.base import Virtual, VirtualCollector
from ansible.module_utils.facts.utils import get_file_content, get_file_lines
class LinuxVirtual(Virtual):
"""
This is a Linux-specific subclass of Virtual. It defines
- virtualization_type
- virtualization_role
"""
platform = 'Linux'
# For more information, check: http://people.redhat.com/~rjones/virt-what/
def get_virtual_facts(self):
virtual_facts = {}
# We want to maintain compatibility with the old "virtualization_type"
# and "virtualization_role" entries, so we need to track if we found
# them. We won't return them until the end, but if we found them early,
# we should avoid updating them again.
found_virt = False
# But as we go along, we also want to track virt tech the new way.
host_tech = set()
guest_tech = set()
# lxc/docker
if os.path.exists('/proc/1/cgroup'):
for line in get_file_lines('/proc/1/cgroup'):
if re.search(r'/docker(/|-[0-9a-f]+\.scope)', line):
guest_tech.add('docker')
if not found_virt:
virtual_facts['virtualization_type'] = 'docker'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('/lxc/', line) or re.search('/machine.slice/machine-lxc', line):
guest_tech.add('lxc')
if not found_virt:
virtual_facts['virtualization_type'] = 'lxc'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('/system.slice/containerd.service', line):
guest_tech.add('containerd')
if not found_virt:
virtual_facts['virtualization_type'] = 'containerd'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# lxc does not always appear in cgroups anymore but sets 'container=lxc' environment var, requires root privs
if os.path.exists('/proc/1/environ'):
for line in get_file_lines('/proc/1/environ', line_sep='\x00'):
if re.search('container=lxc', line):
guest_tech.add('lxc')
if not found_virt:
virtual_facts['virtualization_type'] = 'lxc'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('container=podman', line):
guest_tech.add('podman')
if not found_virt:
virtual_facts['virtualization_type'] = 'podman'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('^container=.', line):
guest_tech.add('container')
if not found_virt:
virtual_facts['virtualization_type'] = 'container'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/proc/vz') and not os.path.exists('/proc/lve'):
virtual_facts['virtualization_type'] = 'openvz'
if os.path.exists('/proc/bc'):
host_tech.add('openvz')
if not found_virt:
virtual_facts['virtualization_role'] = 'host'
else:
guest_tech.add('openvz')
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
systemd_container = get_file_content('/run/systemd/container')
if systemd_container:
guest_tech.add(systemd_container)
if not found_virt:
virtual_facts['virtualization_type'] = systemd_container
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# ensure 'container' guest_tech is appropriately set
if guest_tech.intersection(set(['docker', 'lxc', 'podman', 'openvz', 'containerd'])) or systemd_container:
guest_tech.add('container')
if os.path.exists("/proc/xen"):
is_xen_host = False
try:
for line in get_file_lines('/proc/xen/capabilities'):
if "control_d" in line:
is_xen_host = True
except IOError:
pass
if is_xen_host:
host_tech.add('xen')
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
virtual_facts['virtualization_role'] = 'host'
else:
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# assume guest for this block
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
product_name = get_file_content('/sys/devices/virtual/dmi/id/product_name')
if product_name in ('KVM', 'KVM Server', 'Bochs', 'AHV'):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
if product_name == 'RHEV Hypervisor':
guest_tech.add('RHEV')
if not found_virt:
virtual_facts['virtualization_type'] = 'RHEV'
found_virt = True
if product_name in ('VMware Virtual Platform', 'VMware7,1'):
guest_tech.add('VMware')
if not found_virt:
virtual_facts['virtualization_type'] = 'VMware'
found_virt = True
if product_name in ('OpenStack Compute', 'OpenStack Nova'):
guest_tech.add('openstack')
if not found_virt:
virtual_facts['virtualization_type'] = 'openstack'
found_virt = True
bios_vendor = get_file_content('/sys/devices/virtual/dmi/id/bios_vendor')
if bios_vendor == 'Xen':
guest_tech.add('xen')
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
found_virt = True
if bios_vendor == 'innotek GmbH':
guest_tech.add('virtualbox')
if not found_virt:
virtual_facts['virtualization_type'] = 'virtualbox'
found_virt = True
if bios_vendor in ('Amazon EC2', 'DigitalOcean', 'Hetzner'):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
sys_vendor = get_file_content('/sys/devices/virtual/dmi/id/sys_vendor')
KVM_SYS_VENDORS = ('QEMU', 'oVirt', 'Amazon EC2', 'DigitalOcean', 'Google', 'Scaleway', 'Nutanix')
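# NOTE: RHV 4.4.3+ guests report sys_vendor 'Red Hat' (rhbz#1904085), which no
# branch below matches, so those guests currently fall through to 'NA'.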
if sys_vendor in KVM_SYS_VENDORS:
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
if sys_vendor == 'KubeVirt':
guest_tech.add('KubeVirt')
if not found_virt:
virtual_facts['virtualization_type'] = 'KubeVirt'
found_virt = True
# FIXME: This does also match hyperv
if sys_vendor == 'Microsoft Corporation':
guest_tech.add('VirtualPC')
if not found_virt:
virtual_facts['virtualization_type'] = 'VirtualPC'
found_virt = True
if sys_vendor == 'Parallels Software International Inc.':
guest_tech.add('parallels')
if not found_virt:
virtual_facts['virtualization_type'] = 'parallels'
found_virt = True
if sys_vendor == 'OpenStack Foundation':
guest_tech.add('openstack')
if not found_virt:
virtual_facts['virtualization_type'] = 'openstack'
found_virt = True
# unassume guest
if not found_virt:
del virtual_facts['virtualization_role']
if os.path.exists('/proc/self/status'):
for line in get_file_lines('/proc/self/status'):
if re.match(r'^VxID:\s+\d+', line):
if not found_virt:
virtual_facts['virtualization_type'] = 'linux_vserver'
if re.match(r'^VxID:\s+0', line):
host_tech.add('linux_vserver')
if not found_virt:
virtual_facts['virtualization_role'] = 'host'
else:
guest_tech.add('linux_vserver')
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/proc/cpuinfo'):
for line in get_file_lines('/proc/cpuinfo'):
if re.match('^model name.*QEMU Virtual CPU', line):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*User Mode Linux', line):
guest_tech.add('uml')
if not found_virt:
virtual_facts['virtualization_type'] = 'uml'
elif re.match('^model name.*UML', line):
guest_tech.add('uml')
if not found_virt:
virtual_facts['virtualization_type'] = 'uml'
elif re.match('^machine.*CHRP IBM pSeries .emulated by qemu.', line):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*PowerVM Lx86', line):
guest_tech.add('powervm_lx86')
if not found_virt:
virtual_facts['virtualization_type'] = 'powervm_lx86'
elif re.match('^vendor_id.*IBM/S390', line):
guest_tech.add('PR/SM')
if not found_virt:
virtual_facts['virtualization_type'] = 'PR/SM'
lscpu = self.module.get_bin_path('lscpu')
if lscpu:
rc, out, err = self.module.run_command(["lscpu"])
if rc == 0:
for line in out.splitlines():
data = line.split(":", 1)
key = data[0].strip()
if key == 'Hypervisor':
tech = data[1].strip()
guest_tech.add(tech)
if not found_virt:
virtual_facts['virtualization_type'] = tech
else:
guest_tech.add('ibm_systemz')
if not found_virt:
virtual_facts['virtualization_type'] = 'ibm_systemz'
else:
continue
if virtual_facts['virtualization_type'] == 'PR/SM':
if not found_virt:
virtual_facts['virtualization_role'] = 'LPAR'
else:
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
if not found_virt:
found_virt = True
# Beware that we can have both kvm and virtualbox running on a single system
if os.path.exists("/proc/modules") and os.access('/proc/modules', os.R_OK):
modules = []
for line in get_file_lines("/proc/modules"):
data = line.split(" ", 1)
modules.append(data[0])
if 'kvm' in modules:
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'host'
if os.path.isdir('/rhev/'):
# Check whether this is a RHEV hypervisor (is vdsm running ?)
for f in glob.glob('/proc/[0-9]*/comm'):
try:
with open(f) as virt_fh:
comm_content = virt_fh.read().rstrip()
if comm_content in ('vdsm', 'vdsmd'):
# We add both kvm and RHEV to host_tech in this case.
# It's accurate. RHEV uses KVM.
host_tech.add('RHEV')
if not found_virt:
virtual_facts['virtualization_type'] = 'RHEV'
break
except Exception:
pass
found_virt = True
if 'vboxdrv' in modules:
host_tech.add('virtualbox')
if not found_virt:
virtual_facts['virtualization_type'] = 'virtualbox'
virtual_facts['virtualization_role'] = 'host'
found_virt = True
if 'virtio' in modules:
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# In older Linux Kernel versions, /sys filesystem is not available
# dmidecode is the safest option to parse virtualization related values
dmi_bin = self.module.get_bin_path('dmidecode')
# We still want to continue even if dmidecode is not available
if dmi_bin is not None:
(rc, out, err) = self.module.run_command('%s -s system-product-name' % dmi_bin)
if rc == 0:
# Strip out commented lines (specific dmidecode output)
vendor_name = ''.join([line.strip() for line in out.splitlines() if not line.startswith('#')])
if vendor_name.startswith('VMware'):
guest_tech.add('VMware')
if not found_virt:
virtual_facts['virtualization_type'] = 'VMware'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if 'BHYVE' in out:
guest_tech.add('bhyve')
if not found_virt:
virtual_facts['virtualization_type'] = 'bhyve'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/dev/kvm'):
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'host'
found_virt = True
# If none of the above matches, return 'NA' for virtualization_type
# and virtualization_role. This allows for proper grouping.
if not found_virt:
virtual_facts['virtualization_type'] = 'NA'
virtual_facts['virtualization_role'] = 'NA'
found_virt = True
virtual_facts['virtualization_tech_guest'] = guest_tech
virtual_facts['virtualization_tech_host'] = host_tech
return virtual_facts
class LinuxVirtualCollector(VirtualCollector):
_fact_class = LinuxVirtual
_platform = 'Linux'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,222 |
include in handler requires full path
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
To include tasks from handlers, I have to specify the full path instead of a path relative to the role's directory.
Similar to: https://github.com/ansible/ansible/issues/19609
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`lib/ansible/playbook/included_file.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /srv/data/ansible-dev/ansible.cfg
configured module search path = ['/home/spider/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /srv/app/prod/ansible/core/ansible-2.9/lib64/python3.6/site-packages/ansible
executable location = /srv/app/prod/ansible/core/ansible-2.9/bin/ansible
python version = 3.6.3 (default, Apr 10 2019, 14:37:36) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RedHat 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
In the role's handlers directory I have (cat main.yml):
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
The full path works ok:
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ role_path }}/handlers/{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tasks should be included from path relative to role
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The include fails, as Ansible tries to include handlers-services.yml directly from the playbook directory.
<!--- Paste verbatim command output between quotes -->
```paste below
RUNNING HANDLER [sshd : Include proper handlers] ***************************************************************************************************************************************
fatal: [tpinsas03]: FAILED! => {"reason": "Could not find or access '/srv/data/ansible-dev/playbooks/RedHat7/handlers-services.yml' on the Ansible Controller."}
to retry, use: --limit @/srv/data/ansible-dev/retry/cm.retry
```
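A minimal sketch of the role-relative search that would resolve this (helper name and search order are assumptions; the real logic lives in `included_file.py`):
```python
import os

def handler_include_candidates(role_path, include_file):
    """Yield candidate locations for a handler's include_tasks target."""
    if role_path:
        for subdir in ('handlers', 'tasks'):
            yield os.path.join(role_path, subdir, include_file)
    yield include_file  # fall back to the playbook-relative path
```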
|
https://github.com/ansible/ansible/issues/71222
|
https://github.com/ansible/ansible/pull/73809
|
309802214616b3981a4d2a6b3bdcf4077bd0e062
|
1e5ccb326fa49046da7a06e42734836b99e2d5cc
| 2020-08-12T08:41:52Z |
python
| 2021-03-15T19:39:59Z |
changelogs/fragments/73809-search-handler-subdir.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,222 |
include in handler requires full path
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
To include tasks from handlers, I have to specify the full path instead of a path relative to the role's directory.
Similar to: https://github.com/ansible/ansible/issues/19609
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`lib/ansible/playbook/included_file.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /srv/data/ansible-dev/ansible.cfg
configured module search path = ['/home/spider/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /srv/app/prod/ansible/core/ansible-2.9/lib64/python3.6/site-packages/ansible
executable location = /srv/app/prod/ansible/core/ansible-2.9/bin/ansible
python version = 3.6.3 (default, Apr 10 2019, 14:37:36) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RedHat 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
In the role's handlers directory I have this main.yml:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
The full path works ok:
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ role_path }}/handlers/{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tasks should be included from a path relative to the role's directory.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The include fails because Ansible tries to load handlers-services.yml directly from the playbook directory.
<!--- Paste verbatim command output between quotes -->
```paste below
RUNNING HANDLER [sshd : Include proper handlers] ***************************************************************************************************************************************
fatal: [tpinsas03]: FAILED! => {"reason": "Could not find or access '/srv/data/ansible-dev/playbooks/RedHat7/handlers-services.yml' on the Ansible Controller."}
to retry, use: --limit @/srv/data/ansible-dev/retry/cm.retry
```
| https://github.com/ansible/ansible/issues/71222 | https://github.com/ansible/ansible/pull/73809 | 309802214616b3981a4d2a6b3bdcf4077bd0e062 | 1e5ccb326fa49046da7a06e42734836b99e2d5cc | 2020-08-12T08:41:52Z | python | 2021-03-15T19:39:59Z | lib/ansible/playbook/included_file.py |
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_text
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role_include import IncludeRole
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class IncludedFile:
def __init__(self, filename, args, vars, task, is_role=False):
self._filename = filename
self._args = args
self._vars = vars
self._task = task
self._hosts = []
self._is_role = is_role
def add_host(self, host):
if host not in self._hosts:
self._hosts.append(host)
return
raise ValueError()
def __eq__(self, other):
return (other._filename == self._filename and
other._args == self._args and
other._vars == self._vars and
other._task._uuid == self._task._uuid and
other._task._parent._uuid == self._task._parent._uuid)
def __repr__(self):
return "%s (args=%s vars=%s): %s" % (self._filename, self._args, self._vars, self._hosts)
@staticmethod
def process_include_results(results, iterator, loader, variable_manager):
included_files = []
task_vars_cache = {}
for res in results:
original_host = res._host
original_task = res._task
if original_task.action in C._ACTION_ALL_INCLUDES:
if original_task.loop:
if 'results' not in res._result:
continue
include_results = res._result['results']
else:
include_results = [res._result]
for include_result in include_results:
# if the task result was skipped or failed, continue
                    if ('skipped' in include_result and include_result['skipped']) or ('failed' in include_result and include_result['failed']):
continue
cache_key = (iterator._play, original_host, original_task)
try:
task_vars = task_vars_cache[cache_key]
except KeyError:
task_vars = task_vars_cache[cache_key] = variable_manager.get_vars(play=iterator._play, host=original_host, task=original_task)
include_args = include_result.get('include_args', dict())
special_vars = {}
loop_var = include_result.get('ansible_loop_var', 'item')
index_var = include_result.get('ansible_index_var')
if loop_var in include_result:
task_vars[loop_var] = special_vars[loop_var] = include_result[loop_var]
if index_var and index_var in include_result:
task_vars[index_var] = special_vars[index_var] = include_result[index_var]
if '_ansible_item_label' in include_result:
task_vars['_ansible_item_label'] = special_vars['_ansible_item_label'] = include_result['_ansible_item_label']
if 'ansible_loop' in include_result:
task_vars['ansible_loop'] = special_vars['ansible_loop'] = include_result['ansible_loop']
if original_task.no_log and '_ansible_no_log' not in include_args:
task_vars['_ansible_no_log'] = special_vars['_ansible_no_log'] = original_task.no_log
# get search path for this task to pass to lookup plugins that may be used in pathing to
# the included file
task_vars['ansible_search_path'] = original_task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if loader.get_basedir() not in task_vars['ansible_search_path']:
task_vars['ansible_search_path'].append(loader.get_basedir())
templar = Templar(loader=loader, variables=task_vars)
if original_task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_file = None
if original_task:
if original_task.static:
continue
if original_task._parent:
# handle relative includes by walking up the list of parent include
# tasks and checking the relative result to see if it exists
parent_include = original_task._parent
cumulative_path = None
while parent_include is not None:
if not isinstance(parent_include, TaskInclude):
parent_include = parent_include._parent
continue
if isinstance(parent_include, IncludeRole):
parent_include_dir = parent_include._role_path
else:
try:
parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
except AnsibleError as e:
parent_include_dir = ''
display.warning(
'Templating the path of the parent %s failed. The path to the '
'included file may not be found. '
'The error was: %s.' % (original_task.action, to_text(e))
)
if cumulative_path is not None and not os.path.isabs(cumulative_path):
cumulative_path = os.path.join(parent_include_dir, cumulative_path)
else:
cumulative_path = parent_include_dir
include_target = templar.template(include_result['include'])
if original_task._role:
new_basedir = os.path.join(original_task._role._role_path, 'tasks', cumulative_path)
candidates = [loader.path_dwim_relative(original_task._role._role_path, 'tasks', include_target),
loader.path_dwim_relative(new_basedir, 'tasks', include_target)]
for include_file in candidates:
try:
# may throw OSError
os.stat(include_file)
# or select the task file if it exists
break
except OSError:
pass
else:
include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)
if os.path.exists(include_file):
break
else:
parent_include = parent_include._parent
if include_file is None:
if original_task._role:
include_target = templar.template(include_result['include'])
include_file = loader.path_dwim_relative(original_task._role._role_path, 'tasks', include_target)
else:
include_file = loader.path_dwim(include_result['include'])
include_file = templar.template(include_file)
inc_file = IncludedFile(include_file, include_args, special_vars, original_task)
else:
# template the included role's name here
role_name = include_args.pop('name', include_args.pop('role', None))
if role_name is not None:
role_name = templar.template(role_name)
new_task = original_task.copy()
new_task._role_name = role_name
for from_arg in new_task.FROM_ARGS:
if from_arg in include_args:
from_key = from_arg.replace('_from', '')
new_task._from_files[from_key] = templar.template(include_args.pop(from_arg))
inc_file = IncludedFile(role_name, include_args, special_vars, new_task, is_role=True)
idx = 0
orig_inc_file = inc_file
                    while True:
try:
pos = included_files[idx:].index(orig_inc_file)
# pos is relative to idx since we are slicing
# use idx + pos due to relative indexing
inc_file = included_files[idx + pos]
except ValueError:
included_files.append(orig_inc_file)
inc_file = orig_inc_file
try:
inc_file.add_host(original_host)
except ValueError:
# The host already exists for this include, advance forward, this is a new include
idx += pos + 1
else:
break
return included_files
|
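The candidate search above only ever looks under the role's `tasks/` directory, which is why an include issued from a handler misses files that live under `handlers/`. A minimal sketch of the idea behind the fix, assuming the role-relative search is simply widened to both subdirectories (this may differ from the exact patch in the linked pull request):
```python
# Sketch (assumption): consider both tasks/ and handlers/ when resolving a
# relative include inside a role, instead of hard-coding 'tasks'.
candidates = []
for subdir in ('tasks', 'handlers'):
    candidates.append(loader.path_dwim_relative(original_task._role._role_path, subdir, include_target))
    candidates.append(loader.path_dwim_relative(new_basedir, subdir, include_target))
for include_file in candidates:
    try:
        os.stat(include_file)  # keep the first candidate that exists
        break
    except OSError:
        pass
```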
closed | ansible/ansible | https://github.com/ansible/ansible | 71,222 | include in handler requires full path |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
To include tasks from a handler I have to specify the full path instead of a path relative to the role's directory.
Similar to: https://github.com/ansible/ansible/issues/19609
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`lib/ansible/playbook/included_file.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /srv/data/ansible-dev/ansible.cfg
configured module search path = ['/home/spider/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /srv/app/prod/ansible/core/ansible-2.9/lib64/python3.6/site-packages/ansible
executable location = /srv/app/prod/ansible/core/ansible-2.9/bin/ansible
python version = 3.6.3 (default, Apr 10 2019, 14:37:36) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RedHat 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
In the role's handlers directory I have this main.yml:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
The full path works ok:
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ role_path }}/handlers/{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tasks should be included from a path relative to the role's directory.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The include fails because Ansible tries to load handlers-services.yml directly from the playbook directory.
<!--- Paste verbatim command output between quotes -->
```paste below
RUNNING HANDLER [sshd : Include proper handlers] ***************************************************************************************************************************************
fatal: [tpinsas03]: FAILED! => {"reason": "Could not find or access '/srv/data/ansible-dev/playbooks/RedHat7/handlers-services.yml' on the Ansible Controller."}
to retry, use: --limit @/srv/data/ansible-dev/retry/cm.retry
```
| https://github.com/ansible/ansible/issues/71222 | https://github.com/ansible/ansible/pull/73809 | 309802214616b3981a4d2a6b3bdcf4077bd0e062 | 1e5ccb326fa49046da7a06e42734836b99e2d5cc | 2020-08-12T08:41:52Z | python | 2021-03-15T19:39:59Z | test/integration/targets/handlers/roles/test_role_handlers_include_tasks/handlers/A.yml | |
closed | ansible/ansible | https://github.com/ansible/ansible | 71,222 | include in handler requires full path |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
To include tasks from a handler I have to specify the full path instead of a path relative to the role's directory.
Similar to: https://github.com/ansible/ansible/issues/19609
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`lib/ansible/playbook/included_file.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /srv/data/ansible-dev/ansible.cfg
configured module search path = ['/home/spider/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /srv/app/prod/ansible/core/ansible-2.9/lib64/python3.6/site-packages/ansible
executable location = /srv/app/prod/ansible/core/ansible-2.9/bin/ansible
python version = 3.6.3 (default, Apr 10 2019, 14:37:36) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RedHat 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
In the role's handlers directory I have this main.yml:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
The full path works ok:
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ role_path }}/handlers/{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tasks should be included from a path relative to the role's directory.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The include fails because Ansible tries to load handlers-services.yml directly from the playbook directory.
<!--- Paste verbatim command output between quotes -->
```paste below
RUNNING HANDLER [sshd : Include proper handlers] ***************************************************************************************************************************************
fatal: [tpinsas03]: FAILED! => {"reason": "Could not find or access '/srv/data/ansible-dev/playbooks/RedHat7/handlers-services.yml' on the Ansible Controller."}
to retry, use: --limit @/srv/data/ansible-dev/retry/cm.retry
```
| https://github.com/ansible/ansible/issues/71222 | https://github.com/ansible/ansible/pull/73809 | 309802214616b3981a4d2a6b3bdcf4077bd0e062 | 1e5ccb326fa49046da7a06e42734836b99e2d5cc | 2020-08-12T08:41:52Z | python | 2021-03-15T19:39:59Z | test/integration/targets/handlers/roles/test_role_handlers_include_tasks/handlers/main.yml | |
closed | ansible/ansible | https://github.com/ansible/ansible | 71,222 | include in handler requires full path |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
To include tasks from a handler I have to specify the full path instead of a path relative to the role's directory.
Similar to: https://github.com/ansible/ansible/issues/19609
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`lib/ansible/playbook/included_file.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /srv/data/ansible-dev/ansible.cfg
configured module search path = ['/home/spider/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /srv/app/prod/ansible/core/ansible-2.9/lib64/python3.6/site-packages/ansible
executable location = /srv/app/prod/ansible/core/ansible-2.9/bin/ansible
python version = 3.6.3 (default, Apr 10 2019, 14:37:36) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RedHat 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
In the role's handlers directory I have this main.yml:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
The full path works ok:
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ role_path }}/handlers/{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tasks should be included from a path relative to the role's directory.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The include fails because Ansible tries to load handlers-services.yml directly from the playbook directory.
<!--- Paste verbatim command output between quotes -->
```paste below
RUNNING HANDLER [sshd : Include proper handlers] ***************************************************************************************************************************************
fatal: [tpinsas03]: FAILED! => {"reason": "Could not find or access '/srv/data/ansible-dev/playbooks/RedHat7/handlers-services.yml' on the Ansible Controller."}
to retry, use: --limit @/srv/data/ansible-dev/retry/cm.retry
```
| https://github.com/ansible/ansible/issues/71222 | https://github.com/ansible/ansible/pull/73809 | 309802214616b3981a4d2a6b3bdcf4077bd0e062 | 1e5ccb326fa49046da7a06e42734836b99e2d5cc | 2020-08-12T08:41:52Z | python | 2021-03-15T19:39:59Z | test/integration/targets/handlers/roles/test_role_handlers_include_tasks/tasks/B.yml | |
closed | ansible/ansible | https://github.com/ansible/ansible | 71,222 | include in handler requires full path |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
To include tasks from a handler I have to specify the full path instead of a path relative to the role's directory.
Similar to: https://github.com/ansible/ansible/issues/19609
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`lib/ansible/playbook/included_file.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /srv/data/ansible-dev/ansible.cfg
configured module search path = ['/home/spider/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /srv/app/prod/ansible/core/ansible-2.9/lib64/python3.6/site-packages/ansible
executable location = /srv/app/prod/ansible/core/ansible-2.9/bin/ansible
python version = 3.6.3 (default, Apr 10 2019, 14:37:36) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RedHat 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
In the role's handlers directory I have this main.yml:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
The full path works ok:
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ role_path }}/handlers/{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tasks should be included from a path relative to the role's directory.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The include fails because Ansible tries to load handlers-services.yml directly from the playbook directory.
<!--- Paste verbatim command output between quotes -->
```paste below
RUNNING HANDLER [sshd : Include proper handlers] ***************************************************************************************************************************************
fatal: [tpinsas03]: FAILED! => {"reason": "Could not find or access '/srv/data/ansible-dev/playbooks/RedHat7/handlers-services.yml' on the Ansible Controller."}
to retry, use: --limit @/srv/data/ansible-dev/retry/cm.retry
```
| https://github.com/ansible/ansible/issues/71222 | https://github.com/ansible/ansible/pull/73809 | 309802214616b3981a4d2a6b3bdcf4077bd0e062 | 1e5ccb326fa49046da7a06e42734836b99e2d5cc | 2020-08-12T08:41:52Z | python | 2021-03-15T19:39:59Z | test/integration/targets/handlers/runme.sh |
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*?]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*?]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*?]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*?]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying nonexistent handlers results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying nonexistent handlers produces no error when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
|
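Another record for the same pull request adds test/integration/targets/handlers/test_role_handlers_including_tasks.yml, so a plausible way to wire it into the script above, mirroring the existing handler-include invocations, would be (sketch, not the verbatim change):
```bash
# Sketch (assumption): exercise includes issued from role handlers
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
```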
closed | ansible/ansible | https://github.com/ansible/ansible | 71,222 | include in handler requires full path |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
To include tasks from a handler I have to specify the full path instead of a path relative to the role's directory.
Similar to: https://github.com/ansible/ansible/issues/19609
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`lib/ansible/playbook/included_file.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /srv/data/ansible-dev/ansible.cfg
configured module search path = ['/home/spider/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /srv/app/prod/ansible/core/ansible-2.9/lib64/python3.6/site-packages/ansible
executable location = /srv/app/prod/ansible/core/ansible-2.9/bin/ansible
python version = 3.6.3 (default, Apr 10 2019, 14:37:36) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RedHat 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
In the role's handlers directory I have this main.yml:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
The full path works ok:
```yaml
---
# handlers file for sshd
- name: Include proper handlers for services
include_tasks: "{{ role_path }}/handlers/{{ ansible_distribution }}{{ ansible_distribution_major_version }}/handlers-services.yml"
listen: 'Handle sshd services'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tasks should be included from a path relative to the role's directory.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The include fails because Ansible tries to load handlers-services.yml directly from the playbook directory.
<!--- Paste verbatim command output between quotes -->
```paste below
RUNNING HANDLER [sshd : Include proper handlers] ***************************************************************************************************************************************
fatal: [tpinsas03]: FAILED! => {"reason": "Could not find or access '/srv/data/ansible-dev/playbooks/RedHat7/handlers-services.yml' on the Ansible Controller."}
to retry, use: --limit @/srv/data/ansible-dev/retry/cm.retry
```
| https://github.com/ansible/ansible/issues/71222 | https://github.com/ansible/ansible/pull/73809 | 309802214616b3981a4d2a6b3bdcf4077bd0e062 | 1e5ccb326fa49046da7a06e42734836b99e2d5cc | 2020-08-12T08:41:52Z | python | 2021-03-15T19:39:59Z | test/integration/targets/handlers/test_role_handlers_including_tasks.yml | |
closed | ansible/ansible | https://github.com/ansible/ansible | 72,708 | ansible-pull cannot run multiple playbooks |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When providing `ansible-pull` with multiple playbooks (e.g. `ansible-pull [...] playbook1.yml playbook2.yml`), only the first playbook is run.
This is contrary to what the documentation states is possible:
* https://docs.ansible.com/ansible/2.10/cli/ansible-pull.html - the usage block in the synopsis section shows multiple playbooks (`[playbook.yml [playbook.yml ...]]`), just like `ansible-playbook`.
The `argparse` options indicate support for multiple playbooks, and call the arguments `Playbook(s)` - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L86
However, the code only ever seems to handle a single playbook - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L314-L320
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-pull
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.13
config file = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.18 (default, Aug 27 2020, 21:23:25) [GCC 7.3.1 20180712 (Red Hat 7.3.1-9)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_FORCE_HANDLERS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = True
DEFAULT_FORKS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 25
DEFAULT_HOST_LIST(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = [u'/root/.ansible/pull/ip-x-x-x-x.ec2.internal/hosts.ini']
DEFAULT_LOG_PATH(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.log
DEFAULT_POLL_INTERVAL(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 5
DEFAULT_STRATEGY(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = linear
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Amazon Linux 2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `ansible-pull` with multiple playbooks, for example:
```bash
ansible-pull --url [email protected]:NFarrington/demo-ansible-pull-multiple-playbooks.git --inventory 127.0.0.1, playbook1.yml playbook2.yml
```
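For comparison, the two playbooks do run back to back when passed to `ansible-playbook` directly against the same checkout; the inventory and connection flags below mirror what `ansible-pull` would use and are illustrative:
```bash
ansible-playbook -i 127.0.0.1, -c local playbook1.yml playbook2.yml
```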
<!--- Paste example playbooks or commands between quotes below -->
Playbook 1:
```yaml
---
- name: Playbook1
hosts: all
tasks:
- name: playbook1 debug message
debug:
msg: Hello from playbook1
```
Playbook 2:
```yaml
---
- name: Playbook2
hosts: all
tasks:
- name: playbook2 debug message
debug:
msg: Hello from playbook2
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The same behaviour as `ansible-playbook` is expected:
```
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
PLAY [Playbook2] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook2 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook2"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Only the first playbook is executed:
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
| https://github.com/ansible/ansible/issues/72708 | https://github.com/ansible/ansible/pull/73172 | 0279d0298062956a220c524f51e1bc0b2db7feb1 | 4add72310764d1f64a6a60eef89c72736f1528c5 | 2020-11-23T00:43:59Z | python | 2021-03-17T17:52:51Z | changelogs/fragments/72708_ansible_pull_multiple_playbooks.yml | |
closed | ansible/ansible | https://github.com/ansible/ansible | 72,708 | ansible-pull cannot run multiple playbooks |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When providing `ansible-pull` with multiple playbooks (e.g. `ansible-pull [...] playbook1.yml playbook2.yml`), only the first playbook is run.
This is contrary to what the documentation states is possible:
* https://docs.ansible.com/ansible/2.10/cli/ansible-pull.html - the usage block in the synopsis section shows multiple playbooks (`[playbook.yml [playbook.yml ...]]`), just like `ansible-playbook`.
The `argparse` options indicate support for multiple playbooks, and call the arguments `Playbook(s)` - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L86
However, the code only ever seems to handle a single playbook - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L314-L320
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-pull
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.13
config file = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.18 (default, Aug 27 2020, 21:23:25) [GCC 7.3.1 20180712 (Red Hat 7.3.1-9)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_FORCE_HANDLERS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = True
DEFAULT_FORKS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 25
DEFAULT_HOST_LIST(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = [u'/root/.ansible/pull/ip-x-x-x-x.ec2.internal/hosts.ini']
DEFAULT_LOG_PATH(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.log
DEFAULT_POLL_INTERVAL(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 5
DEFAULT_STRATEGY(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = linear
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Amazon Linux 2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `ansible-pull` with multiple playbooks, for example:
```bash
ansible-pull --url [email protected]:NFarrington/demo-ansible-pull-multiple-playbooks.git --inventory 127.0.0.1, playbook1.yml playbook2.yml
```
<!--- Paste example playbooks or commands between quotes below -->
Playbook 1:
```yaml
---
- name: Playbook1
hosts: all
tasks:
- name: playbook1 debug message
debug:
msg: Hello from playbook1
```
Playbook 2:
```yaml
---
- name: Playbook2
hosts: all
tasks:
- name: playbook2 debug message
debug:
msg: Hello from playbook2
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The same behaviour as `ansible-playbook` is expected:
```
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
PLAY [Playbook2] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook2 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook2"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Only the first playbook is executed:
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
| https://github.com/ansible/ansible/issues/72708 | https://github.com/ansible/ansible/pull/73172 | 0279d0298062956a220c524f51e1bc0b2db7feb1 | 4add72310764d1f64a6a60eef89c72736f1528c5 | 2020-11-23T00:43:59Z | python | 2021-03-17T17:52:51Z | lib/ansible/cli/pull.py |
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import datetime
import os
import platform
import random
import shutil
import socket
import sys
import time
from ansible import constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleOptionsError
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six.moves import shlex_quote
from ansible.plugins.loader import module_loader
from ansible.utils.cmd_functions import run_cmd
from ansible.utils.display import Display
display = Display()
class PullCLI(CLI):
''' Used to pull a remote copy of ansible on each managed node,
each set to run via cron and update playbook source via a source repository.
This inverts the default *push* architecture of ansible into a *pull* architecture,
which has near-limitless scaling potential.
The setup playbook can be tuned to change the cron frequency, logging locations, and parameters to ansible-pull.
This is useful both for extreme scale-out as well as periodic remediation.
Usage of the 'fetch' module to retrieve logs from ansible-pull runs would be an
excellent way to gather and analyze remote logs from ansible-pull.
'''
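    # Illustrative example (assumption, not taken from this issue): a typical
    # cron entry that keeps a node converged with ansible-pull; the schedule,
    # checkout directory and repository URL below are placeholders.
    #   */30 * * * * root ansible-pull -U https://example.com/playbooks.git -d /var/lib/ansible-pull local.yml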
DEFAULT_REPO_TYPE = 'git'
DEFAULT_PLAYBOOK = 'local.yml'
REPO_CHOICES = ('git', 'subversion', 'hg', 'bzr')
PLAYBOOK_ERRORS = {
1: 'File does not exist',
2: 'File is not readable',
}
SUPPORTED_REPO_MODULES = ['git']
    ARGUMENTS = {'playbook.yml': 'The name of one of the YAML format files to run as an Ansible playbook. '
                                 'This can be a relative path within the checkout. By default, Ansible will '
                                 "look for a playbook based on the host's fully-qualified domain name, "
                                 'then on the short hostname, and finally a playbook named *local.yml*.', }
SKIP_INVENTORY_DEFAULTS = True
@staticmethod
def _get_inv_cli():
inv_opts = ''
if context.CLIARGS.get('inventory', False):
for inv in context.CLIARGS['inventory']:
if isinstance(inv, list):
inv_opts += " -i '%s' " % ','.join(inv)
elif ',' in inv or os.path.exists(inv):
inv_opts += ' -i %s ' % inv
return inv_opts
def init_parser(self):
''' create an options parser for bin/ansible '''
super(PullCLI, self).init_parser(
usage='%prog -U <repository> [options] [<playbook.yml>]',
desc="pulls playbooks from a VCS repo and executes them for the local host")
# Do not add check_options as there's a conflict with --checkout/-C
opt_help.add_connect_options(self.parser)
opt_help.add_vault_options(self.parser)
opt_help.add_runtask_options(self.parser)
opt_help.add_subset_options(self.parser)
opt_help.add_inventory_options(self.parser)
opt_help.add_module_options(self.parser)
opt_help.add_runas_prompt_options(self.parser)
self.parser.add_argument('args', help='Playbook(s)', metavar='playbook.yml', nargs='*')
# options unique to pull
self.parser.add_argument('--purge', default=False, action='store_true', help='purge checkout after playbook run')
self.parser.add_argument('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true',
help='only run the playbook if the repository has been updated')
self.parser.add_argument('-s', '--sleep', dest='sleep', default=None,
help='sleep for random interval (between 0 and n number of seconds) before starting. '
'This is a useful way to disperse git requests')
self.parser.add_argument('-f', '--force', dest='force', default=False, action='store_true',
help='run the playbook even if the repository could not be updated')
self.parser.add_argument('-d', '--directory', dest='dest', default=None, help='directory to checkout repository to')
self.parser.add_argument('-U', '--url', dest='url', default=None, help='URL of the playbook repository')
self.parser.add_argument('--full', dest='fullclone', action='store_true', help='Do a full clone, instead of a shallow one.')
self.parser.add_argument('-C', '--checkout', dest='checkout',
help='branch/tag/commit to checkout. Defaults to behavior of repository module.')
self.parser.add_argument('--accept-host-key', default=False, dest='accept_host_key', action='store_true',
help='adds the hostkey for the repo url if not already added')
self.parser.add_argument('-m', '--module-name', dest='module_name', default=self.DEFAULT_REPO_TYPE,
help='Repository module name, which ansible will use to check out the repo. Choices are %s. Default is %s.'
% (self.REPO_CHOICES, self.DEFAULT_REPO_TYPE))
self.parser.add_argument('--verify-commit', dest='verify', default=False, action='store_true',
help='verify GPG signature of checked out commit, if it fails abort running the playbook. '
'This needs the corresponding VCS module to support such an operation')
self.parser.add_argument('--clean', dest='clean', default=False, action='store_true',
help='modified files in the working repository will be discarded')
self.parser.add_argument('--track-subs', dest='tracksubs', default=False, action='store_true',
help='submodules will track the latest changes. This is equivalent to specifying the --remote flag to git submodule update')
# add a subset of the check_opts flag group manually, as the full set's
# shortcodes conflict with above --checkout/-C
self.parser.add_argument("--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
self.parser.add_argument("--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those files; works great with --check")
def post_process_args(self, options):
options = super(PullCLI, self).post_process_args(options)
if not options.dest:
hostname = socket.getfqdn()
# use a hostname dependent directory, in case of $HOME on nfs
options.dest = os.path.join('~/.ansible/pull', hostname)
options.dest = os.path.expandvars(os.path.expanduser(options.dest))
if os.path.exists(options.dest) and not os.path.isdir(options.dest):
raise AnsibleOptionsError("%s is not a valid or accessible directory." % options.dest)
if options.sleep:
try:
secs = random.randint(0, int(options.sleep))
options.sleep = secs
except ValueError:
raise AnsibleOptionsError("%s is not a number." % options.sleep)
if not options.url:
raise AnsibleOptionsError("URL for repository not specified, use -h for help")
if options.module_name not in self.SUPPORTED_REPO_MODULES:
raise AnsibleOptionsError("Unsupported repo module %s, choices are %s" % (options.module_name, ','.join(self.SUPPORTED_REPO_MODULES)))
display.verbosity = options.verbosity
self.validate_conflicts(options)
return options
def run(self):
''' use Runner lib to do SSH things '''
super(PullCLI, self).run()
# log command line
now = datetime.datetime.now()
display.display(now.strftime("Starting Ansible Pull at %F %T"))
display.display(' '.join(sys.argv))
# Build Checkout command
# Now construct the ansible command
node = platform.node()
host = socket.getfqdn()
limit_opts = 'localhost,%s,127.0.0.1' % ','.join(set([host, node, host.split('.')[0], node.split('.')[0]]))
base_opts = '-c local '
if context.CLIARGS['verbosity'] > 0:
base_opts += ' -%s' % ''.join(["v" for x in range(0, context.CLIARGS['verbosity'])])
# Attempt to use the inventory passed in as an argument
# It might not yet have been downloaded so use localhost as default
inv_opts = self._get_inv_cli()
if not inv_opts:
inv_opts = " -i localhost, "
# avoid interpreter discovery since we already know which interpreter to use on localhost
inv_opts += '-e %s ' % shlex_quote('ansible_python_interpreter=%s' % sys.executable)
# SCM specific options
if context.CLIARGS['module_name'] == 'git':
repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest'])
if context.CLIARGS['checkout']:
repo_opts += ' version=%s' % context.CLIARGS['checkout']
if context.CLIARGS['accept_host_key']:
repo_opts += ' accept_hostkey=yes'
if context.CLIARGS['private_key_file']:
repo_opts += ' key_file=%s' % context.CLIARGS['private_key_file']
if context.CLIARGS['verify']:
repo_opts += ' verify_commit=yes'
if context.CLIARGS['tracksubs']:
repo_opts += ' track_submodules=yes'
if not context.CLIARGS['fullclone']:
repo_opts += ' depth=1'
elif context.CLIARGS['module_name'] == 'subversion':
repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest'])
if context.CLIARGS['checkout']:
repo_opts += ' revision=%s' % context.CLIARGS['checkout']
if not context.CLIARGS['fullclone']:
repo_opts += ' export=yes'
elif context.CLIARGS['module_name'] == 'hg':
repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest'])
if context.CLIARGS['checkout']:
repo_opts += ' revision=%s' % context.CLIARGS['checkout']
elif context.CLIARGS['module_name'] == 'bzr':
repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest'])
if context.CLIARGS['checkout']:
repo_opts += ' version=%s' % context.CLIARGS['checkout']
else:
raise AnsibleOptionsError('Unsupported (%s) SCM module for pull, choices are: %s'
% (context.CLIARGS['module_name'],
','.join(self.REPO_CHOICES)))
# options common to all supported SCMS
if context.CLIARGS['clean']:
repo_opts += ' force=yes'
path = module_loader.find_plugin(context.CLIARGS['module_name'])
if path is None:
raise AnsibleOptionsError(("module '%s' not found.\n" % context.CLIARGS['module_name']))
bin_path = os.path.dirname(os.path.abspath(sys.argv[0]))
# hardcode local and inventory/host as this is just meant to fetch the repo
cmd = '%s/ansible %s %s -m %s -a "%s" all -l "%s"' % (bin_path, inv_opts, base_opts,
context.CLIARGS['module_name'],
repo_opts, limit_opts)
for ev in context.CLIARGS['extra_vars']:
cmd += ' -e %s' % shlex_quote(ev)
# Nap?
if context.CLIARGS['sleep']:
display.display("Sleeping for %d seconds..." % context.CLIARGS['sleep'])
time.sleep(context.CLIARGS['sleep'])
# RUN the Checkout command
display.debug("running ansible with VCS module to checkout repo")
display.vvvv('EXEC: %s' % cmd)
rc, b_out, b_err = run_cmd(cmd, live=True)
if rc != 0:
if context.CLIARGS['force']:
display.warning("Unable to update repository. Continuing with (forced) run of playbook.")
else:
return rc
elif context.CLIARGS['ifchanged'] and b'"changed": true' not in b_out:
display.display("Repository has not changed, quitting.")
return 0
playbook = self.select_playbook(context.CLIARGS['dest'])
if playbook is None:
raise AnsibleOptionsError("Could not find a playbook to run.")
# Build playbook command
cmd = '%s/ansible-playbook %s %s' % (bin_path, base_opts, playbook)
if context.CLIARGS['vault_password_files']:
for vault_password_file in context.CLIARGS['vault_password_files']:
cmd += " --vault-password-file=%s" % vault_password_file
if context.CLIARGS['vault_ids']:
for vault_id in context.CLIARGS['vault_ids']:
cmd += " --vault-id=%s" % vault_id
for ev in context.CLIARGS['extra_vars']:
cmd += ' -e %s' % shlex_quote(ev)
if context.CLIARGS['become_ask_pass']:
cmd += ' --ask-become-pass'
if context.CLIARGS['skip_tags']:
cmd += ' --skip-tags "%s"' % to_native(u','.join(context.CLIARGS['skip_tags']))
if context.CLIARGS['tags']:
cmd += ' -t "%s"' % to_native(u','.join(context.CLIARGS['tags']))
if context.CLIARGS['subset']:
cmd += ' -l "%s"' % context.CLIARGS['subset']
else:
cmd += ' -l "%s"' % limit_opts
if context.CLIARGS['check']:
cmd += ' -C'
if context.CLIARGS['diff']:
cmd += ' -D'
os.chdir(context.CLIARGS['dest'])
# redo inventory options as new files might exist now
inv_opts = self._get_inv_cli()
if inv_opts:
cmd += inv_opts
# RUN THE PLAYBOOK COMMAND
display.debug("running ansible-playbook to do actual work")
display.debug('EXEC: %s' % cmd)
rc, b_out, b_err = run_cmd(cmd, live=True)
if context.CLIARGS['purge']:
os.chdir('/')
try:
shutil.rmtree(context.CLIARGS['dest'])
except Exception as e:
display.error(u"Failed to remove %s: %s" % (context.CLIARGS['dest'], to_text(e)))
return rc
@staticmethod
def try_playbook(path):
if not os.path.exists(path):
return 1
if not os.access(path, os.R_OK):
return 2
return 0
@staticmethod
def select_playbook(path):
playbook = None
if context.CLIARGS['args'] and context.CLIARGS['args'][0] is not None:
playbook = os.path.join(path, context.CLIARGS['args'][0])
rc = PullCLI.try_playbook(playbook)
if rc != 0:
display.warning("%s: %s" % (playbook, PullCLI.PLAYBOOK_ERRORS[rc]))
return None
return playbook
else:
fqdn = socket.getfqdn()
hostpb = os.path.join(path, fqdn + '.yml')
shorthostpb = os.path.join(path, fqdn.split('.')[0] + '.yml')
localpb = os.path.join(path, PullCLI.DEFAULT_PLAYBOOK)
errors = []
for pb in [hostpb, shorthostpb, localpb]:
rc = PullCLI.try_playbook(pb)
if rc == 0:
playbook = pb
break
else:
errors.append("%s: %s" % (pb, PullCLI.PLAYBOOK_ERRORS[rc]))
if playbook is None:
display.warning("\n".join(errors))
return playbook
|
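The root cause is visible in `select_playbook` above: only `context.CLIARGS['args'][0]` is ever consulted, so every additional playbook argument is silently dropped. A minimal sketch of multi-playbook support, assuming the fix is simply to validate every argument and hand the joined list to `ansible-playbook` (the linked pull request may differ in detail):
```python
# Sketch (assumption): resolve and validate every playbook argument,
# then return them space-joined for the ansible-playbook command line.
if context.CLIARGS['args'] and context.CLIARGS['args'][0] is not None:
    playbooks = []
    for book in context.CLIARGS['args']:
        book_path = os.path.join(path, book)
        rc = PullCLI.try_playbook(book_path)
        if rc != 0:
            display.warning("%s: %s" % (book_path, PullCLI.PLAYBOOK_ERRORS[rc]))
            return None
        playbooks.append(book_path)
    return " ".join(playbooks)
```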
closed | ansible/ansible | https://github.com/ansible/ansible | 72,708 | ansible-pull cannot run multiple playbooks |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When providing `ansible-pull` with multiple playbooks (e.g. `ansible-pull [...] playbook1.yml playbook2.yml`), only the first playbook is run.
This is contrary to what the documentation states is possible:
* https://docs.ansible.com/ansible/2.10/cli/ansible-pull.html - the usage block in the synopsis section shows multiple playbooks (`[playbook.yml [playbook.yml ...]]`), just like `ansible-playbook`.
The `argparse` options indicate support for multiple playbooks, and call the arguments `Playbook(s)` - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L86
However, the code only ever seems to handle a single playbook - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L314-L320
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-pull
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.13
config file = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.18 (default, Aug 27 2020, 21:23:25) [GCC 7.3.1 20180712 (Red Hat 7.3.1-9)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_FORCE_HANDLERS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = True
DEFAULT_FORKS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 25
DEFAULT_HOST_LIST(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = [u'/root/.ansible/pull/ip-x-x-x-x.ec2.internal/hosts.ini']
DEFAULT_LOG_PATH(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.log
DEFAULT_POLL_INTERVAL(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 5
DEFAULT_STRATEGY(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = linear
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Amazon Linux 2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `ansible-pull` with multiple playbooks, for example:
```bash
ansible-pull --url [email protected]:NFarrington/demo-ansible-pull-multiple-playbooks.git --inventory 127.0.0.1, playbook1.yml playbook2.yml
```
<!--- Paste example playbooks or commands between quotes below -->
Playbook 1:
```yaml
---
- name: Playbook1
  hosts: all
  tasks:
    - name: playbook1 debug message
      debug:
        msg: Hello from playbook1
```
Playbook 2:
```yaml
---
- name: Playbook2
  hosts: all
  tasks:
    - name: playbook2 debug message
      debug:
        msg: Hello from playbook2
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The same behaviour as `ansible-playbook` is expected:
```
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
PLAY [Playbook2] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook2 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook2"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Only the first playbook is executed:
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72708
|
https://github.com/ansible/ansible/pull/73172
|
0279d0298062956a220c524f51e1bc0b2db7feb1
|
4add72310764d1f64a6a60eef89c72736f1528c5
| 2020-11-23T00:43:59Z |
python
| 2021-03-17T17:52:51Z |
test/integration/targets/pull/pull-integration-test/multi_play_1.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,708 |
ansible-pull cannot run multiple playbooks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When providing `ansible-pull` with multiple playbooks (e.g. `ansible-pull [...] playbook1.yml playbook2.yml`), only the first playbook is run.
This is contrary to what the documentation states is possible:
* https://docs.ansible.com/ansible/2.10/cli/ansible-pull.html - the usage block in the synopsis section shows multiple playbooks (`[playbook.yml [playbook.yml ...]]`), just like `ansible-playbook`.
The `argparse` options indicate support for multiple playbooks, and call the arguments `Playbook(s)` - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L86
However, the code only ever seems to handle a single playbook - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L314-L320
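To make the mismatch concrete, here is a small stand-in for the parser (illustrative, not ansible's actual option setup): `argparse` happily collects every playbook name, while indexing `args[0]` silently drops the rest:
```python
import argparse

parser = argparse.ArgumentParser(prog='ansible-pull')
parser.add_argument('args', nargs='*', metavar='playbook.yml', help='Playbook(s)')
opts = parser.parse_args(['playbook1.yml', 'playbook2.yml'])

print(opts.args)     # ['playbook1.yml', 'playbook2.yml'] -- both are collected
print(opts.args[0])  # 'playbook1.yml' -- but only this one is ever run
```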
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-pull
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.13
config file = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.18 (default, Aug 27 2020, 21:23:25) [GCC 7.3.1 20180712 (Red Hat 7.3.1-9)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_FORCE_HANDLERS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = True
DEFAULT_FORKS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 25
DEFAULT_HOST_LIST(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = [u'/root/.ansible/pull/ip-x-x-x-x.ec2.internal/hosts.ini']
DEFAULT_LOG_PATH(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.log
DEFAULT_POLL_INTERVAL(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 5
DEFAULT_STRATEGY(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = linear
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Amazon Linux 2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `ansible-pull` with multiple playbooks, for example:
```bash
ansible-pull --url [email protected]:NFarrington/demo-ansible-pull-multiple-playbooks.git --inventory 127.0.0.1, playbook1.yml playbook2.yml
```
<!--- Paste example playbooks or commands between quotes below -->
Playbook 1:
```yaml
---
- name: Playbook1
  hosts: all
  tasks:
    - name: playbook1 debug message
      debug:
        msg: Hello from playbook1
```
Playbook 2:
```yaml
---
- name: Playbook2
  hosts: all
  tasks:
    - name: playbook2 debug message
      debug:
        msg: Hello from playbook2
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The same behaviour as `ansible-playbook` is expected:
```
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
PLAY [Playbook2] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook2 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook2"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Only the first playbook is executed:
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72708
|
https://github.com/ansible/ansible/pull/73172
|
0279d0298062956a220c524f51e1bc0b2db7feb1
|
4add72310764d1f64a6a60eef89c72736f1528c5
| 2020-11-23T00:43:59Z |
python
| 2021-03-17T17:52:51Z |
test/integration/targets/pull/pull-integration-test/multi_play_2.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,708 |
ansible-pull cannot run multiple playbooks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When providing `ansible-pull` with multiple playbooks (e.g. `ansible-pull [...] playbook1.yml playbook2.yml`), only the first playbook is run.
This is contrary to what the documentation states is possible:
* https://docs.ansible.com/ansible/2.10/cli/ansible-pull.html - the usage block in the synopsis section shows multiple playbooks (`[playbook.yml [playbook.yml ...]]`), just like `ansible-playbook`.
The `argparse` options indicate support for multiple playbooks, and call the arguments `Playbook(s)` - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L86
However, the code only ever seems to handle a single playbook - https://github.com/ansible/ansible/blob/v2.9.13/lib/ansible/cli/pull.py#L314-L320
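Until that is addressed, one interim workaround (an assumption on my part, not taken from the issue) is a small wrapper that runs ansible-pull once per playbook:
```python
import subprocess

URL = '[email protected]:NFarrington/demo-ansible-pull-multiple-playbooks.git'
for pb in ('playbook1.yml', 'playbook2.yml'):
    # One ansible-pull invocation per playbook; check=True stops on failure.
    subprocess.run(
        ['ansible-pull', '--url', URL, '--inventory', '127.0.0.1,', pb],
        check=True,
    )
```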
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-pull
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.13
config file = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.18 (default, Aug 27 2020, 21:23:25) [GCC 7.3.1 20180712 (Red Hat 7.3.1-9)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_FORCE_HANDLERS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = True
DEFAULT_FORKS(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 25
DEFAULT_HOST_LIST(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = [u'/root/.ansible/pull/ip-x-x-x-x.ec2.internal/hosts.ini']
DEFAULT_LOG_PATH(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = /root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.log
DEFAULT_POLL_INTERVAL(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = 5
DEFAULT_STRATEGY(/root/.ansible/pull/ip-x-x-x-x.ec2.internal/ansible.cfg) = linear
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Amazon Linux 2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `ansible-pull` with multiple playbooks, for example:
```bash
ansible-pull --url [email protected]:NFarrington/demo-ansible-pull-multiple-playbooks.git --inventory 127.0.0.1, playbook1.yml playbook2.yml
```
<!--- Paste example playbooks or commands between quotes below -->
Playbook 1:
```yaml
---
- name: Playbook1
  hosts: all
  tasks:
    - name: playbook1 debug message
      debug:
        msg: Hello from playbook1
```
Playbook 2:
```yaml
---
- name: Playbook2
  hosts: all
  tasks:
    - name: playbook2 debug message
      debug:
        msg: Hello from playbook2
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The same behaviour as `ansible-playbook` is expected:
```
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
PLAY [Playbook2] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook2 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook2"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Only the first playbook is executed:
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Playbook1] *************************************************************************************
TASK [Gathering Facts] *******************************************************************************
ok: [127.0.0.1]
TASK [playbook1 debug message] ***********************************************************************
ok: [127.0.0.1] => {
"msg": "Hello from playbook1"
}
PLAY RECAP *******************************************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72708
|
https://github.com/ansible/ansible/pull/73172
|
0279d0298062956a220c524f51e1bc0b2db7feb1
|
4add72310764d1f64a6a60eef89c72736f1528c5
| 2020-11-23T00:43:59Z |
python
| 2021-03-17T17:52:51Z |
test/integration/targets/pull/runme.sh
|
#!/usr/bin/env bash
set -eux
set -o pipefail
# http://unix.stackexchange.com/questions/30091/fix-or-alternative-for-mktemp-in-os-x
temp_dir=$(mktemp -d 2>/dev/null || mktemp -d -t 'ansible-testing-XXXXXXXXXX')
trap 'rm -rf "${temp_dir}"' EXIT
repo_dir="${temp_dir}/repo"
pull_dir="${temp_dir}/pull"
temp_log="${temp_dir}/pull.log"
ansible-playbook setup.yml -i ../../inventory
cleanup="$(pwd)/cleanup.yml"
trap 'ansible-playbook "${cleanup}" -i ../../inventory' EXIT
cp -av "pull-integration-test" "${repo_dir}"
cd "${repo_dir}"
(
    git init
    git config user.email "[email protected]"
    git config user.name "Ansible Test Runner"
    git add .
    git commit -m "Initial commit."
)
function pass_tests {
    # test for https://github.com/ansible/ansible/issues/13688
    if ! grep MAGICKEYWORD "${temp_log}"; then
        cat "${temp_log}"
        echo "Missing MAGICKEYWORD in output."
        exit 1
    fi

    # test for https://github.com/ansible/ansible/issues/13681
    if grep -E '127\.0\.0\.1.*ok' "${temp_log}"; then
        cat "${temp_log}"
        echo "Found host 127.0.0.1 in output. Only localhost should be present."
        exit 1
    fi

    # make sure one host was run
    if ! grep -E 'localhost.*ok' "${temp_log}"; then
        cat "${temp_log}"
        echo "Did not find host localhost in output."
        exit 1
    fi
}
export ANSIBLE_INVENTORY
export ANSIBLE_HOST_PATTERN_MISMATCH
unset ANSIBLE_INVENTORY
unset ANSIBLE_HOST_PATTERN_MISMATCH
ANSIBLE_CONFIG='' ansible-pull -d "${pull_dir}" -U "${repo_dir}" "$@" | tee "${temp_log}"
pass_tests
# ensure complex extra vars work
PASSWORD='test'
USER=${USER:-'broken_docker'}
JSON_EXTRA_ARGS='{"docker_registries_login": [{ "docker_password": "'"${PASSWORD}"'", "docker_username": "'"${USER}"'", "docker_registry_url":"repository-manager.company.com:5001"}], "docker_registries_logout": [{ "docker_password": "'"${PASSWORD}"'", "docker_username": "'"${USER}"'", "docker_registry_url":"repository-manager.company.com:5001"}] }'
ANSIBLE_CONFIG='' ansible-pull -d "${pull_dir}" -U "${repo_dir}" -e "${JSON_EXTRA_ARGS}" "$@" --tags untagged,test_ev | tee "${temp_log}"
pass_tests
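
One note on the JSON_EXTRA_ARGS construction above: it splices shell variables straight into a JSON document, which breaks if the password ever contains a double quote. A safer way to build (or sanity-check) the same payload — a sketch, not part of the test — is to serialize it:

```python
import json

password = 'test'
user = 'broken_docker'
registry = {
    'docker_password': password,
    'docker_username': user,
    'docker_registry_url': 'repository-manager.company.com:5001',
}
extra_args = json.dumps({
    'docker_registries_login': [registry],
    'docker_registries_logout': [registry],
})
print(extra_args)  # pass to ansible-pull via: -e "$extra_args"
```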
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,899 |
v2_runner_retry callbacks do not fire until next batch of hosts is started
|
### Summary
Executing a play with N hosts, when N > batch size, causes v2_runner_retry callbacks to be delayed until the following batch begins. The callbacks all fire at the same time, not as individual retry results are delivered by TaskExecutor(). I've observed this on Ansible 2.8, 2.9, 2.10, 3.0 and current devel, across CPython 2.7, 3.6, 3.8, 3.9 and 3.10 alpha.
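A toy model of the suspected mechanism (my assumption, not ansible's actual strategy code): worker threads queue per-retry display events, but the parent only drains that queue at certain points in its scheduling loop, so everything from a batch appears at once:
```python
import queue
import threading
import time

events = queue.Queue()

def worker(host):
    for attempt in (1, 2):
        time.sleep(2)                      # the task's retry delay
        events.put('FAILED - RETRYING: %s (attempt %d)' % (host, attempt))

threads = [threading.Thread(target=worker, args=('local%d' % i,)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                               # parent busy until the batch finishes...

while not events.empty():                  # ...and only then drains the events
    print(events.get())
```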
### Issue Type
Bug Report
### Component Name
ansible.plugins.strategy.linear
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.11.0b1.post0] (devel 9ec4e08534) last updated 2021/03/14 21:31:05 (GMT +100)
config file = /home/alex/.ansible.cfg
configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alex/src/ansible/lib/ansible
ansible collection location = /home/alex/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alex/src/ansible/bin/ansible
python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed | cat
```
### OS / Environment
Ubuntu 20.04 x86_64, Python 3.8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: locals
  gather_facts: false
  connection: local
  vars:
    ansible_python_interpreter: python3
  tasks:
    - name: Command
      command: "true"
      retries: 3
      delay: 2
      register: result
      until: result.attempts == 3
      changed_when: false
```
```yaml
locals:
  hosts:
    local[1:8]:
  vars:
    connection: local
```
### Expected Results
First v2_runner_retry arrives after 2 seconds (configured `delay`), next 2 seconds after that ...
### Actual Results
All v2_runner_retry events from the first batch of hosts arrive at t=5s, the same moment that v2_runner_on_ok arrives.
```console
± ansible-playbook -i inventory.yml playbook.yml | ts -s
00:00:00
00:00:00 PLAY [locals] ******************************************************************
00:00:00
00:00:00 TASK [Command] *****************************************************************
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 ok: [local1]
00:00:05 ok: [local4]
00:00:05 ok: [local3]
00:00:05 ok: [local2]
00:00:05 ok: [local5]
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:10 ok: [local6]
00:00:10 ok: [local7]
00:00:10 ok: [local8]
00:00:10
00:00:10 PLAY RECAP *********************************************************************
00:00:10 local1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local6 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local8 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10
```
<details>
<summary>Full -vvvv output (click to expand)</summary>
```console
± ansible-playbook -i inventory.yml playbook.yml -vvvv | ts -s
00:00:00 ansible-playbook [core 2.11.0b1.post0] (devel 9ec4e08534) last updated 2021/03/14 21:31:05 (GMT +100)
00:00:00 config file = /home/alex/.ansible.cfg
00:00:00 configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
00:00:00 ansible python module location = /home/alex/src/ansible/lib/ansible
00:00:00 ansible collection location = /home/alex/.ansible/collections:/usr/share/ansible/collections
00:00:00 executable location = /home/alex/src/ansible/bin/ansible-playbook
00:00:00 python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
00:00:00 jinja version = 2.11.3
00:00:00 libyaml = True
00:00:00 Using /home/alex/.ansible.cfg as config file
00:00:00 setting up inventory plugins
00:00:00 host_list declined parsing /home/alex/src/ansible/inventory.yml as it did not pass its verify_file() method
00:00:00 script declined parsing /home/alex/src/ansible/inventory.yml as it did not pass its verify_file() method
00:00:00 Parsed /home/alex/src/ansible/inventory.yml inventory source with yaml plugin
00:00:00 Loading callback plugin default of type stdout, v2.0 from /home/alex/src/ansible/lib/ansible/plugins/callback/default.py
00:00:00 Skipping callback 'default', as we already have a stdout callback.
00:00:00 Skipping callback 'minimal', as we already have a stdout callback.
00:00:00 Skipping callback 'oneline', as we already have a stdout callback.
00:00:00
00:00:00 PLAYBOOK: playbook.yml *********************************************************
00:00:00 Positional arguments: playbook.yml
00:00:00 verbosity: 4
00:00:00 connection: smart
00:00:00 timeout: 10
00:00:00 become_method: sudo
00:00:00 tags: ('all',)
00:00:00 inventory: ('/home/alex/src/ansible/inventory.yml',)
00:00:00 forks: 5
00:00:00 1 plays in playbook.yml
00:00:00
00:00:00 PLAY [locals] ******************************************************************
00:00:00 META: ran handlers
00:00:00
00:00:00 TASK [Command] *****************************************************************
00:00:00 task path: /home/alex/src/ansible/playbook.yml:8
00:00:00 <local1> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822 `" && echo ansible-tmp-1615758721.8470132-623938-280690236739822="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822 `" ) && sleep 0'
00:00:00 <local2> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local3> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342 `" && echo ansible-tmp-1615758721.8523452-623939-208256383170342="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342 `" ) && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611 `" && echo ansible-tmp-1615758721.8563304-623943-272057805295611="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611 `" ) && sleep 0'
00:00:00 <local4> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local5> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080 `" && echo ansible-tmp-1615758721.8669798-623952-36636581744080="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080 `" ) && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692 `" && echo ansible-tmp-1615758721.8724117-623970-133458959317692="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692 `" ) && sleep 0'
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpx3b9gtjo TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py && sleep 0'
00:00:00 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpng_lhkvs TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py
00:00:00 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1jhr8m0b TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py
00:00:00 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpq0kp8apk TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py
00:00:00 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpui9244cx TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py
00:00:00 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py && sleep 0'
00:00:00 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py && sleep 0'
00:00:00 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py && sleep 0'
00:00:01 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013 `" && echo ansible-tmp-1615758724.133097-623943-134225245942013="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013 `" ) && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513 `" && echo ansible-tmp-1615758724.1334202-623939-32486568572513="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513 `" ) && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494 `" && echo ansible-tmp-1615758724.1342025-623952-225237159473494="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494 `" ) && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpxczs__uw TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py
00:00:03 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmprld83jd8 TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py
00:00:03 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpllh03pi7 TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py
00:00:03 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411 `" && echo ansible-tmp-1615758724.182092-623938-153700129389411="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411 `" ) && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178 `" && echo ansible-tmp-1615758724.1917582-623970-217763781705178="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178 `" ) && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpwrq_y32t TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py
00:00:03 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpd9rqeqzs TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py
00:00:03 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/ > /dev/null 2>&1 && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447 `" && echo ansible-tmp-1615758726.323465-623939-61008003621447="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmputs2x9ix TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py
00:00:05 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144 `" && echo ansible-tmp-1615758726.3784077-623970-207559000874144="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144 `" ) && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065 `" && echo ansible-tmp-1615758726.37912-623943-197158363936065="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065 `" ) && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564 `" && echo ansible-tmp-1615758726.3797214-623938-273756783496564="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564 `" ) && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717 `" && echo ansible-tmp-1615758726.3799593-623952-179250791919717="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpz3zy9vq5 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py
00:00:05 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpznwnj_r2 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py
00:00:05 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp6kxublz5 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py
00:00:05 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1ck6sguq TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py
00:00:05 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002349",
00:00:05 "end": "2021-03-14 21:52:02.101688",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.099339",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002415",
00:00:05 "end": "2021-03-14 21:52:02.101400",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.098985",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002612",
00:00:05 "end": "2021-03-14 21:52:02.100329",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.097717",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002716",
00:00:05 "end": "2021-03-14 21:52:02.151676",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.148960",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002320",
00:00:05 "end": "2021-03-14 21:52:02.156925",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.154605",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002502",
00:00:05 "end": "2021-03-14 21:52:04.291273",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.288771",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002413",
00:00:05 "end": "2021-03-14 21:52:04.345135",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.342722",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003319",
00:00:05 "end": "2021-03-14 21:52:04.346062",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.342743",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002964",
00:00:05 "end": "2021-03-14 21:52:04.344021",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.341057",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003045",
00:00:05 "end": "2021-03-14 21:52:04.344102",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.341057",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 ok: [local2] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002413",
00:00:05 "end": "2021-03-14 21:52:06.491124",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.488711",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545 `" && echo ansible-tmp-1615758726.52976-624299-143253575445545="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmprsq3rk5b TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py
00:00:05 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py && sleep 0'
00:00:05 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/ > /dev/null 2>&1 && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/ > /dev/null 2>&1 && sleep 0'
00:00:05 ok: [local4] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002178",
00:00:05 "end": "2021-03-14 21:52:06.546979",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.544801",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 ok: [local5] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003272",
00:00:05 "end": "2021-03-14 21:52:06.559002",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.555730",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848 `" && echo ansible-tmp-1615758726.5958605-624328-160875916588848="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848 `" ) && sleep 0'
00:00:05 <local8> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332 `" && echo ansible-tmp-1615758726.6032434-624333-29578010259332="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332 `" ) && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/ > /dev/null 2>&1 && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpcktq_nij TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py
00:00:05 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/ > /dev/null 2>&1 && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py && sleep 0'
00:00:05 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1gdxvsr9 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py
00:00:05 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py && sleep 0'
00:00:05 ok: [local3] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002869",
00:00:05 "end": "2021-03-14 21:52:06.593703",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.590834",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py && sleep 0'
00:00:05 ok: [local1] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003459",
00:00:05 "end": "2021-03-14 21:52:06.599444",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.595985",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002021",
00:00:05 "end": "2021-03-14 21:52:06.717127",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.715106",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002013",
00:00:05 "end": "2021-03-14 21:52:06.744519",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.742506",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.001962",
00:00:05 "end": "2021-03-14 21:52:06.780870",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.778908",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:07 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530 `" && echo ansible-tmp-1615758728.7470558-624299-88066478691530="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530 `" ) && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmptry4pngp TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py
00:00:07 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160 `" && echo ansible-tmp-1615758728.7725587-624328-734150765160="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160 `" ) && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpvmbdiifq TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py
00:00:07 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478 `" && echo ansible-tmp-1615758728.8074439-624333-208836372401478="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478 `" ) && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp38xldgse TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py
00:00:07 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/ > /dev/null 2>&1 && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/ > /dev/null 2>&1 && sleep 0'
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002026",
00:00:07 "end": "2021-03-14 21:52:08.912963",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.910937",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002287",
00:00:07 "end": "2021-03-14 21:52:08.917332",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.915045",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:07 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/ > /dev/null 2>&1 && sleep 0'
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002045",
00:00:07 "end": "2021-03-14 21:52:08.958693",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.956648",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:09 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300 `" && echo ansible-tmp-1615758730.9422395-624299-194978499025300="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300 `" ) && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810 `" && echo ansible-tmp-1615758730.945759-624328-198677189881810="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810 `" ) && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpsppl_8rz TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py
00:00:09 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpo3yr1mxu TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py
00:00:09 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py && sleep 0'
00:00:09 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812 `" && echo ansible-tmp-1615758730.9828703-624333-60079952062812="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812 `" ) && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpsql_f_ro TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py
00:00:09 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py && sleep 0'
00:00:10 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local6] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.002250",
00:00:10 "end": "2021-03-14 21:52:11.096291",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.094041",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local7] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.001954",
00:00:10 "end": "2021-03-14 21:52:11.117494",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.115540",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local8] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.002129",
00:00:10 "end": "2021-03-14 21:52:11.146187",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.144058",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 META: ran handlers
00:00:10 META: ran handlers
00:00:10
00:00:10 PLAY RECAP *********************************************************************
00:00:10 local1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local6 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local8 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10
```
|
https://github.com/ansible/ansible/issues/73899
|
https://github.com/ansible/ansible/pull/73927
|
561cdf3ace593a0d0285cc3e36baaf807238c023
|
78f34786dd468c42d7a222468685590207e74679
| 2021-03-14T21:56:53Z |
python
| 2021-03-18T19:12:29Z |
changelogs/fragments/73899-more-te-callbacks.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,899 |
v2_runner_retry callbacks do not fire until next batch of hosts is started
|
### Summary
Executing a play with N hosts, where N exceeds the batch (fork) size, causes v2_runner_retry callbacks to be delayed until the following batch of hosts begins. The callbacks then all fire at the same time, rather than as individual retry results are delivered by TaskExecutor(). I've observed this on Ansible 2.8, 2.9, 2.10, 3.0 and current devel, across CPython 2.7, 3.6, 3.8, 3.9 and 3.10 alpha.
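To make the symptom concrete, here is a minimal, self-contained model using plain Python threads and a queue. It is only an illustration of the bunching effect, not Ansible's actual strategy code; the name `final_q` merely echoes Ansible's internal results queue, and `worker` stands in for TaskExecutor. If the coordinator only drains the shared queue after the whole batch has joined, every retry notice from that batch is displayed in one burst:
```python
import queue
import threading
import time

def worker(final_q, host, retries=3, delay=2):
    # Stand-in for TaskExecutor: one retry notice per failed attempt,
    # followed by the final ok result.
    for attempt in range(1, retries):
        time.sleep(delay)
        final_q.put(("v2_runner_retry", host, attempt))
    final_q.put(("v2_runner_on_ok", host))

def run(hosts, forks=5):
    final_q = queue.Queue()
    start = time.monotonic()
    for i in range(0, len(hosts), forks):
        batch = [threading.Thread(target=worker, args=(final_q, h))
                 for h in hosts[i:i + forks]]
        for t in batch:
            t.start()
        for t in batch:
            t.join()  # coordinator blocks until the whole batch finishes
        # Only now is the queue drained and "displayed", so every retry
        # notice from this batch appears at the same moment as the oks.
        while not final_q.empty():
            print("t=%4.1fs" % (time.monotonic() - start), *final_q.get())

run(["local%d" % n for n in range(1, 9)])
```
Dispatching each notice at the moment it is produced, rather than when the batch boundary is reached, would restore the expected per-event timing.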
### Issue Type
Bug Report
### Component Name
ansible.plugins.strategy.linear
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.0b1.post0] (devel 9ec4e08534) last updated 2021/03/14 21:31:05 (GMT +100)
config file = /home/alex/.ansible.cfg
configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alex/src/ansible/lib/ansible
ansible collection location = /home/alex/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alex/src/ansible/bin/ansible
python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed | cat
```
### OS / Environment
Ubuntu 20.04 x86_64, Python 3.8
### Steps to Reproduce
```yaml
- hosts: locals
gather_facts: false
connection: local
vars:
ansible_python_interpreter: python3
tasks:
- name: Command
command: "true"
retries: 3
delay: 2
register: result
until: result.attempts == 3
changed_when: false
```
```yaml
locals:
hosts:
local[1:8]:
vars:
connection: local
```
### Expected Results
The first v2_runner_retry arrives after 2 seconds (the configured `delay`), the next 2 seconds after that, and so on.
### Actual Results
All v2_runner_retry events from the first batch of hosts arrive at t=5s, the same moment that v2_runner_on_ok arrives.
```console
± ansible-playbook -i inventory.yml playbook.yml | ts -s
00:00:00
00:00:00 PLAY [locals] ******************************************************************
00:00:00
00:00:00 TASK [Command] *****************************************************************
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 ok: [local1]
00:00:05 ok: [local4]
00:00:05 ok: [local3]
00:00:05 ok: [local2]
00:00:05 ok: [local5]
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:10 ok: [local6]
00:00:10 ok: [local7]
00:00:10 ok: [local8]
00:00:10
00:00:10 PLAY RECAP *********************************************************************
00:00:10 local1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local6 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local8 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10
```
<details>
<summary>Full -vvvv output (click to expand)</summary>
```console
± ansible-playbook -i inventory.yml playbook.yml -vvvv | ts -s
00:00:00 ansible-playbook [core 2.11.0b1.post0] (devel 9ec4e08534) last updated 2021/03/14 21:31:05 (GMT +100)
00:00:00 config file = /home/alex/.ansible.cfg
00:00:00 configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
00:00:00 ansible python module location = /home/alex/src/ansible/lib/ansible
00:00:00 ansible collection location = /home/alex/.ansible/collections:/usr/share/ansible/collections
00:00:00 executable location = /home/alex/src/ansible/bin/ansible-playbook
00:00:00 python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
00:00:00 jinja version = 2.11.3
00:00:00 libyaml = True
00:00:00 Using /home/alex/.ansible.cfg as config file
00:00:00 setting up inventory plugins
00:00:00 host_list declined parsing /home/alex/src/ansible/inventory.yml as it did not pass its verify_file() method
00:00:00 script declined parsing /home/alex/src/ansible/inventory.yml as it did not pass its verify_file() method
00:00:00 Parsed /home/alex/src/ansible/inventory.yml inventory source with yaml plugin
00:00:00 Loading callback plugin default of type stdout, v2.0 from /home/alex/src/ansible/lib/ansible/plugins/callback/default.py
00:00:00 Skipping callback 'default', as we already have a stdout callback.
00:00:00 Skipping callback 'minimal', as we already have a stdout callback.
00:00:00 Skipping callback 'oneline', as we already have a stdout callback.
00:00:00
00:00:00 PLAYBOOK: playbook.yml *********************************************************
00:00:00 Positional arguments: playbook.yml
00:00:00 verbosity: 4
00:00:00 connection: smart
00:00:00 timeout: 10
00:00:00 become_method: sudo
00:00:00 tags: ('all',)
00:00:00 inventory: ('/home/alex/src/ansible/inventory.yml',)
00:00:00 forks: 5
00:00:00 1 plays in playbook.yml
00:00:00
00:00:00 PLAY [locals] ******************************************************************
00:00:00 META: ran handlers
00:00:00
00:00:00 TASK [Command] *****************************************************************
00:00:00 task path: /home/alex/src/ansible/playbook.yml:8
00:00:00 <local1> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822 `" && echo ansible-tmp-1615758721.8470132-623938-280690236739822="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822 `" ) && sleep 0'
00:00:00 <local2> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local3> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342 `" && echo ansible-tmp-1615758721.8523452-623939-208256383170342="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342 `" ) && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611 `" && echo ansible-tmp-1615758721.8563304-623943-272057805295611="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611 `" ) && sleep 0'
00:00:00 <local4> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local5> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080 `" && echo ansible-tmp-1615758721.8669798-623952-36636581744080="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080 `" ) && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692 `" && echo ansible-tmp-1615758721.8724117-623970-133458959317692="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692 `" ) && sleep 0'
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpx3b9gtjo TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py && sleep 0'
00:00:00 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpng_lhkvs TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py
00:00:00 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1jhr8m0b TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py
00:00:00 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpq0kp8apk TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py
00:00:00 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpui9244cx TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py
00:00:00 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py && sleep 0'
00:00:00 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py && sleep 0'
00:00:00 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py && sleep 0'
00:00:01 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013 `" && echo ansible-tmp-1615758724.133097-623943-134225245942013="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013 `" ) && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513 `" && echo ansible-tmp-1615758724.1334202-623939-32486568572513="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513 `" ) && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494 `" && echo ansible-tmp-1615758724.1342025-623952-225237159473494="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494 `" ) && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpxczs__uw TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py
00:00:03 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmprld83jd8 TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py
00:00:03 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpllh03pi7 TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py
00:00:03 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411 `" && echo ansible-tmp-1615758724.182092-623938-153700129389411="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411 `" ) && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178 `" && echo ansible-tmp-1615758724.1917582-623970-217763781705178="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178 `" ) && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpwrq_y32t TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py
00:00:03 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpd9rqeqzs TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py
00:00:03 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/ > /dev/null 2>&1 && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447 `" && echo ansible-tmp-1615758726.323465-623939-61008003621447="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmputs2x9ix TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py
00:00:05 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144 `" && echo ansible-tmp-1615758726.3784077-623970-207559000874144="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144 `" ) && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065 `" && echo ansible-tmp-1615758726.37912-623943-197158363936065="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065 `" ) && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564 `" && echo ansible-tmp-1615758726.3797214-623938-273756783496564="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564 `" ) && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717 `" && echo ansible-tmp-1615758726.3799593-623952-179250791919717="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpz3zy9vq5 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py
00:00:05 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpznwnj_r2 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py
00:00:05 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp6kxublz5 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py
00:00:05 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1ck6sguq TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py
00:00:05 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002349",
00:00:05 "end": "2021-03-14 21:52:02.101688",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.099339",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002415",
00:00:05 "end": "2021-03-14 21:52:02.101400",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.098985",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002612",
00:00:05 "end": "2021-03-14 21:52:02.100329",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.097717",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002716",
00:00:05 "end": "2021-03-14 21:52:02.151676",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.148960",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002320",
00:00:05 "end": "2021-03-14 21:52:02.156925",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.154605",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002502",
00:00:05 "end": "2021-03-14 21:52:04.291273",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.288771",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002413",
00:00:05 "end": "2021-03-14 21:52:04.345135",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.342722",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003319",
00:00:05 "end": "2021-03-14 21:52:04.346062",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.342743",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002964",
00:00:05 "end": "2021-03-14 21:52:04.344021",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.341057",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003045",
00:00:05 "end": "2021-03-14 21:52:04.344102",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.341057",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 ok: [local2] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002413",
00:00:05 "end": "2021-03-14 21:52:06.491124",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.488711",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545 `" && echo ansible-tmp-1615758726.52976-624299-143253575445545="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmprsq3rk5b TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py
00:00:05 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py && sleep 0'
00:00:05 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/ > /dev/null 2>&1 && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/ > /dev/null 2>&1 && sleep 0'
00:00:05 ok: [local4] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002178",
00:00:05 "end": "2021-03-14 21:52:06.546979",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.544801",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 ok: [local5] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003272",
00:00:05 "end": "2021-03-14 21:52:06.559002",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.555730",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848 `" && echo ansible-tmp-1615758726.5958605-624328-160875916588848="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848 `" ) && sleep 0'
00:00:05 <local8> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332 `" && echo ansible-tmp-1615758726.6032434-624333-29578010259332="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332 `" ) && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/ > /dev/null 2>&1 && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpcktq_nij TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py
00:00:05 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/ > /dev/null 2>&1 && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py && sleep 0'
00:00:05 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1gdxvsr9 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py
00:00:05 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py && sleep 0'
00:00:05 ok: [local3] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002869",
00:00:05 "end": "2021-03-14 21:52:06.593703",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.590834",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py && sleep 0'
00:00:05 ok: [local1] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003459",
00:00:05 "end": "2021-03-14 21:52:06.599444",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.595985",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002021",
00:00:05 "end": "2021-03-14 21:52:06.717127",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.715106",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002013",
00:00:05 "end": "2021-03-14 21:52:06.744519",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.742506",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.001962",
00:00:05 "end": "2021-03-14 21:52:06.780870",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.778908",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:07 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530 `" && echo ansible-tmp-1615758728.7470558-624299-88066478691530="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530 `" ) && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmptry4pngp TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py
00:00:07 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160 `" && echo ansible-tmp-1615758728.7725587-624328-734150765160="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160 `" ) && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpvmbdiifq TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py
00:00:07 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478 `" && echo ansible-tmp-1615758728.8074439-624333-208836372401478="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478 `" ) && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp38xldgse TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py
00:00:07 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/ > /dev/null 2>&1 && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/ > /dev/null 2>&1 && sleep 0'
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002026",
00:00:07 "end": "2021-03-14 21:52:08.912963",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.910937",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002287",
00:00:07 "end": "2021-03-14 21:52:08.917332",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.915045",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:07 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/ > /dev/null 2>&1 && sleep 0'
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002045",
00:00:07 "end": "2021-03-14 21:52:08.958693",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.956648",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:09 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300 `" && echo ansible-tmp-1615758730.9422395-624299-194978499025300="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300 `" ) && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810 `" && echo ansible-tmp-1615758730.945759-624328-198677189881810="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810 `" ) && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpsppl_8rz TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py
00:00:09 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpo3yr1mxu TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py
00:00:09 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py && sleep 0'
00:00:09 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812 `" && echo ansible-tmp-1615758730.9828703-624333-60079952062812="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812 `" ) && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpsql_f_ro TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py
00:00:09 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py && sleep 0'
00:00:10 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local6] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.002250",
00:00:10 "end": "2021-03-14 21:52:11.096291",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.094041",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local7] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.001954",
00:00:10 "end": "2021-03-14 21:52:11.117494",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.115540",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local8] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.002129",
00:00:10 "end": "2021-03-14 21:52:11.146187",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.144058",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 META: ran handlers
00:00:10 META: ran handlers
00:00:10
00:00:10 PLAY RECAP *********************************************************************
00:00:10 local1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local6 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local8 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10
```
|
https://github.com/ansible/ansible/issues/73899
|
https://github.com/ansible/ansible/pull/73927
|
561cdf3ace593a0d0285cc3e36baaf807238c023
|
78f34786dd468c42d7a222468685590207e74679
| 2021-03-14T21:56:53Z |
python
| 2021-03-18T19:12:29Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]
__all__ = ['TaskExecutor']
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
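# Note: TaskTimeoutError derives from BaseException rather than Exception, so
# a broad `except Exception` inside the attempt loop cannot swallow the alarm;
# _execute() installs task_timeout as the SIGALRM handler around each handler run.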
def remove_omit(task_args, omit_token):
'''
Remove args with a value equal to the ``omit_token`` recursively
to align with now having suboptions in the argument_spec
'''
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in iteritems(task_args):
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
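# Illustrative sketch (hypothetical values, not part of the module): with
# omit_token = '__omit_place_holder__abc123', remove_omit() prunes matching
# values at any depth while leaving everything else untouched, e.g.
#   remove_omit({'path': '/tmp/x',
#                'mode': '__omit_place_holder__abc123',
#                'env': {'http_proxy': '__omit_place_holder__abc123'}},
#               '__omit_place_holder__abc123')
#   -> {'path': '/tmp/x', 'env': {}}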
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
res['skipped'] = True
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])):
res['skipped'] = False
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# make sure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
if res['skipped']:
res['msg'] = 'All items skipped'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
# This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % loop_var)
ran_once = False
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
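# For example (hypothetical 3-item loop), the second iteration would expose:
# index=2, index0=1, first=False, last=False, length=3, revindex=2, revindex0=1.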
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
self._final_q.send_task_result(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in iteritems(clear_plugins):
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, variables=variables)
context_validation_error = None
try:
# TODO: remove play_context as this does not take delegation into account, task itself should hold values
# for connection/shell/become/terminal plugin options to finalize.
# Kept for now for backwards compatibility and a few functions that are still exclusive to it.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError as e:
# loop error takes precedence
if self._loop_eval_error is not None:
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
display.v(to_text(e))
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now
if context_validation_error is not None:
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action in C._ACTION_INCLUDE_ROLE:
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
try:
self._task.post_validate(templar=templar)
except AnsibleError:
raise
except Exception:
return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc()))
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
orig_vars = templar.available_variables
else:
# just use normal host vars
cvars = orig_vars = variables
templar.available_variables = cvars
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(cvars, templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
plugin_vars = self._set_connection_options(cvars, templar)
templar.available_variables = orig_vars
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(
self._task.action, self._task.args, self._task.module_defaults, templar, self._task._ansible_internal_redirect_list
)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
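# Worked example of the mapping above: with `until` set, an unset retries
# runs up to 3 attempts, retries <= 0 runs 1, and an explicit retries=3
# becomes 4 iterations of the attempt loop (the initial try plus 3 retries).
# Without `until`, the loop below always runs exactly once.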
delay = self._task.delay
if delay < 0:
delay = 1
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in xrange(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=variables)
except AnsibleActionSkip as e:
return dict(skipped=True, msg=to_text(e))
except AnsibleActionFail as e:
return dict(failed=True, msg=to_text(e))
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result = wrap_var(result)
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
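# Example (hypothetical task): with `register: cmd` and
# `failed_when: "'FATAL' in cmd.stderr"`, the Conditional above is evaluated
# against vars_copy, which is refreshed with the registered result before
# these helpers are invoked further below.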
if 'ansible_facts' in result:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result = wrap_var(result)
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
_evaluate_changed_when_result(result)
_evaluate_failed_when_result(result)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.send_task_result(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result = wrap_var(result)
if 'ansible_facts' in result:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
# also now add connection vars results when delegating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = cvars.get(k)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
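# Worked example: a task with `async: 300` and `poll: 5` yields time_left=300
# and a sleep of 5 between checks, i.e. at most 60 status polls before the
# "did not complete within the requested time" failure below.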
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
self._final_q.send_callback(
'v2_runner_on_async_poll',
TaskResult(
self._host,
async_task,
async_result,
task_fields=self._task.dump_attrs(),
),
)
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
# If the async task finished, automatically cleanup the temporary
# status file left behind.
cleanup_task = Task().load(
{
'async_status': {
'jid': async_jid,
'mode': 'cleanup',
},
'environment': self._task.environment,
}
)
cleanup_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=cleanup_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
cleanup_handler.run(task_vars=task_vars)
cleanup_handler.cleanup(force=True)
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, cvars, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
# use the magic var if it exists; if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None:
self._play_context.connection = templar.template(cvars['ansible_connection'])
else:
self._play_context.connection = self._task.connection
# TODO: play context has logic to update the connection for 'smart'
# (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
connection_name = self._play_context.connection
# load connection
conn_type = connection_name
connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become') is not None:
become = boolean(templar.template(cvars['ansible_become']))
else:
become = self._task.become
if become:
if cvars.get('ansible_become_method'):
become_plugin = self._get_become(templar.template(cvars['ansible_become_method']))
else:
become_plugin = self._get_become(self._task.become_method)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, cvars, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, final_vars, templar):
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
task_keys['timeout'] = self._play_context.timeout
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
return varnames
def _get_action_handler(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins that
# the controller can see.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
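# tcgetattr() returns [iflag, oflag, cflag, lflag, ispeed, ospeed, cc];
# index 3 is the local-modes word, so clearing termios.ICANON there switches
# the pty out of line-buffered (canonical) input handling.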
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,899 |
v2_runner_retry callbacks do not fire until next batch of hosts is started
|
### Summary
Executing a play with N hosts, when N > batch size, causes v2_runner_retry callbacks to be delayed until the following batch begins. The callbacks all fire at the same time, not as individual retry results are delivered by TaskExecutor(). I've observed this on Ansible 2.8, 2.9, 2.10, 3.0 and current devel, across CPython 2.7, 3.6, 3.8, 3.9 and 3.10 alpha.
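The burst is also visible without `ts`: a minimal timestamping callback (a hypothetical `stamp` plugin sketched here for illustration; it assumes only the standard `CallbackBase` API) prints the elapsed time of each retry event. Dropped into a `callback_plugins/` directory next to the playbook, it is picked up automatically.

```python
# callback_plugins/stamp.py -- hypothetical plugin, sketched for illustration only
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import time

from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):

    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'stamp'

    def __init__(self):
        super(CallbackModule, self).__init__()
        self._t0 = time.time()

    def v2_runner_retry(self, result):
        # With delay: 2, consecutive retries for a host should print ~2s apart;
        # on affected versions they arrive in one burst as the next batch starts.
        self._display.display('retry %s t=%.1fs' % (result._host.get_name(),
                                                    time.time() - self._t0))
```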
### Issue Type
Bug Report
### Component Name
ansible.plugins.strategy.linear
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.0b1.post0] (devel 9ec4e08534) last updated 2021/03/14 21:31:05 (GMT +100)
config file = /home/alex/.ansible.cfg
configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alex/src/ansible/lib/ansible
ansible collection location = /home/alex/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alex/src/ansible/bin/ansible
python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed | cat
```
### OS / Environment
Ubuntu 20.04 x86_64, Python 3.8
### Steps to Reproduce
```yaml
- hosts: locals
gather_facts: false
connection: local
vars:
ansible_python_interpreter: python3
tasks:
- name: Command
command: "true"
retries: 3
delay: 2
register: result
until: result.attempts == 3
changed_when: false
```
```yaml
locals:
hosts:
local[1:8]:
vars:
connection: local
```
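With both files in place, piping the run through `ts -s` (from moreutils) prefixes each line with elapsed time; this is the invocation used for the transcripts below:

```console
ansible-playbook -i inventory.yml playbook.yml | ts -s
```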
### Expected Results
First v2_runner_retry arrives after 2 seconds (the configured `delay`), the next one 2 seconds after that, and so on. With the default `forks: 5`, local1-local5 run in the first batch and local6-local8 in the second, but batching should not delay when each retry callback fires.
### Actual Results
All v2_runner_retry events from the first batch of hosts arrive at t=5s, the same moment that v2_runner_on_ok arrives.
```console
± ansible-playbook -i inventory.yml playbook.yml | ts -s
00:00:00
00:00:00 PLAY [locals] ******************************************************************
00:00:00
00:00:00 TASK [Command] *****************************************************************
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 ok: [local1]
00:00:05 ok: [local4]
00:00:05 ok: [local3]
00:00:05 ok: [local2]
00:00:05 ok: [local5]
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:10 ok: [local6]
00:00:10 ok: [local7]
00:00:10 ok: [local8]
00:00:10
00:00:10 PLAY RECAP *********************************************************************
00:00:10 local1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local6 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local8 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10
```
<details>
<summary>Full -vvvv output (click to expand)</summary>
```console
± ansible-playbook -i inventory.yml playbook.yml -vvvv | ts -s
00:00:00 ansible-playbook [core 2.11.0b1.post0] (devel 9ec4e08534) last updated 2021/03/14 21:31:05 (GMT +100)
00:00:00 config file = /home/alex/.ansible.cfg
00:00:00 configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
00:00:00 ansible python module location = /home/alex/src/ansible/lib/ansible
00:00:00 ansible collection location = /home/alex/.ansible/collections:/usr/share/ansible/collections
00:00:00 executable location = /home/alex/src/ansible/bin/ansible-playbook
00:00:00 python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
00:00:00 jinja version = 2.11.3
00:00:00 libyaml = True
00:00:00 Using /home/alex/.ansible.cfg as config file
00:00:00 setting up inventory plugins
00:00:00 host_list declined parsing /home/alex/src/ansible/inventory.yml as it did not pass its verify_file() method
00:00:00 script declined parsing /home/alex/src/ansible/inventory.yml as it did not pass its verify_file() method
00:00:00 Parsed /home/alex/src/ansible/inventory.yml inventory source with yaml plugin
00:00:00 Loading callback plugin default of type stdout, v2.0 from /home/alex/src/ansible/lib/ansible/plugins/callback/default.py
00:00:00 Skipping callback 'default', as we already have a stdout callback.
00:00:00 Skipping callback 'minimal', as we already have a stdout callback.
00:00:00 Skipping callback 'oneline', as we already have a stdout callback.
00:00:00
00:00:00 PLAYBOOK: playbook.yml *********************************************************
00:00:00 Positional arguments: playbook.yml
00:00:00 verbosity: 4
00:00:00 connection: smart
00:00:00 timeout: 10
00:00:00 become_method: sudo
00:00:00 tags: ('all',)
00:00:00 inventory: ('/home/alex/src/ansible/inventory.yml',)
00:00:00 forks: 5
00:00:00 1 plays in playbook.yml
00:00:00
00:00:00 PLAY [locals] ******************************************************************
00:00:00 META: ran handlers
00:00:00
00:00:00 TASK [Command] *****************************************************************
00:00:00 task path: /home/alex/src/ansible/playbook.yml:8
00:00:00 <local1> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822 `" && echo ansible-tmp-1615758721.8470132-623938-280690236739822="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822 `" ) && sleep 0'
00:00:00 <local2> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local3> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342 `" && echo ansible-tmp-1615758721.8523452-623939-208256383170342="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342 `" ) && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611 `" && echo ansible-tmp-1615758721.8563304-623943-272057805295611="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611 `" ) && sleep 0'
00:00:00 <local4> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local5> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080 `" && echo ansible-tmp-1615758721.8669798-623952-36636581744080="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080 `" ) && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692 `" && echo ansible-tmp-1615758721.8724117-623970-133458959317692="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692 `" ) && sleep 0'
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpx3b9gtjo TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py && sleep 0'
00:00:00 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpng_lhkvs TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py
00:00:00 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1jhr8m0b TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py
00:00:00 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpq0kp8apk TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py
00:00:00 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpui9244cx TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py
00:00:00 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py && sleep 0'
00:00:00 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py && sleep 0'
00:00:00 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py && sleep 0'
00:00:01 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013 `" && echo ansible-tmp-1615758724.133097-623943-134225245942013="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013 `" ) && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513 `" && echo ansible-tmp-1615758724.1334202-623939-32486568572513="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513 `" ) && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494 `" && echo ansible-tmp-1615758724.1342025-623952-225237159473494="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494 `" ) && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpxczs__uw TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py
00:00:03 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmprld83jd8 TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py
00:00:03 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpllh03pi7 TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py
00:00:03 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411 `" && echo ansible-tmp-1615758724.182092-623938-153700129389411="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411 `" ) && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178 `" && echo ansible-tmp-1615758724.1917582-623970-217763781705178="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178 `" ) && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpwrq_y32t TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py
00:00:03 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpd9rqeqzs TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py
00:00:03 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/ > /dev/null 2>&1 && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447 `" && echo ansible-tmp-1615758726.323465-623939-61008003621447="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmputs2x9ix TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py
00:00:05 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144 `" && echo ansible-tmp-1615758726.3784077-623970-207559000874144="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144 `" ) && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065 `" && echo ansible-tmp-1615758726.37912-623943-197158363936065="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065 `" ) && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564 `" && echo ansible-tmp-1615758726.3797214-623938-273756783496564="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564 `" ) && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717 `" && echo ansible-tmp-1615758726.3799593-623952-179250791919717="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpz3zy9vq5 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py
00:00:05 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpznwnj_r2 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py
00:00:05 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp6kxublz5 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py
00:00:05 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1ck6sguq TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py
00:00:05 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002349",
00:00:05 "end": "2021-03-14 21:52:02.101688",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.099339",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002415",
00:00:05 "end": "2021-03-14 21:52:02.101400",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.098985",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002612",
00:00:05 "end": "2021-03-14 21:52:02.100329",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.097717",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002716",
00:00:05 "end": "2021-03-14 21:52:02.151676",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.148960",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002320",
00:00:05 "end": "2021-03-14 21:52:02.156925",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.154605",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002502",
00:00:05 "end": "2021-03-14 21:52:04.291273",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.288771",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002413",
00:00:05 "end": "2021-03-14 21:52:04.345135",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.342722",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003319",
00:00:05 "end": "2021-03-14 21:52:04.346062",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.342743",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002964",
00:00:05 "end": "2021-03-14 21:52:04.344021",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.341057",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003045",
00:00:05 "end": "2021-03-14 21:52:04.344102",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.341057",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 ok: [local2] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002413",
00:00:05 "end": "2021-03-14 21:52:06.491124",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.488711",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545 `" && echo ansible-tmp-1615758726.52976-624299-143253575445545="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmprsq3rk5b TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py
00:00:05 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py && sleep 0'
00:00:05 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/ > /dev/null 2>&1 && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/ > /dev/null 2>&1 && sleep 0'
00:00:05 ok: [local4] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002178",
00:00:05 "end": "2021-03-14 21:52:06.546979",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.544801",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 ok: [local5] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003272",
00:00:05 "end": "2021-03-14 21:52:06.559002",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.555730",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848 `" && echo ansible-tmp-1615758726.5958605-624328-160875916588848="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848 `" ) && sleep 0'
00:00:05 <local8> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332 `" && echo ansible-tmp-1615758726.6032434-624333-29578010259332="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332 `" ) && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/ > /dev/null 2>&1 && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpcktq_nij TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py
00:00:05 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/ > /dev/null 2>&1 && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py && sleep 0'
00:00:05 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1gdxvsr9 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py
00:00:05 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py && sleep 0'
00:00:05 ok: [local3] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002869",
00:00:05 "end": "2021-03-14 21:52:06.593703",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.590834",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py && sleep 0'
00:00:05 ok: [local1] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003459",
00:00:05 "end": "2021-03-14 21:52:06.599444",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.595985",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002021",
00:00:05 "end": "2021-03-14 21:52:06.717127",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.715106",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002013",
00:00:05 "end": "2021-03-14 21:52:06.744519",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.742506",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.001962",
00:00:05 "end": "2021-03-14 21:52:06.780870",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.778908",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:07 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530 `" && echo ansible-tmp-1615758728.7470558-624299-88066478691530="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530 `" ) && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmptry4pngp TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py
00:00:07 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160 `" && echo ansible-tmp-1615758728.7725587-624328-734150765160="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160 `" ) && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpvmbdiifq TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py
00:00:07 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478 `" && echo ansible-tmp-1615758728.8074439-624333-208836372401478="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478 `" ) && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp38xldgse TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py
00:00:07 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/ > /dev/null 2>&1 && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/ > /dev/null 2>&1 && sleep 0'
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002026",
00:00:07 "end": "2021-03-14 21:52:08.912963",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.910937",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002287",
00:00:07 "end": "2021-03-14 21:52:08.917332",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.915045",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:07 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/ > /dev/null 2>&1 && sleep 0'
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002045",
00:00:07 "end": "2021-03-14 21:52:08.958693",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.956648",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:09 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300 `" && echo ansible-tmp-1615758730.9422395-624299-194978499025300="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300 `" ) && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810 `" && echo ansible-tmp-1615758730.945759-624328-198677189881810="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810 `" ) && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpsppl_8rz TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py
00:00:09 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpo3yr1mxu TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py
00:00:09 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py && sleep 0'
00:00:09 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812 `" && echo ansible-tmp-1615758730.9828703-624333-60079952062812="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812 `" ) && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpsql_f_ro TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py
00:00:09 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py && sleep 0'
00:00:10 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local6] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.002250",
00:00:10 "end": "2021-03-14 21:52:11.096291",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.094041",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local7] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.001954",
00:00:10 "end": "2021-03-14 21:52:11.117494",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.115540",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local8] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.002129",
00:00:10 "end": "2021-03-14 21:52:11.146187",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.144058",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 META: ran handlers
00:00:10 META: ran handlers
00:00:10
00:00:10 PLAY RECAP *********************************************************************
00:00:10 local1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local6 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local8 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10
```
|
https://github.com/ansible/ansible/issues/73899
|
https://github.com/ansible/ansible/pull/73927
|
561cdf3ace593a0d0285cc3e36baaf807238c023
|
78f34786dd468c42d7a222468685590207e74679
| 2021-03-14T21:56:53Z |
python
| 2021-03-18T19:12:29Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries in this list match a fact name exactly or as a string prefix;
# regular expressions are not accepted
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
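# A minimal illustration of the prefix rule above (hedged, using a real fact
# name): 'discovered_interpreter_python', set by interpreter discovery, starts
# with 'discovered_interpreter_' and so is always copied to the delegate_to
# host; a key such as 'ansible_hostname' matches no prefix and follows the
# normal delegate_facts behaviour.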
class StrategySentinel:
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar):
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
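# A short sketch of how post_process_whens is typically driven (values are
# hypothetical; 'templar' is assumed to already hold the task's variables,
# including a registered variable 'out'):
#
#     result = {'changed': True, 'failed': False}
#     # with task.changed_when == ['false'] the Conditional forces
#     # result['changed'] to False; with task.failed_when == ['out.rc != 0']
#     # both result['failed'] and result['failed_when_result'] are set from
#     # that expression.
#     post_process_whens(result, task, templar)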
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
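# The decorator above is what drives the interactive task debugger. A hedged
# reminder of how it is reached from the playbook side: a task that sets
#
#     debugger: on_failed
#
# (or a run with ANSIBLE_ENABLE_TASK_DEBUGGER=True, i.e. C.ENABLE_TASK_DEBUGGER)
# makes result.needs_debugger() true for a failing result, dropping execution
# into the Debugger cmdloop defined at the bottom of this file.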
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
# return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def get_original_host(host_name):
# FIXME: this should not need x2 _inventory
host_name = to_text(host_name)
if host_name in self._inventory.hosts:
return self._inventory.hosts[host_name]
else:
return self._inventory.get_host(host_name)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
# get the original host and task. We then assign them to the TaskResult for use in callbacks/etc.
original_host = get_original_host(task_result._host)
queue_cache_entry = (original_host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._host = original_host
task_result._task = original_task
# send callbacks for 'non final' results
if '_ansible_retry' in task_result._result:
self._tqm.send_callback('v2_runner_retry', task_result)
continue
elif '_ansible_item_result' in task_result._result:
if task_result.is_failed() or task_result.is_unreachable():
self._tqm.send_callback('v2_runner_item_on_failed', task_result)
elif task_result.is_skipped():
self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
else:
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
self._tqm.send_callback('v2_runner_item_on_ok', task_result)
continue
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
state, _ = iterator.get_next_task_for_host(h, peek=True)
iterator.mark_host_failed(h)
state, new_task = iterator.get_next_task_for_host(h, peek=True)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=original_task.serialize(),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
# find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
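# A hedged sketch of the 'non final' retry path handled near the top of this
# method: while a task still has retries left, the worker ships an
# intermediate result shaped roughly like
#
#     {'_ansible_retry': True, 'retries': 3, 'attempts': 1, ...}
#
# which only fires the v2_runner_retry callback and is then skipped with
# 'continue'; stats and host state are not touched until the final result,
# without '_ansible_retry', arrives.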
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
# pop tags out of the include args, if they were specified there, and assign
# them to the include. If the include already had tags specified, we raise an
# error so that users know not to specify them both ways
tags = included_file._task.vars.pop('tags', [])
if isinstance(tags, string_types):
tags = tags.split(',')
if len(tags) > 0:
if len(included_file._task.tags) > 0:
raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). "
"Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement",
obj=included_file._task._ds)
display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
version='2.12', collection_name='ansible.builtin')
included_file._task.tags = tags
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts according to the strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
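# A hedged sketch of playbook syntax that reaches _execute_meta (YAML shown as
# comments; the bare word after 'meta:' is what arrives in _raw_params above):
#
#     - meta: flush_handlers
#     - meta: end_host
#       when: ansible_facts['os_family'] == 'Debian'
#
# Per the warning logic above, 'flush_handlers' ignores any 'when' while
# 'end_host' evaluates it per host.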
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,899 |
v2_runner_retry callbacks do not fire until next batch of hosts is started
|
### Summary
Executing a play with N hosts, when N > batch size, causes v2_runner_retry callbacks to be delayed until the following batch begins. The callbacks all fire at the same time, not as individual retry results are delivered by TaskExecutor(). I've observed this on Ansible 2.8, 2.9, 2.10, 3.0 and current devel, across CPython 2.7, 3.6, 3.8, 3.9 and 3.10 alpha.
### Issue Type
Bug Report
### Component Name
ansible.plugins.strategy.linear
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.11.0b1.post0] (devel 9ec4e08534) last updated 2021/03/14 21:31:05 (GMT +100)
config file = /home/alex/.ansible.cfg
configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alex/src/ansible/lib/ansible
ansible collection location = /home/alex/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alex/src/ansible/bin/ansible
python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed | cat
```
### OS / Environment
Ubuntu 20.04 x86_64, Python 3.8
### Steps to Reproduce
```yaml
- hosts: locals
gather_facts: false
connection: local
vars:
ansible_python_interpreter: python3
tasks:
- name: Command
command: "true"
retries: 3
delay: 2
register: result
until: result.attempts == 3
changed_when: false
```
```yaml
locals:
hosts:
local[1:8]:
vars:
connection: local
```
### Expected Results
The first v2_runner_retry arrives after 2 seconds (the configured `delay`), the next 2 seconds after that, and so on.
### Actual Results
All v2_runner_retry events from the first batch of hosts arrive at t=5s, the same moment that v2_runner_on_ok arrives.
```console
Β± ansible-playbook -i inventory.yml playbook.yml | ts -s
00:00:00
00:00:00 PLAY [locals] ******************************************************************
00:00:00
00:00:00 TASK [Command] *****************************************************************
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 FAILED - RETRYING: Command (2 retries left).
00:00:05 ok: [local1]
00:00:05 ok: [local4]
00:00:05 ok: [local3]
00:00:05 ok: [local2]
00:00:05 ok: [local5]
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:05 FAILED - RETRYING: Command (3 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:08 FAILED - RETRYING: Command (2 retries left).
00:00:10 ok: [local6]
00:00:10 ok: [local7]
00:00:10 ok: [local8]
00:00:10
00:00:10 PLAY RECAP *********************************************************************
00:00:10 local1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local6 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local8 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10
```
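A minimal way to timestamp the retry callbacks directly, without piping through `ts`, is a small aggregate callback plugin dropped into a `callback_plugins/` directory beside the playbook (a hedged sketch; the plugin name and output format are illustrative, not part of the original report):
```python
# callback_plugins/retry_timer.py -- prints a wall-clock offset for every
# v2_runner_retry event so the batching delay described above is visible.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import time

from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'retry_timer'

    def __init__(self):
        super(CallbackModule, self).__init__()
        self._start = time.time()

    def v2_runner_retry(self, result):
        # With the expected behaviour these lines appear roughly `delay`
        # seconds apart; with the bug, the first batch prints together at ~5s.
        self._display.display('retry %s at t=%.1fs' % (
            result._host.get_name(), time.time() - self._start))
```
Run the same command as before (`ansible-playbook -i inventory.yml playbook.yml`); callbacks in a playbook-adjacent `callback_plugins/` directory should be picked up automatically.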
<details>
<summary>Full -vvvv output (click to expand)</summary>
```console
Β± ansible-playbook -i inventory.yml playbook.yml -vvvv | ts -s
00:00:00 ansible-playbook [core 2.11.0b1.post0] (devel 9ec4e08534) last updated 2021/03/14 21:31:05 (GMT +100)
00:00:00 config file = /home/alex/.ansible.cfg
00:00:00 configured module search path = ['/home/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
00:00:00 ansible python module location = /home/alex/src/ansible/lib/ansible
00:00:00 ansible collection location = /home/alex/.ansible/collections:/usr/share/ansible/collections
00:00:00 executable location = /home/alex/src/ansible/bin/ansible-playbook
00:00:00 python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
00:00:00 jinja version = 2.11.3
00:00:00 libyaml = True
00:00:00 Using /home/alex/.ansible.cfg as config file
00:00:00 setting up inventory plugins
00:00:00 host_list declined parsing /home/alex/src/ansible/inventory.yml as it did not pass its verify_file() method
00:00:00 script declined parsing /home/alex/src/ansible/inventory.yml as it did not pass its verify_file() method
00:00:00 Parsed /home/alex/src/ansible/inventory.yml inventory source with yaml plugin
00:00:00 Loading callback plugin default of type stdout, v2.0 from /home/alex/src/ansible/lib/ansible/plugins/callback/default.py
00:00:00 Skipping callback 'default', as we already have a stdout callback.
00:00:00 Skipping callback 'minimal', as we already have a stdout callback.
00:00:00 Skipping callback 'oneline', as we already have a stdout callback.
00:00:00
00:00:00 PLAYBOOK: playbook.yml *********************************************************
00:00:00 Positional arguments: playbook.yml
00:00:00 verbosity: 4
00:00:00 connection: smart
00:00:00 timeout: 10
00:00:00 become_method: sudo
00:00:00 tags: ('all',)
00:00:00 inventory: ('/home/alex/src/ansible/inventory.yml',)
00:00:00 forks: 5
00:00:00 1 plays in playbook.yml
00:00:00
00:00:00 PLAY [locals] ******************************************************************
00:00:00 META: ran handlers
00:00:00
00:00:00 TASK [Command] *****************************************************************
00:00:00 task path: /home/alex/src/ansible/playbook.yml:8
00:00:00 <local1> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822 `" && echo ansible-tmp-1615758721.8470132-623938-280690236739822="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822 `" ) && sleep 0'
00:00:00 <local2> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local3> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342 `" && echo ansible-tmp-1615758721.8523452-623939-208256383170342="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342 `" ) && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611 `" && echo ansible-tmp-1615758721.8563304-623943-272057805295611="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611 `" ) && sleep 0'
00:00:00 <local4> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local5> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:00 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080 `" && echo ansible-tmp-1615758721.8669798-623952-36636581744080="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080 `" ) && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692 `" && echo ansible-tmp-1615758721.8724117-623970-133458959317692="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692 `" ) && sleep 0'
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpx3b9gtjo TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:00 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py && sleep 0'
00:00:00 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpng_lhkvs TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py
00:00:00 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1jhr8m0b TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py
00:00:00 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpq0kp8apk TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py
00:00:00 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpui9244cx TO /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py
00:00:00 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/ /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py && sleep 0'
00:00:00 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/AnsiballZ_command.py && sleep 0'
00:00:00 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/AnsiballZ_command.py && sleep 0'
00:00:00 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/AnsiballZ_command.py && sleep 0'
00:00:00 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/AnsiballZ_command.py && sleep 0'
00:00:00 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/AnsiballZ_command.py && sleep 0'
00:00:01 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8523452-623939-208256383170342/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8669798-623952-36636581744080/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8563304-623943-272057805295611/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8470132-623938-280690236739822/ > /dev/null 2>&1 && sleep 0'
00:00:01 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758721.8724117-623970-133458959317692/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013 `" && echo ansible-tmp-1615758724.133097-623943-134225245942013="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013 `" ) && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513 `" && echo ansible-tmp-1615758724.1334202-623939-32486568572513="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513 `" ) && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494 `" && echo ansible-tmp-1615758724.1342025-623952-225237159473494="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494 `" ) && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpxczs__uw TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py
00:00:03 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmprld83jd8 TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py
00:00:03 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpllh03pi7 TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py
00:00:03 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/AnsiballZ_command.py && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/AnsiballZ_command.py && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/AnsiballZ_command.py && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411 `" && echo ansible-tmp-1615758724.182092-623938-153700129389411="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411 `" ) && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178 `" && echo ansible-tmp-1615758724.1917582-623970-217763781705178="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178 `" ) && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpwrq_y32t TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py
00:00:03 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/AnsiballZ_command.py && sleep 0'
00:00:03 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:03 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpd9rqeqzs TO /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py
00:00:03 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/ /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/AnsiballZ_command.py && sleep 0'
00:00:03 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1334202-623939-32486568572513/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1917582-623970-217763781705178/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.182092-623938-153700129389411/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.133097-623943-134225245942013/ > /dev/null 2>&1 && sleep 0'
00:00:03 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758724.1342025-623952-225237159473494/ > /dev/null 2>&1 && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447 `" && echo ansible-tmp-1615758726.323465-623939-61008003621447="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local2> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmputs2x9ix TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py
00:00:05 <local2> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144 `" && echo ansible-tmp-1615758726.3784077-623970-207559000874144="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144 `" ) && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065 `" && echo ansible-tmp-1615758726.37912-623943-197158363936065="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065 `" ) && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564 `" && echo ansible-tmp-1615758726.3797214-623938-273756783496564="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564 `" ) && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717 `" && echo ansible-tmp-1615758726.3799593-623952-179250791919717="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local1> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpz3zy9vq5 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py
00:00:05 <local3> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpznwnj_r2 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py
00:00:05 <local1> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local4> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp6kxublz5 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py
00:00:05 <local4> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local1> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1ck6sguq TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py
00:00:05 <local3> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/AnsiballZ_command.py && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/AnsiballZ_command.py && sleep 0'
00:00:05 <local2> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.323465-623939-61008003621447/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002349",
00:00:05 "end": "2021-03-14 21:52:02.101688",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.099339",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002415",
00:00:05 "end": "2021-03-14 21:52:02.101400",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.098985",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002612",
00:00:05 "end": "2021-03-14 21:52:02.100329",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.097717",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002716",
00:00:05 "end": "2021-03-14 21:52:02.151676",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.148960",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002320",
00:00:05 "end": "2021-03-14 21:52:02.156925",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:02.154605",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002502",
00:00:05 "end": "2021-03-14 21:52:04.291273",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.288771",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002413",
00:00:05 "end": "2021-03-14 21:52:04.345135",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.342722",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003319",
00:00:05 "end": "2021-03-14 21:52:04.346062",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.342743",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002964",
00:00:05 "end": "2021-03-14 21:52:04.344021",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.341057",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:05 "attempts": 2,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003045",
00:00:05 "end": "2021-03-14 21:52:04.344102",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:04.341057",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 ok: [local2] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002413",
00:00:05 "end": "2021-03-14 21:52:06.491124",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.488711",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545 `" && echo ansible-tmp-1615758726.52976-624299-143253575445545="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545 `" ) && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmprsq3rk5b TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py
00:00:05 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py && sleep 0'
00:00:05 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/AnsiballZ_command.py && sleep 0'
00:00:05 <local4> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3799593-623952-179250791919717/ > /dev/null 2>&1 && sleep 0'
00:00:05 <local5> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3784077-623970-207559000874144/ > /dev/null 2>&1 && sleep 0'
00:00:05 ok: [local4] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002178",
00:00:05 "end": "2021-03-14 21:52:06.546979",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.544801",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 ok: [local5] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003272",
00:00:05 "end": "2021-03-14 21:52:06.559002",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.555730",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848 `" && echo ansible-tmp-1615758726.5958605-624328-160875916588848="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848 `" ) && sleep 0'
00:00:05 <local8> ESTABLISH LOCAL CONNECTION FOR USER: alex
00:00:05 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:05 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332 `" && echo ansible-tmp-1615758726.6032434-624333-29578010259332="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332 `" ) && sleep 0'
00:00:05 <local3> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.37912-623943-197158363936065/ > /dev/null 2>&1 && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpcktq_nij TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py
00:00:05 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py && sleep 0'
00:00:05 <local1> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.3797214-623938-273756783496564/ > /dev/null 2>&1 && sleep 0'
00:00:05 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:05 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/AnsiballZ_command.py && sleep 0'
00:00:05 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp1gdxvsr9 TO /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py
00:00:05 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/ /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py && sleep 0'
00:00:05 ok: [local3] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002869",
00:00:05 "end": "2021-03-14 21:52:06.593703",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.590834",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/AnsiballZ_command.py && sleep 0'
00:00:05 ok: [local1] => {
00:00:05 "attempts": 3,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.003459",
00:00:05 "end": "2021-03-14 21:52:06.599444",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "start": "2021-03-14 21:52:06.595985",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.52976-624299-143253575445545/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002021",
00:00:05 "end": "2021-03-14 21:52:06.717127",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.715106",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.5958605-624328-160875916588848/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.002013",
00:00:05 "end": "2021-03-14 21:52:06.744519",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.742506",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:05 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758726.6032434-624333-29578010259332/ > /dev/null 2>&1 && sleep 0'
00:00:05 FAILED - RETRYING: Command (3 retries left).Result was: {
00:00:05 "attempts": 1,
00:00:05 "changed": false,
00:00:05 "cmd": [
00:00:05 "true"
00:00:05 ],
00:00:05 "delta": "0:00:00.001962",
00:00:05 "end": "2021-03-14 21:52:06.780870",
00:00:05 "invocation": {
00:00:05 "module_args": {
00:00:05 "_raw_params": "true",
00:00:05 "_uses_shell": false,
00:00:05 "argv": null,
00:00:05 "chdir": null,
00:00:05 "creates": null,
00:00:05 "executable": null,
00:00:05 "removes": null,
00:00:05 "stdin": null,
00:00:05 "stdin_add_newline": true,
00:00:05 "strip_empty_ends": true,
00:00:05 "warn": false
00:00:05 }
00:00:05 },
00:00:05 "rc": 0,
00:00:05 "retries": 4,
00:00:05 "start": "2021-03-14 21:52:06.778908",
00:00:05 "stderr": "",
00:00:05 "stderr_lines": [],
00:00:05 "stdout": "",
00:00:05 "stdout_lines": []
00:00:05 }
00:00:07 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530 `" && echo ansible-tmp-1615758728.7470558-624299-88066478691530="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530 `" ) && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmptry4pngp TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py
00:00:07 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160 `" && echo ansible-tmp-1615758728.7725587-624328-734150765160="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160 `" ) && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/AnsiballZ_command.py && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpvmbdiifq TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py
00:00:07 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/AnsiballZ_command.py && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478 `" && echo ansible-tmp-1615758728.8074439-624333-208836372401478="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478 `" ) && sleep 0'
00:00:07 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:07 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmp38xldgse TO /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py
00:00:07 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/ /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py && sleep 0'
00:00:07 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/AnsiballZ_command.py && sleep 0'
00:00:07 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.7470558-624299-88066478691530/ > /dev/null 2>&1 && sleep 0'
00:00:07 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.7725587-624328-734150765160/ > /dev/null 2>&1 && sleep 0'
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002026",
00:00:07 "end": "2021-03-14 21:52:08.912963",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.910937",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002287",
00:00:07 "end": "2021-03-14 21:52:08.917332",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.915045",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:07 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758728.8074439-624333-208836372401478/ > /dev/null 2>&1 && sleep 0'
00:00:07 FAILED - RETRYING: Command (2 retries left).Result was: {
00:00:07 "attempts": 2,
00:00:07 "changed": false,
00:00:07 "cmd": [
00:00:07 "true"
00:00:07 ],
00:00:07 "delta": "0:00:00.002045",
00:00:07 "end": "2021-03-14 21:52:08.958693",
00:00:07 "invocation": {
00:00:07 "module_args": {
00:00:07 "_raw_params": "true",
00:00:07 "_uses_shell": false,
00:00:07 "argv": null,
00:00:07 "chdir": null,
00:00:07 "creates": null,
00:00:07 "executable": null,
00:00:07 "removes": null,
00:00:07 "stdin": null,
00:00:07 "stdin_add_newline": true,
00:00:07 "strip_empty_ends": true,
00:00:07 "warn": false
00:00:07 }
00:00:07 },
00:00:07 "rc": 0,
00:00:07 "retries": 4,
00:00:07 "start": "2021-03-14 21:52:08.956648",
00:00:07 "stderr": "",
00:00:07 "stderr_lines": [],
00:00:07 "stdout": "",
00:00:07 "stdout_lines": []
00:00:07 }
00:00:09 <local6> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local6> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300 `" && echo ansible-tmp-1615758730.9422395-624299-194978499025300="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300 `" ) && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810 `" && echo ansible-tmp-1615758730.945759-624328-198677189881810="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810 `" ) && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local6> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpsppl_8rz TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py
00:00:09 <local6> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local7> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpo3yr1mxu TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py
00:00:09 <local7> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py && sleep 0'
00:00:09 <local6> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/AnsiballZ_command.py && sleep 0'
00:00:09 <local7> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/AnsiballZ_command.py && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c 'echo ~alex && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/alex/.ansible/tmp `"&& mkdir "` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812 `" && echo ansible-tmp-1615758730.9828703-624333-60079952062812="` echo /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812 `" ) && sleep 0'
00:00:09 Using module file /home/alex/src/ansible/lib/ansible/modules/command.py
00:00:09 <local8> PUT /home/alex/.ansible/tmp/ansible-local-62393288agakic/tmpsql_f_ro TO /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py
00:00:09 <local8> EXEC /bin/sh -c 'chmod u+x /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/ /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py && sleep 0'
00:00:09 <local8> EXEC /bin/sh -c 'python3 /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/AnsiballZ_command.py && sleep 0'
00:00:10 <local6> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.9422395-624299-194978499025300/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local6] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.002250",
00:00:10 "end": "2021-03-14 21:52:11.096291",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.094041",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 <local7> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.945759-624328-198677189881810/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local7] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.001954",
00:00:10 "end": "2021-03-14 21:52:11.117494",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.115540",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 <local8> EXEC /bin/sh -c 'rm -f -r /home/alex/.ansible/tmp/ansible-tmp-1615758730.9828703-624333-60079952062812/ > /dev/null 2>&1 && sleep 0'
00:00:10 ok: [local8] => {
00:00:10 "attempts": 3,
00:00:10 "changed": false,
00:00:10 "cmd": [
00:00:10 "true"
00:00:10 ],
00:00:10 "delta": "0:00:00.002129",
00:00:10 "end": "2021-03-14 21:52:11.146187",
00:00:10 "invocation": {
00:00:10 "module_args": {
00:00:10 "_raw_params": "true",
00:00:10 "_uses_shell": false,
00:00:10 "argv": null,
00:00:10 "chdir": null,
00:00:10 "creates": null,
00:00:10 "executable": null,
00:00:10 "removes": null,
00:00:10 "stdin": null,
00:00:10 "stdin_add_newline": true,
00:00:10 "strip_empty_ends": true,
00:00:10 "warn": false
00:00:10 }
00:00:10 },
00:00:10 "rc": 0,
00:00:10 "start": "2021-03-14 21:52:11.144058",
00:00:10 "stderr": "",
00:00:10 "stderr_lines": [],
00:00:10 "stdout": "",
00:00:10 "stdout_lines": []
00:00:10 }
00:00:10 META: ran handlers
00:00:10 META: ran handlers
00:00:10
00:00:10 PLAY RECAP *********************************************************************
00:00:10 local1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local4 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local5 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local6 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10 local8 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
00:00:10
```
|
https://github.com/ansible/ansible/issues/73899
|
https://github.com/ansible/ansible/pull/73927
|
561cdf3ace593a0d0285cc3e36baaf807238c023
|
78f34786dd468c42d7a222468685590207e74679
| 2021-03-14T21:56:53Z |
python
| 2021-03-18T19:12:29Z |
test/units/plugins/strategy/test_strategy.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from units.mock.loader import DictDataLoader
from copy import deepcopy
import uuid

from units.compat import unittest
from units.compat.mock import patch, MagicMock

from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.executor.task_result import TaskResult
from ansible.inventory.host import Host
from ansible.module_utils.six.moves import queue as Queue
from ansible.playbook.handler import Handler
from ansible.plugins.strategy import StrategyBase

class TestStrategyBase(unittest.TestCase):
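    # Most tests below fake the TQM's multiprocessing result queue with a
    # plain list plus three closures, so results can be pushed and drained
    # without spawning real worker processes.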
def test_strategy_base_init(self):
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_tqm = MagicMock(TaskQueueManager)
mock_tqm._final_q = mock_queue
mock_tqm._workers = []
strategy_base = StrategyBase(tqm=mock_tqm)
strategy_base.cleanup()
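
    # run() should map TaskQueueManager state to the right RUN_* code:
    # RUN_OK by default, a passed-in error code, RUN_FAILED_HOSTS when the
    # TQM has failed hosts, and RUN_UNREACHABLE_HOSTS taking precedence
    # when unreachable hosts are present.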
def test_strategy_base_run(self):
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_tqm = MagicMock(TaskQueueManager)
mock_tqm._final_q = mock_queue
mock_tqm._stats = MagicMock()
mock_tqm.send_callback.return_value = None
for attr in ('RUN_OK', 'RUN_ERROR', 'RUN_FAILED_HOSTS', 'RUN_UNREACHABLE_HOSTS'):
setattr(mock_tqm, attr, getattr(TaskQueueManager, attr))
mock_iterator = MagicMock()
mock_iterator._play = MagicMock()
mock_iterator._play.handlers = []
mock_play_context = MagicMock()
mock_tqm._failed_hosts = dict()
mock_tqm._unreachable_hosts = dict()
mock_tqm._workers = []
strategy_base = StrategyBase(tqm=mock_tqm)
mock_host = MagicMock()
mock_host.name = 'host1'
self.assertEqual(strategy_base.run(iterator=mock_iterator, play_context=mock_play_context), mock_tqm.RUN_OK)
self.assertEqual(strategy_base.run(iterator=mock_iterator, play_context=mock_play_context, result=TaskQueueManager.RUN_ERROR), mock_tqm.RUN_ERROR)
mock_tqm._failed_hosts = dict(host1=True)
mock_iterator.get_failed_hosts.return_value = [mock_host]
self.assertEqual(strategy_base.run(iterator=mock_iterator, play_context=mock_play_context, result=False), mock_tqm.RUN_FAILED_HOSTS)
mock_tqm._unreachable_hosts = dict(host1=True)
mock_iterator.get_failed_hosts.return_value = []
self.assertEqual(strategy_base.run(iterator=mock_iterator, play_context=mock_play_context, result=False), mock_tqm.RUN_UNREACHABLE_HOSTS)
strategy_base.cleanup()
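
    # get_hosts_remaining() and get_failed_hosts() must filter the play's
    # host list against the TQM's failed and unreachable host maps.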
def test_strategy_base_get_hosts(self):
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_hosts = []
for i in range(0, 5):
mock_host = MagicMock()
mock_host.name = "host%02d" % (i + 1)
mock_host.has_hostkey = True
mock_hosts.append(mock_host)
mock_hosts_names = [h.name for h in mock_hosts]
mock_inventory = MagicMock()
mock_inventory.get_hosts.return_value = mock_hosts
mock_tqm = MagicMock()
mock_tqm._final_q = mock_queue
mock_tqm.get_inventory.return_value = mock_inventory
mock_play = MagicMock()
mock_play.hosts = ["host%02d" % (i + 1) for i in range(0, 5)]
strategy_base = StrategyBase(tqm=mock_tqm)
strategy_base._hosts_cache = strategy_base._hosts_cache_all = mock_hosts_names
mock_tqm._failed_hosts = []
mock_tqm._unreachable_hosts = []
self.assertEqual(strategy_base.get_hosts_remaining(play=mock_play), [h.name for h in mock_hosts])
mock_tqm._failed_hosts = ["host01"]
self.assertEqual(strategy_base.get_hosts_remaining(play=mock_play), [h.name for h in mock_hosts[1:]])
self.assertEqual(strategy_base.get_failed_hosts(play=mock_play), [mock_hosts[0].name])
mock_tqm._unreachable_hosts = ["host02"]
self.assertEqual(strategy_base.get_hosts_remaining(play=mock_play), [h.name for h in mock_hosts[2:]])
strategy_base.cleanup()
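
    # _queue_task() hands work to WorkerProcess instances; WorkerProcess.run
    # is patched out so nothing actually executes.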
@patch.object(WorkerProcess, 'run')
def test_strategy_base_queue_task(self, mock_worker):
def fake_run(self):
return
mock_worker.run.side_effect = fake_run
fake_loader = DictDataLoader()
mock_var_manager = MagicMock()
mock_host = MagicMock()
mock_host.get_vars.return_value = dict()
mock_host.has_hostkey = True
mock_inventory = MagicMock()
mock_inventory.get.return_value = mock_host
tqm = TaskQueueManager(
inventory=mock_inventory,
variable_manager=mock_var_manager,
loader=fake_loader,
passwords=None,
forks=3,
)
tqm._initialize_processes(3)
tqm.hostvars = dict()
mock_task = MagicMock()
mock_task._uuid = 'abcd'
mock_task.throttle = 0
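
        # Queue three tasks against a 3-fork TQM: the worker cursor should
        # advance to 1, then 2, then wrap back to 0, with the pending-results
        # counter climbing to three.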
try:
strategy_base = StrategyBase(tqm=tqm)
strategy_base._queue_task(host=mock_host, task=mock_task, task_vars=dict(), play_context=MagicMock())
self.assertEqual(strategy_base._cur_worker, 1)
self.assertEqual(strategy_base._pending_results, 1)
strategy_base._queue_task(host=mock_host, task=mock_task, task_vars=dict(), play_context=MagicMock())
self.assertEqual(strategy_base._cur_worker, 2)
self.assertEqual(strategy_base._pending_results, 2)
strategy_base._queue_task(host=mock_host, task=mock_task, task_vars=dict(), play_context=MagicMock())
self.assertEqual(strategy_base._cur_worker, 0)
self.assertEqual(strategy_base._pending_results, 3)
finally:
tqm.cleanup()
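
    # _wait_on_pending_results() is exercised against a series of queued
    # TaskResults covering the main result-handling branches.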
def test_strategy_base_process_pending_results(self):
mock_tqm = MagicMock()
mock_tqm._terminated = False
mock_tqm._failed_hosts = dict()
mock_tqm._unreachable_hosts = dict()
mock_tqm.send_callback.return_value = None
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_tqm._final_q = mock_queue
mock_tqm._stats = MagicMock()
mock_tqm._stats.increment.return_value = None
mock_play = MagicMock()
mock_host = MagicMock()
mock_host.name = 'test01'
mock_host.vars = dict()
mock_host.get_vars.return_value = dict()
mock_host.has_hostkey = True
mock_task = MagicMock()
mock_task._role = None
mock_task._parent = None
mock_task.ignore_errors = False
mock_task.ignore_unreachable = False
mock_task._uuid = uuid.uuid4()
mock_task.loop = None
mock_task.copy.return_value = mock_task
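
        # A real Handler object is wired into the play's handler blocks so
        # the _ansible_notify scenario at the end can find 'test handler'.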
mock_handler_task = Handler()
mock_handler_task.name = 'test handler'
mock_handler_task.action = 'foo'
mock_handler_task._parent = None
mock_handler_task._uuid = 'xxxxxxxxxxxxx'
mock_iterator = MagicMock()
mock_iterator._play = mock_play
mock_iterator.mark_host_failed.return_value = None
mock_iterator.get_next_task_for_host.return_value = (None, None)
mock_handler_block = MagicMock()
mock_handler_block.block = [mock_handler_task]
mock_handler_block.rescue = []
mock_handler_block.always = []
mock_play.handlers = [mock_handler_block]
mock_group = MagicMock()
mock_group.add_host.return_value = None
def _get_host(host_name):
if host_name == 'test01':
return mock_host
return None
def _get_group(group_name):
if group_name in ('all', 'foo'):
return mock_group
return None
mock_inventory = MagicMock()
mock_inventory._hosts_cache = dict()
mock_inventory.hosts.return_value = mock_host
mock_inventory.get_host.side_effect = _get_host
mock_inventory.get_group.side_effect = _get_group
mock_inventory.clear_pattern_cache.return_value = None
mock_inventory.get_host_vars.return_value = {}
mock_inventory.hosts.get.return_value = mock_host
mock_var_mgr = MagicMock()
mock_var_mgr.set_host_variable.return_value = None
mock_var_mgr.set_host_facts.return_value = None
mock_var_mgr.get_vars.return_value = dict()
strategy_base = StrategyBase(tqm=mock_tqm)
strategy_base._inventory = mock_inventory
strategy_base._variable_manager = mock_var_mgr
strategy_base._blocked_hosts = dict()
def _has_dead_workers():
return False
strategy_base._tqm.has_dead_workers.side_effect = _has_dead_workers
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 0)
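
        # A plain changed result: it is returned, the pending counter drops
        # to zero, and the host is unblocked.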
task_result = TaskResult(host=mock_host.name, task=mock_task._uuid, return_data=dict(changed=True))
queue_items.append(task_result)
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
mock_queued_task_cache = {
(mock_host.name, mock_task._uuid): {
'task': mock_task,
'host': mock_host,
'task_vars': {},
'play_context': {},
}
}
strategy_base._queued_task_cache = deepcopy(mock_queued_task_cache)
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(results[0], task_result)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
task_result = TaskResult(host=mock_host.name, task=mock_task._uuid, return_data='{"failed":true}')
queue_items.append(task_result)
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
mock_iterator.is_failed.return_value = True
strategy_base._queued_task_cache = deepcopy(mock_queued_task_cache)
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(results[0], task_result)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
# self.assertIn('test01', mock_tqm._failed_hosts)
# del mock_tqm._failed_hosts['test01']
mock_iterator.is_failed.return_value = False
task_result = TaskResult(host=mock_host.name, task=mock_task._uuid, return_data='{"unreachable": true}')
queue_items.append(task_result)
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = deepcopy(mock_queued_task_cache)
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(results[0], task_result)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
self.assertIn('test01', mock_tqm._unreachable_hosts)
del mock_tqm._unreachable_hosts['test01']
task_result = TaskResult(host=mock_host.name, task=mock_task._uuid, return_data='{"skipped": true}')
queue_items.append(task_result)
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = deepcopy(mock_queued_task_cache)
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(results[0], task_result)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
queue_items.append(TaskResult(host=mock_host.name, task=mock_task._uuid, return_data=dict(add_host=dict(host_name='newhost01', new_groups=['foo']))))
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = deepcopy(mock_queued_task_cache)
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
queue_items.append(TaskResult(host=mock_host.name, task=mock_task._uuid, return_data=dict(add_group=dict(group_name='foo'))))
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = deepcopy(mock_queued_task_cache)
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
queue_items.append(TaskResult(host=mock_host.name, task=mock_task._uuid, return_data=dict(changed=True, _ansible_notify=['test handler'])))
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = deepcopy(mock_queued_task_cache)
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
self.assertTrue(mock_handler_task.is_host_notified(mock_host))
# queue_items.append(('set_host_var', mock_host, mock_task, None, 'foo', 'bar'))
# results = strategy_base._process_pending_results(iterator=mock_iterator)
# self.assertEqual(len(results), 0)
# self.assertEqual(strategy_base._pending_results, 1)
# queue_items.append(('set_host_facts', mock_host, mock_task, None, 'foo', dict()))
# results = strategy_base._process_pending_results(iterator=mock_iterator)
# self.assertEqual(len(results), 0)
# self.assertEqual(strategy_base._pending_results, 1)
# queue_items.append(('bad'))
# self.assertRaises(AnsibleError, strategy_base._process_pending_results, iterator=mock_iterator)
strategy_base.cleanup()
def test_strategy_base_load_included_file(self):
fake_loader = DictDataLoader({
"test.yml": """
- debug: msg='foo'
""",
"bad.yml": """
""",
})
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_tqm = MagicMock()
mock_tqm._final_q = mock_queue
strategy_base = StrategyBase(tqm=mock_tqm)
strategy_base._loader = fake_loader
strategy_base.cleanup()
mock_play = MagicMock()
mock_block = MagicMock()
mock_block._play = mock_play
mock_block.vars = dict()
mock_task = MagicMock()
mock_task._block = mock_block
mock_task._role = None
mock_task._parent = None
mock_iterator = MagicMock()
mock_iterator.mark_host_failed.return_value = None
mock_inc_file = MagicMock()
mock_inc_file._task = mock_task
mock_inc_file._filename = "test.yml"
res = strategy_base._load_included_file(included_file=mock_inc_file, iterator=mock_iterator)
mock_inc_file._filename = "bad.yml"
res = strategy_base._load_included_file(included_file=mock_inc_file, iterator=mock_iterator)
self.assertEqual(res, [])
@patch.object(WorkerProcess, 'run')
def test_strategy_base_run_handlers(self, mock_worker):
def fake_run(*args):
return
mock_worker.side_effect = fake_run
mock_play_context = MagicMock()
mock_handler_task = Handler()
mock_handler_task.action = 'foo'
mock_handler_task.cached_name = False
mock_handler_task.name = "test handler"
mock_handler_task.listen = []
mock_handler_task._role = None
mock_handler_task._parent = None
mock_handler_task._uuid = 'xxxxxxxxxxxxxxxx'
mock_handler = MagicMock()
mock_handler.block = [mock_handler_task]
mock_handler.flag_for_host.return_value = False
mock_play = MagicMock()
mock_play.handlers = [mock_handler]
mock_host = MagicMock(Host)
mock_host.name = "test01"
mock_host.has_hostkey = True
mock_inventory = MagicMock()
mock_inventory.get_hosts.return_value = [mock_host]
mock_inventory.get.return_value = mock_host
mock_inventory.get_host.return_value = mock_host
mock_var_mgr = MagicMock()
mock_var_mgr.get_vars.return_value = dict()
mock_iterator = MagicMock()
mock_iterator._play = mock_play
fake_loader = DictDataLoader()
tqm = TaskQueueManager(
inventory=mock_inventory,
variable_manager=mock_var_mgr,
loader=fake_loader,
passwords=None,
forks=5,
)
tqm._initialize_processes(3)
tqm.hostvars = dict()
try:
strategy_base = StrategyBase(tqm=tqm)
strategy_base._inventory = mock_inventory
task_result = TaskResult(mock_host.name, mock_handler_task._uuid, dict(changed=False))
strategy_base._queued_task_cache = dict()
strategy_base._queued_task_cache[(mock_host.name, mock_handler_task._uuid)] = {
'task': mock_handler_task,
'host': mock_host,
'task_vars': {},
'play_context': mock_play_context
}
tqm._final_q.put(task_result)
result = strategy_base.run_handlers(iterator=mock_iterator, play_context=mock_play_context)
finally:
strategy_base.cleanup()
tqm.cleanup()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,372 |
"file" docs reference non-existent param in other modules
|
##### SUMMARY
See https://docs.ansible.com/ansible/latest/collections/ansible/builtin/file_module.html, where for the parameter "state" it says:
"see the touch value or the ansible.builtin.copy or ansible.builtin.template module"
Neither
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html#ansible-collections-ansible-builtin-copy-module
nor
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html#ansible-collections-ansible-builtin-template-module
has anything about "touch".
The "file" page also says "Even with other options (i.e mode), the file will be modified but will NOT be created if it does not exist", but a sentence later it describes "touch" value for this same parameter.
Perhaps the "see..." reference should be removed, and the note reworded like this?
"Even with other options (i.e mode), the file will be modified but will NOT be created if it does not exist. Use state touch to have the file created."
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
file
##### ANSIBLE VERSION
latest
|
https://github.com/ansible/ansible/issues/72372
|
https://github.com/ansible/ansible/pull/73938
|
ccd9a992cf7b7b057cfd9a2d06c69c44b7094696
|
7a55e98d299863859a532811eaa0bf5ed4c6c91d
| 2020-10-28T08:32:16Z |
python
| 2021-03-18T19:34:04Z |
lib/ansible/modules/file.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: file
version_added: historical
short_description: Manage files and file properties
extends_documentation_fragment: files
description:
- Set attributes of files, symlinks or directories.
- Alternatively, remove files, symlinks or directories.
- Many other modules support the same options as the C(file) module - including M(ansible.builtin.copy),
M(ansible.builtin.template), and M(ansible.builtin.assemble).
- For Windows targets, use the M(ansible.windows.win_file) module instead.
options:
path:
description:
- Path to the file being managed.
type: path
required: yes
aliases: [ dest, name ]
state:
description:
- If C(absent), directories will be recursively deleted, and files or symlinks will
be unlinked. In the case of a directory, if C(diff) is declared, you will see the files and folders deleted listed
under C(path_contents). Note that C(absent) will not cause C(file) to fail if the C(path) does
not exist as the state did not change.
- If C(directory), all intermediate subdirectories will be created if they
do not exist. Since Ansible 1.7 they will be created with the supplied permissions.
- If C(file), without any other options this works mostly as a 'stat' and will return the current state of C(path).
Even with other options (i.e C(mode)), the file will be modified but will NOT be created if it does not exist;
see the C(touch) value or the M(ansible.builtin.copy) or M(ansible.builtin.template) module if you want that behavior.
- If C(hard), the hard link will be created or changed.
- If C(link), the symbolic link will be created or changed.
- If C(touch) (new in 1.4), an empty file will be created if the C(path) does not
exist, while an existing file or directory will receive updated file access and
modification times (similar to the way C(touch) works from the command line).
type: str
default: file
choices: [ absent, directory, file, hard, link, touch ]
src:
description:
- Path of the file to link to.
- This applies only to C(state=link) and C(state=hard).
- For C(state=link), this will also accept a non-existing path.
- Relative paths are relative to the file being created (C(path)) which is how
the Unix command C(ln -s SRC DEST) treats relative paths.
type: path
recurse:
description:
- Recursively set the specified file attributes on directory contents.
- This applies only when C(state) is set to C(directory).
type: bool
default: no
version_added: '1.1'
force:
description:
- >
Force the creation of the symlinks in two cases: the source file does
not exist (but will appear later); the destination exists and is a file (so, we need to unlink the
C(path) file and create symlink to the C(src) file in place of it).
type: bool
default: no
follow:
description:
- This flag indicates that filesystem links, if they exist, should be followed.
- Previous to Ansible 2.5, this was C(no) by default.
type: bool
default: yes
version_added: '1.8'
modification_time:
description:
- This parameter indicates the time the file's modification time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is None meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is default for C(state=touch).
type: str
version_added: "2.7"
modification_time_format:
description:
- When used with C(modification_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
access_time:
description:
- This parameter indicates the time the file's access time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is C(None) meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is default for C(state=touch).
type: str
version_added: '2.7'
access_time_format:
description:
- When used with C(access_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
seealso:
- module: ansible.builtin.assemble
- module: ansible.builtin.copy
- module: ansible.builtin.stat
- module: ansible.builtin.template
- module: ansible.windows.win_file
notes:
- Supports C(check_mode).
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Change file ownership, group and permissions
ansible.builtin.file:
path: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Give insecure permissions to an existing file
ansible.builtin.file:
path: /work
owner: root
group: root
mode: '1777'
- name: Create a symbolic link
ansible.builtin.file:
src: /file/to/link/to
dest: /path/to/symlink
owner: foo
group: foo
state: link
- name: Create two hard links
ansible.builtin.file:
src: '/tmp/{{ item.src }}'
dest: '{{ item.dest }}'
state: hard
loop:
- { src: x, dest: y }
- { src: z, dest: k }
- name: Touch a file, using symbolic modes to set the permissions (equivalent to 0644)
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u=rw,g=r,o=r
- name: Touch the same file, but add/remove some permissions
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
- name: Touch again the same file, but do not change times; this makes the task idempotent
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
modification_time: preserve
access_time: preserve
- name: Create a directory if it does not exist
ansible.builtin.file:
path: /etc/some_directory
state: directory
mode: '0755'
- name: Update modification and access time of given file
ansible.builtin.file:
path: /etc/some_file
state: file
modification_time: now
access_time: now
- name: Set access time based on seconds from epoch value
ansible.builtin.file:
path: /etc/another_file
state: file
access_time: '{{ "%Y%m%d%H%M.%S" | strftime(stat_var.stat.atime) }}'
- name: Recursively change ownership of a directory
ansible.builtin.file:
path: /etc/foo
state: directory
recurse: yes
owner: foo
group: foo
- name: Remove file (delete file)
ansible.builtin.file:
path: /etc/foo.txt
state: absent
- name: Recursively remove directory
ansible.builtin.file:
path: /etc/foo
state: absent
'''
RETURN = r'''
dest:
description: Destination file/path, equal to the value passed to I(path).
returned: state=touch, state=hard, state=link
type: str
sample: /path/to/file.txt
path:
description: Destination file/path, equal to the value passed to I(path).
returned: state=absent, state=directory, state=file
type: str
sample: /path/to/file.txt
'''
import errno
import os
import shutil
import sys
import time
from pwd import getpwnam, getpwuid
from grp import getgrnam, getgrgid
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
# There will only be a single AnsibleModule object per module
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
def __repr__(self):
return 'AnsibleModuleError(results={0})'.format(self.results)
class ParameterError(AnsibleModuleError):
pass
class Sentinel(object):
def __new__(cls, *args, **kwargs):
return cls
def _ansible_excepthook(exc_type, exc_value, tb):
# Using an exception allows us to catch it if the calling code knows it can recover
if issubclass(exc_type, AnsibleModuleError):
module.fail_json(**exc_value.results)
else:
sys.__excepthook__(exc_type, exc_value, tb)
def additional_parameter_handling(params):
"""Additional parameter validation and reformatting"""
# When path is a directory, rewrite the pathname to be the file inside of the directory
# TODO: Why do we exclude link? Why don't we exclude directory? Should we exclude touch?
# I think this is where we want to be in the future:
# when isdir(path):
# if state == absent: Remove the directory
# if state == touch: Touch the directory
# if state == directory: Assert the directory is the same as the one specified
# if state == file: place inside of the directory (use _original_basename)
# if state == link: place inside of the directory (use _original_basename. Fallback to src?)
# if state == hard: place inside of the directory (use _original_basename. Fallback to src?)
if (params['state'] not in ("link", "absent") and os.path.isdir(to_bytes(params['path'], errors='surrogate_or_strict'))):
basename = None
if params['_original_basename']:
basename = params['_original_basename']
elif params['src']:
basename = os.path.basename(params['src'])
if basename:
params['path'] = os.path.join(params['path'], basename)
# state should default to file, but since that creates many conflicts,
# default state to 'current' when it exists.
prev_state = get_state(to_bytes(params['path'], errors='surrogate_or_strict'))
if params['state'] is None:
if prev_state != 'absent':
params['state'] = prev_state
elif params['recurse']:
params['state'] = 'directory'
else:
params['state'] = 'file'
# make sure the target path is a directory when we're doing a recursive operation
if params['recurse'] and params['state'] != 'directory':
raise ParameterError(results={"msg": "recurse option requires state to be 'directory'",
"path": params["path"]})
# Fail if 'src' but no 'state' is specified
if params['src'] and params['state'] not in ('link', 'hard'):
raise ParameterError(results={'msg': "src option requires state to be 'link' or 'hard'",
'path': params['path']})
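# For illustration, how an unspecified state is defaulted by the logic above:
#   path exists               -> state = current state of path ('file', 'directory', ...)
#   path absent, recurse=yes  -> state = 'directory'
#   path absent otherwise     -> state = 'file'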
def get_state(path):
''' Find out current state '''
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
if os.path.lexists(b_path):
if os.path.islink(b_path):
return 'link'
elif os.path.isdir(b_path):
return 'directory'
elif os.stat(b_path).st_nlink > 1:
return 'hard'
# could be many other things, but defaulting to file
return 'file'
return 'absent'
except OSError as e:
if e.errno == errno.ENOENT: # It may already have been removed
return 'absent'
else:
raise
# This should be moved into the common file utilities
def recursive_set_attributes(b_path, follow, file_args, mtime, atime):
changed = False
try:
for b_root, b_dirs, b_files in os.walk(b_path):
for b_fsobj in b_dirs + b_files:
b_fsname = os.path.join(b_root, b_fsobj)
if not os.path.islink(b_fsname):
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
else:
# Change perms on the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
if follow:
b_fsname = os.path.join(b_root, os.readlink(b_fsname))
# The link target could be nonexistent
if os.path.exists(b_fsname):
if os.path.isdir(b_fsname):
# Link is a directory so change perms on the directory's contents
changed |= recursive_set_attributes(b_fsname, follow, file_args, mtime, atime)
# Change perms on the file pointed to by the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
except RuntimeError as e:
# on Python3 "RecursionError" is raised which is derived from "RuntimeError"
# TODO once this function is moved into the common file utilities, this should probably raise more general exception
raise AnsibleModuleError(
results={'msg': "Could not recursively set attributes on %s. Original error was: '%s'" % (to_native(b_path), to_native(e))}
)
return changed
def initial_diff(path, state, prev_state):
diff = {'before': {'path': path},
'after': {'path': path},
}
if prev_state != state:
diff['before']['state'] = prev_state
diff['after']['state'] = state
if state == 'absent' and prev_state == 'directory':
walklist = {
'directories': [],
'files': [],
}
b_path = to_bytes(path, errors='surrogate_or_strict')
for base_path, sub_folders, files in os.walk(b_path):
for folder in sub_folders:
folderpath = os.path.join(base_path, folder)
walklist['directories'].append(folderpath)
for filename in files:
filepath = os.path.join(base_path, filename)
walklist['files'].append(filepath)
diff['before']['path_content'] = walklist
return diff
#
# States
#
def get_timestamp_for_time(formatted_time, time_format):
if formatted_time == 'preserve':
return None
elif formatted_time == 'now':
return Sentinel
else:
try:
struct = time.strptime(formatted_time, time_format)
struct_time = time.mktime(struct)
except (ValueError, OverflowError) as e:
raise AnsibleModuleError(results={'msg': 'Error while obtaining timestamp for time %s using format %s: %s'
% (formatted_time, time_format, to_native(e, nonstring='simplerepr'))})
return struct_time
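# For example (fmt being the module's default format '%Y%m%d%H%M.%S'):
#   get_timestamp_for_time('preserve', fmt)        -> None (keep the existing time)
#   get_timestamp_for_time('now', fmt)             -> Sentinel (use the current time)
#   get_timestamp_for_time('202101011200.00', fmt) -> seconds since the epoch, as a float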
def update_timestamp_for_file(path, mtime, atime, diff=None):
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
# When mtime and atime are set to 'now', rely on utime(path, None) which does not require ownership of the file
# https://github.com/ansible/ansible/issues/50943
if mtime is Sentinel and atime is Sentinel:
# It's not exact but we can't rely on os.stat(path).st_mtime after setting os.utime(path, None) as it may
# not be updated. Just use the current time for the diff values
mtime = atime = time.time()
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
set_time = None
else:
# If both parameters are None ('preserve'), there is nothing to do
if mtime is None and atime is None:
return False
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
if mtime is None:
mtime = previous_mtime
elif mtime is Sentinel:
mtime = time.time()
if atime is None:
atime = previous_atime
elif atime is Sentinel:
atime = time.time()
# If both timestamps are already ok, nothing to do
if mtime == previous_mtime and atime == previous_atime:
return False
set_time = (atime, mtime)
os.utime(b_path, set_time)
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
if 'after' not in diff:
diff['after'] = {}
if mtime != previous_mtime:
diff['before']['mtime'] = previous_mtime
diff['after']['mtime'] = mtime
if atime != previous_atime:
diff['before']['atime'] = previous_atime
diff['after']['atime'] = atime
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while updating modification or access time: %s'
% to_native(e, nonstring='simplerepr'), 'path': path})
return True
def keep_backward_compatibility_on_timestamps(parameter, state):
if state in ['file', 'hard', 'directory', 'link'] and parameter is None:
return 'preserve'
elif state == 'touch' and parameter is None:
return 'now'
else:
return parameter
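# Mapping for unset parameters: state in (file, hard, directory, link) -> 'preserve',
# state == touch -> 'now'; an explicitly set value always passes through unchanged.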
def execute_diff_peek(path):
"""Take a guess as to whether a file is a binary file"""
b_path = to_bytes(path, errors='surrogate_or_strict')
appears_binary = False
try:
with open(b_path, 'rb') as f:
head = f.read(8192)
except Exception:
# If we can't read the file, we're okay assuming it's text
pass
else:
if b"\x00" in head:
appears_binary = True
return appears_binary
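# Heuristic only: a NUL byte in the first 8 KiB marks the file as binary, so
# e.g. most ELF executables return True while plain UTF-8 text returns False.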
def ensure_absent(path):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
result = {}
if prev_state != 'absent':
diff = initial_diff(path, 'absent', prev_state)
if not module.check_mode:
if prev_state == 'directory':
try:
shutil.rmtree(b_path, ignore_errors=False)
except Exception as e:
raise AnsibleModuleError(results={'msg': "rmtree failed: %s" % to_native(e)})
else:
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise AnsibleModuleError(results={'msg': "unlinking failed: %s " % to_native(e),
'path': path})
result.update({'path': path, 'changed': True, 'diff': diff, 'state': 'absent'})
else:
result.update({'path': path, 'changed': False, 'state': 'absent'})
return result
def execute_touch(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
changed = False
result = {'dest': path}
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if not module.check_mode:
if prev_state == 'absent':
# Create an empty file if the filename did not already exist
try:
open(b_path, 'wb').close()
changed = True
except (OSError, IOError) as e:
raise AnsibleModuleError(results={'msg': 'Error, could not touch target: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
# Update the attributes on the file
diff = initial_diff(path, 'touch', prev_state)
file_args = module.load_file_common_arguments(module.params)
try:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except SystemExit as e:
if e.code: # this is the exit code passed to sys.exit, not a constant -- pylint: disable=using-constant-test
# We take this to mean that fail_json() was called from
# somewhere in basic.py
if prev_state == 'absent':
# If we just created the file we can safely remove it
os.remove(b_path)
raise
result['changed'] = changed
result['diff'] = diff
return result
def ensure_file_attributes(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if prev_state != 'file':
if follow and prev_state == 'link':
# follow symlink and operate on original
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
prev_state = get_state(b_path)
file_args['path'] = path
if prev_state not in ('file', 'hard'):
# file is not absent and any other state is a conflict
raise AnsibleModuleError(results={'msg': 'file (%s) is %s, cannot continue' % (path, prev_state),
'path': path, 'state': prev_state})
diff = initial_diff(path, 'file', prev_state)
changed = module.set_fs_attributes_if_different(file_args, False, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_directory(path, follow, recurse, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# For followed symlinks, we need to operate on the target of the link
if follow and prev_state == 'link':
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
file_args['path'] = path
prev_state = get_state(b_path)
changed = False
diff = initial_diff(path, 'directory', prev_state)
if prev_state == 'absent':
# Create directory and assign permissions to it
if module.check_mode:
return {'path': path, 'changed': True, 'diff': diff}
curpath = ''
try:
# Split the path so we can apply filesystem attributes recursively
# from the root (/) directory for absolute paths or the base path
# of a relative path. We can then walk the appropriate directory
# path to apply attributes.
# Something like mkdir -p with mode applied to all of the newly created directories
for dirname in path.strip('/').split('/'):
curpath = '/'.join([curpath, dirname])
# Remove leading slash if we're creating a relative path
if not os.path.isabs(path):
curpath = curpath.lstrip('/')
b_curpath = to_bytes(curpath, errors='surrogate_or_strict')
if not os.path.exists(b_curpath):
try:
os.mkdir(b_curpath)
changed = True
except OSError as ex:
# Possibly something else created the dir since the os.path.exists
# check above. As long as it's a dir, we don't need to error out.
if not (ex.errno == errno.EEXIST and os.path.isdir(b_curpath)):
raise
tmp_file_args = file_args.copy()
tmp_file_args['path'] = curpath
changed = module.set_fs_attributes_if_different(tmp_file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except Exception as e:
raise AnsibleModuleError(results={'msg': 'There was an issue creating %s as requested:'
' %s' % (curpath, to_native(e)),
'path': path})
return {'path': path, 'changed': changed, 'diff': diff}
elif prev_state != 'directory':
# We already know prev_state is not 'absent', therefore it exists in some form.
raise AnsibleModuleError(results={'msg': '%s already exists as a %s' % (path, prev_state),
'path': path})
#
# previous state == directory
#
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
if recurse:
changed |= recursive_set_attributes(b_path, follow, file_args, mtime, atime)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_symlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# src is either the actual source of a symlink or an informational pass-through of src from the template
# or copy module; even if this module never uses it, it is needed to key off some things
if src is None:
if follow:
# use the current target of the link as the source
src = to_native(os.readlink(b_path), errors='strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
if not os.path.islink(b_path) and os.path.isdir(b_path):
relpath = path
else:
b_relpath = os.path.dirname(b_path)
relpath = to_native(b_relpath, errors='strict')
absrc = os.path.join(relpath, src)
b_absrc = to_bytes(absrc, errors='surrogate_or_strict')
if not force and not os.path.exists(b_absrc):
raise AnsibleModuleError(results={'msg': 'src file does not exist, use "force=yes" if you'
' really want to create the link: %s' % absrc,
'path': path, 'src': src})
if prev_state == 'directory':
if not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
elif os.listdir(b_path):
# refuse to replace a directory that has files in it
raise AnsibleModuleError(results={'msg': 'the directory %s is not empty, refusing to'
' convert it' % path,
'path': path})
elif prev_state in ('file', 'hard') and not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
diff = initial_diff(path, 'link', prev_state)
changed = False
if prev_state in ('hard', 'file', 'directory', 'absent'):
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
os.rmdir(b_path)
os.symlink(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.symlink(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
# Now that we might have created the symlink, get the arguments.
# We need to do it now so we can properly follow the symlink if needed
# because load_file_common_arguments sets 'path' according
# the value of follow and the symlink existence.
file_args = module.load_file_common_arguments(module.params)
# Whenever we create a link to a nonexistent target we know that the nonexistent target
# cannot have any permissions set on it. Skip setting those and emit a warning (the user
# can set follow=False to remove the warning)
if follow and os.path.islink(b_path) and not os.path.exists(file_args['path']):
module.warn('Cannot set fs attributes on a non-existent symlink target. follow should be'
' set to False to avoid this.')
else:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def ensure_hardlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# src is the source of a hardlink. We require it if we are creating a new hardlink.
# We require path in the argument_spec so we know it is present at this point.
if src is None:
raise AnsibleModuleError(results={'msg': 'src is required for creating new hardlinks'})
if not os.path.exists(b_src):
raise AnsibleModuleError(results={'msg': 'src does not exist', 'dest': path, 'src': src})
diff = initial_diff(path, 'hard', prev_state)
changed = False
if prev_state == 'absent':
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
elif prev_state == 'hard':
if not os.stat(b_path).st_ino == os.stat(b_src).st_ino:
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, different hard link exists at destination',
'dest': path, 'src': src})
elif prev_state == 'file':
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, %s exists at destination' % prev_state,
'dest': path, 'src': src})
elif prev_state == 'directory':
changed = True
if os.path.exists(b_path):
if os.stat(b_path).st_ino == os.stat(b_src).st_ino:
return {'path': path, 'changed': False}
elif not force:
raise AnsibleModuleError(results={'msg': 'Cannot link: different hard link exists at destination',
'dest': path, 'src': src})
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
if os.path.exists(b_path):
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise
os.link(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.link(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def check_owner_exists(module, owner):
try:
uid = int(owner)
try:
getpwuid(uid).pw_name
except KeyError:
module.warn('failed to look up user with uid %s. Create user up to this point in real play' % uid)
except ValueError:
try:
getpwnam(owner).pw_uid
except KeyError:
module.warn('failed to look up user %s. Create user up to this point in real play' % owner)
def check_group_exists(module, group):
try:
gid = int(group)
try:
getgrgid(gid).gr_name
except KeyError:
module.warn('failed to look up group with gid %s. Create group up to this point in real play' % gid)
except ValueError:
try:
getgrnam(group).gr_gid
except KeyError:
module.warn('failed to look up group %s. Create group up to this point in real play' % group)
def main():
global module
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', choices=['absent', 'directory', 'file', 'hard', 'link', 'touch']),
path=dict(type='path', required=True, aliases=['dest', 'name']),
_original_basename=dict(type='str'), # Internal use only, for recursive ops
recurse=dict(type='bool', default=False),
force=dict(type='bool', default=False), # Note: Should not be in file_common_args in future
follow=dict(type='bool', default=True), # Note: Different default than file_common_args
_diff_peek=dict(type='bool'), # Internal use only, for internal checks in the action plugins
src=dict(type='path'), # Note: Should not be in file_common_args in future
modification_time=dict(type='str'),
modification_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
access_time=dict(type='str'),
access_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
),
add_file_common_args=True,
supports_check_mode=True,
)
# When we rewrite basic.py, we will do something similar to this on instantiating an AnsibleModule
sys.excepthook = _ansible_excepthook
additional_parameter_handling(module.params)
params = module.params
state = params['state']
recurse = params['recurse']
force = params['force']
follow = params['follow']
path = params['path']
src = params['src']
if module.check_mode and state != 'absent':
file_args = module.load_file_common_arguments(module.params)
if file_args['owner']:
check_owner_exists(module, file_args['owner'])
if file_args['group']:
check_group_exists(module, file_args['group'])
timestamps = {}
timestamps['modification_time'] = keep_backward_compatibility_on_timestamps(params['modification_time'], state)
timestamps['modification_time_format'] = params['modification_time_format']
timestamps['access_time'] = keep_backward_compatibility_on_timestamps(params['access_time'], state)
timestamps['access_time_format'] = params['access_time_format']
# short-circuit for diff_peek
if params['_diff_peek'] is not None:
appears_binary = execute_diff_peek(to_bytes(path, errors='surrogate_or_strict'))
module.exit_json(path=path, changed=False, appears_binary=appears_binary)
if state == 'file':
result = ensure_file_attributes(path, follow, timestamps)
elif state == 'directory':
result = ensure_directory(path, follow, recurse, timestamps)
elif state == 'link':
result = ensure_symlink(path, src, follow, force, timestamps)
elif state == 'hard':
result = ensure_hardlink(path, src, follow, force, timestamps)
elif state == 'touch':
result = execute_touch(path, follow, timestamps)
elif state == 'absent':
result = ensure_absent(path)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,946 |
#73767 breaks detection of Amazon Linux 2
|
### Summary
Ansible no longer returns distribution facts for my Amazon Linux 2 instances after #73767.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.11.0b2.post0] (detached HEAD 30c465c1a9) last updated 2021/03/17 18:53:46 (GMT -400)
config file = /home/ec2-user/ansible-aws-coll/ansible/ansible.cfg
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ec2-user/tmp/ansible/lib/ansible
ansible collection location = /home/ec2-user/ansible-aws-coll/ansible/collections
executable location = /home/ec2-user/tmp/ansible/bin/ansible
python version = 3.8.5 (default, Feb 18 2021, 01:24:20) [GCC 7.3.1 20180712 (Red Hat 7.3.1-12)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
DEVEL_WARNING(env: ANSIBLE_DEVEL_WARNING) = False
```
### OS / Environment
Amazon Linux 2
### Steps to Reproduce
```
ansible localhost -m setup | grep distribution
```
### Expected Results
With 78d3810fdf:
```
[WARNING]: No inventory was parsed, only implicit localhost is available
"ansible_distribution": "Amazon",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/system-release",
"ansible_distribution_file_variety": "Amazon",
"ansible_distribution_major_version": "2",
"ansible_distribution_release": "NA",
"ansible_distribution_version": "2",
```
### Actual Results
```console (paste below)
[WARNING]: No inventory was parsed, only implicit localhost is available
```
|
https://github.com/ansible/ansible/issues/73946
|
https://github.com/ansible/ansible/pull/73947
|
ad5ee1542a59042bc4a98115d953db9bdb91eceb
|
3811fddede11f5493e6de8136a6acdd1232d32f3
| 2021-03-17T23:04:03Z |
python
| 2021-03-18T19:42:04Z |
changelogs/fragments/73946_amazon_linux.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,946 |
#73767 breaks detection of Amazon Linux 2
|
### Summary
Ansible no longer returns distribution facts for my Amazon Linux 2 instances after #73767.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.11.0b2.post0] (detached HEAD 30c465c1a9) last updated 2021/03/17 18:53:46 (GMT -400)
config file = /home/ec2-user/ansible-aws-coll/ansible/ansible.cfg
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ec2-user/tmp/ansible/lib/ansible
ansible collection location = /home/ec2-user/ansible-aws-coll/ansible/collections
executable location = /home/ec2-user/tmp/ansible/bin/ansible
python version = 3.8.5 (default, Feb 18 2021, 01:24:20) [GCC 7.3.1 20180712 (Red Hat 7.3.1-12)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
DEVEL_WARNING(env: ANSIBLE_DEVEL_WARNING) = False
```
### OS / Environment
Amazon Linux 2
### Steps to Reproduce
```
ansible localhost -m setup | grep distribution
```
### Expected Results
With 78d3810fdf:
```
[WARNING]: No inventory was parsed, only implicit localhost is available
"ansible_distribution": "Amazon",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/system-release",
"ansible_distribution_file_variety": "Amazon",
"ansible_distribution_major_version": "2",
"ansible_distribution_release": "NA",
"ansible_distribution_version": "2",
```
### Actual Results
```console (paste below)
[WARNING]: No inventory was parsed, only implicit localhost is available
```
|
https://github.com/ansible/ansible/issues/73946
|
https://github.com/ansible/ansible/pull/73947
|
ad5ee1542a59042bc4a98115d953db9bdb91eceb
|
3811fddede11f5493e6de8136a6acdd1232d32f3
| 2021-03-17T23:04:03Z |
python
| 2021-03-18T19:42:04Z |
lib/ansible/module_utils/facts/system/distribution.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
from ansible.module_utils.common.sys_info import get_distribution, get_distribution_version, \
get_distribution_codename
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
def get_uname(module, flags=('-v')):
if isinstance(flags, str):
flags = flags.split()
command = ['uname']
command.extend(flags)
rc, out, err = module.run_command(command)
if rc == 0:
return out
return None
def _file_exists(path, allow_empty=False):
# not finding the file, exit early
if not os.path.exists(path):
return False
# if just the path needs to exist (i.e. it can be empty) we are done
if allow_empty:
return True
# file exists but is empty and we don't allow_empty
if os.path.getsize(path) == 0:
return False
# file exists with some content
return True
class DistributionFiles:
'''has-a various distro file parsers (os-release, etc) and logic for finding the right one.'''
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
# keep names in sync with Conditionals page of docs
OSDIST_LIST = (
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/centos-release', 'name': 'CentOS'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/os-release', 'name': 'Amazon'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'Archlinux'},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
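# Entries are probed in order by process_dist_files(); the first file that
# exists (and, unless allowempty is set, parses successfully) wins, so the
# relative order of entries sharing a path such as /etc/os-release matters.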
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT',
'SMGL': 'Source Mage GNU/Linux',
}
# We can't include this in SEARCH_STRING because a name match on its keys
# causes a fallback to using the first whitespace separated item from the file content
# as the name. For os-release, that is in the form 'NAME=Arch'
OS_RELEASE_ALIAS = {
'Archlinux': 'Arch Linux'
}
STRIP_QUOTES = r'\'\"\\'
def __init__(self, module):
self.module = module
def _get_file_content(self, path):
return get_file_content(path)
def _get_dist_file_content(self, path, allow_empty=False):
# can't find that dist file or it is incorrectly empty
if not _file_exists(path, allow_empty=allow_empty):
return False, None
data = self._get_file_content(path)
return True, data
def _parse_dist_file(self, name, dist_file_content, path, collected_facts):
dist_file_dict = {}
dist_file_content = dist_file_content.strip(DistributionFiles.STRIP_QUOTES)
if name in self.SEARCH_STRING:
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
if self.SEARCH_STRING[name] in dist_file_content:
# this sets distribution=RedHat if 'Red Hat' shows up in data
dist_file_dict['distribution'] = name
dist_file_dict['distribution_file_search_string'] = self.SEARCH_STRING[name]
else:
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
dist_file_dict['distribution'] = dist_file_content.split()[0]
return True, dist_file_dict
if name in self.OS_RELEASE_ALIAS:
if self.OS_RELEASE_ALIAS[name] in dist_file_content:
dist_file_dict['distribution'] = name
return True, dist_file_dict
return False, dist_file_dict
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
try:
# FIXME: most of these don't actually look at the dist file contents, but random other stuff
distfunc_name = 'parse_distribution_file_' + name
distfunc = getattr(self, distfunc_name)
parsed, dist_file_dict = distfunc(name, dist_file_content, path, collected_facts)
return parsed, dist_file_dict
except AttributeError as exc:
self.module.debug('exc: %s' % exc)
# this should never happen, but if it does fail quietly and not with a traceback
return False, dist_file_dict
return True, dist_file_dict
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# (parsed,
# self.facts['distribution'],
# self.facts['distribution_version'],
# self.facts['distribution_release'],
# )})
def _guess_distribution(self):
# try to find out which linux distribution this is
dist = (get_distribution(), get_distribution_version(), get_distribution_codename())
distribution_guess = {
'distribution': dist[0] or 'NA',
'distribution_version': dist[1] or 'NA',
# distribution_release can be the empty string
'distribution_release': 'NA' if dist[2] is None else dist[2]
}
distribution_guess['distribution_major_version'] = distribution_guess['distribution_version'].split('.')[0] or 'NA'
return distribution_guess
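# Illustrative return value (actual values are host-dependent), e.g. on an
# Ubuntu 20.04 host: {'distribution': 'Ubuntu', 'distribution_version': '20.04',
# 'distribution_release': 'focal', 'distribution_major_version': '20'}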
def process_dist_files(self):
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
dist_file_facts = {}
dist_guess = self._guess_distribution()
dist_file_facts.update(dist_guess)
for ddict in self.OSDIST_LIST:
name = ddict['name']
path = ddict['path']
allow_empty = ddict.get('allowempty', False)
has_dist_file, dist_file_content = self._get_dist_file_content(path, allow_empty=allow_empty)
# the file may be empty, but we allow_empty. For example, ArchLinux with an empty /etc/arch-release and a
# /etc/os-release with a different name
if has_dist_file and allow_empty:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
dist_file_facts['distribution_file_variety'] = name
break
if not has_dist_file:
# keep looking
continue
parsed_dist_file, parsed_dist_file_facts = self._parse_dist_file(name, dist_file_content, path, dist_file_facts)
# finally found the right os dist file and were able to parse it
if parsed_dist_file:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
dist_file_facts['distribution_file_variety'] = name
dist_file_facts['distribution_file_parsed'] = parsed_dist_file
dist_file_facts.update(parsed_dist_file_facts)
break
return dist_file_facts
# TODO: FIXME: split distro file parsing into its own module or class
def parse_distribution_file_Slackware(self, name, data, path, collected_facts):
slackware_facts = {}
if 'Slackware' not in data:
return False, slackware_facts # TODO: remove
slackware_facts['distribution'] = name
version = re.findall(r'\w+[.]\w+\+?', data)
if version:
slackware_facts['distribution_version'] = version[0]
return True, slackware_facts
def parse_distribution_file_Amazon(self, name, data, path, collected_facts):
amazon_facts = {}
if 'Amazon' not in data:
return False, amazon_facts
amazon_facts['distribution'] = 'Amazon'
if path == '/etc/os-release':
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
amazon_facts['distribution_version'] = version.group(1)
amazon_facts['distribution_major_version'] = version.group(1).split('.')[0]
amazon_facts['distribution_minor_version'] = version.group(1).split('.')[1]
else:
version = [n for n in data.split() if n.isdigit()]
version = version[0] if version else 'NA'
amazon_facts['distribution_version'] = version
return True, amazon_facts
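# Note the unguarded split above: on Amazon Linux 2, /etc/os-release has
# VERSION_ID="2" (no dot), so split('.')[1] raises IndexError and the
# distribution facts are lost entirely -- the failure described in #73946.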
def parse_distribution_file_OpenWrt(self, name, data, path, collected_facts):
openwrt_facts = {}
if 'OpenWrt' not in data:
return False, openwrt_facts # TODO: remove
openwrt_facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
openwrt_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
openwrt_facts['distribution_release'] = release.groups()[0]
return True, openwrt_facts
def parse_distribution_file_Alpine(self, name, data, path, collected_facts):
alpine_facts = {}
alpine_facts['distribution'] = 'Alpine'
alpine_facts['distribution_version'] = data
return True, alpine_facts
def parse_distribution_file_SUSE(self, name, data, path, collected_facts):
suse_facts = {}
if 'suse' not in data.lower():
return False, suse_facts # TODO: remove if tested without this
if path == '/etc/os-release':
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution:
suse_facts['distribution'] = distribution.group(1).strip('"')
# example pattern are 13.04 13.0 13
distribution_version = re.search(r'^VERSION_ID="?([0-9]+\.?[0-9]*)"?', line)
if distribution_version:
suse_facts['distribution_version'] = distribution_version.group(1)
suse_facts['distribution_major_version'] = distribution_version.group(1).split('.')[0]
if 'open' in data.lower():
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release:
suse_facts['distribution_release'] = release.groups()[0]
elif 'enterprise' in data.lower() and 'VERSION_ID' in line:
# SLES doesn't have funny release names
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release.group(1):
release = release.group(1)
else:
release = "0" # no minor number, so it is the first release
suse_facts['distribution_release'] = release
elif path == '/etc/SuSE-release':
if 'open' in data.lower():
data = data.splitlines()
distdata = get_file_content(path).splitlines()[0]
suse_facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
suse_facts['distribution_release'] = release.groups()[0].strip()
elif 'enterprise' in data.lower():
lines = data.splitlines()
distribution = lines[0].split()[0]
if "Server" in data:
suse_facts['distribution'] = "SLES"
elif "Desktop" in data:
suse_facts['distribution'] = "SLED"
for line in lines:
release = re.search('PATCHLEVEL = ([0-9]+)', line) # SLES doesn't have funny release names
if release:
suse_facts['distribution_release'] = release.group(1)
suse_facts['distribution_version'] = collected_facts['distribution_version'] + '.' + release.group(1)
# See https://www.suse.com/support/kb/doc/?id=000019341 for SLES for SAP
if os.path.islink('/etc/products.d/baseproduct') and os.path.realpath('/etc/products.d/baseproduct').endswith('SLES_SAP.prod'):
suse_facts['distribution'] = 'SLES_SAP'
return True, suse_facts
def parse_distribution_file_Debian(self, name, data, path, collected_facts):
debian_facts = {}
if 'Debian' in data or 'Raspbian' in data:
debian_facts['distribution'] = 'Debian'
release = re.search(r"PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
if collected_facts['distribution_release'] == 'NA' and 'Debian' in data:
dpkg_cmd = self.module.get_bin_path('dpkg')
if dpkg_cmd:
cmd = "%s --status tzdata|grep Provides|cut -f2 -d'-'" % dpkg_cmd
rc, out, err = self.module.run_command(cmd)
if rc == 0:
debian_facts['distribution_release'] = out.strip()
elif 'Ubuntu' in data:
debian_facts['distribution'] = 'Ubuntu'
# nothing else to do, Ubuntu gets correct info from python functions
elif 'SteamOS' in data:
debian_facts['distribution'] = 'SteamOS'
# nothing else to do, SteamOS gets correct info from python functions
elif path in ('/etc/lsb-release', '/etc/os-release') and ('Kali' in data or 'Parrot' in data):
if 'Kali' in data:
# Kali does not provide /etc/lsb-release anymore
debian_facts['distribution'] = 'Kali'
elif 'Parrot' in data:
debian_facts['distribution'] = 'Parrot'
release = re.search('DISTRIB_RELEASE=(.*)', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif 'Devuan' in data:
debian_facts['distribution'] = 'Devuan'
release = re.search(r"PRETTY_NAME=\"?[^(\"]+ \(?([^) \"]+)\)?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1)
elif 'Cumulus' in data:
debian_facts['distribution'] = 'Cumulus Linux'
version = re.search(r"VERSION_ID=(.*)", data)
if version:
major, _minor, _dummy_ver = version.group(1).split(".")
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = major
release = re.search(r'VERSION="(.*)"', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif "Mint" in data:
debian_facts['distribution'] = 'Linux Mint'
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
else:
return False, debian_facts
return True, debian_facts
def parse_distribution_file_Mandriva(self, name, data, path, collected_facts):
mandriva_facts = {}
if 'Mandriva' in data:
mandriva_facts['distribution'] = 'Mandriva'
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
mandriva_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
mandriva_facts['distribution_release'] = release.groups()[0]
mandriva_facts['distribution'] = name
else:
return False, mandriva_facts
return True, mandriva_facts
def parse_distribution_file_NA(self, name, data, path, collected_facts):
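# fallback parser: fills in values still reported as 'NA' from generic os-release style content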
na_facts = {}
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution and name == 'NA':
na_facts['distribution'] = distribution.group(1).strip('"')
version = re.search("^VERSION=(.*)", line)
if version and collected_facts['distribution_version'] == 'NA':
na_facts['distribution_version'] = version.group(1).strip('"')
return True, na_facts
def parse_distribution_file_Coreos(self, name, data, path, collected_facts):
coreos_facts = {}
# FIXME: pass in ro copy of facts for this kind of thing
distro = get_distribution()
if distro.lower() == 'coreos':
if not data:
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
return False, coreos_facts
release = re.search("^GROUP=(.*)", data)
if release:
coreos_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, coreos_facts # TODO: remove if tested without this
return True, coreos_facts
def parse_distribution_file_Flatcar(self, name, data, path, collected_facts):
flatcar_facts = {}
distro = get_distribution()
if distro.lower() == 'flatcar':
if not data:
return False, flatcar_facts
release = re.search("^GROUP=(.*)", data)
if release:
flatcar_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, flatcar_facts
return True, flatcar_facts
def parse_distribution_file_ClearLinux(self, name, data, path, collected_facts):
clear_facts = {}
if "clearlinux" not in name.lower():
return False, clear_facts
pname = re.search('NAME="(.*)"', data)
if pname:
if 'Clear Linux' not in pname.groups()[0]:
return False, clear_facts
clear_facts['distribution'] = pname.groups()[0]
version = re.search('VERSION_ID=(.*)', data)
if version:
clear_facts['distribution_major_version'] = version.groups()[0]
clear_facts['distribution_version'] = version.groups()[0]
release = re.search('ID=(.*)', data)
if release:
clear_facts['distribution_release'] = release.groups()[0]
return True, clear_facts
def parse_distribution_file_CentOS(self, name, data, path, collected_facts):
centos_facts = {}
if 'CentOS Stream' in data:
centos_facts['distribution_release'] = 'Stream'
return True, centos_facts
return False, centos_facts
class Distribution(object):
"""
This subclass of Facts fills the distribution, distribution_version and distribution_release variables
To do so it checks the existence and content of typical files in /etc containing distribution information
This is unit tested. Please extend the tests to cover all distributions if you have them available.
"""
# keep keys in sync with Conditionals page of docs
OS_FAMILY_MAP = {'RedHat': ['RedHat', 'Fedora', 'CentOS', 'Scientific', 'SLC',
'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS',
'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba',
'EulerOS', 'openEuler', 'AlmaLinux'],
'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon',
'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux',
'Pop!_OS', 'Parrot', 'Pardus GNU/Linux'],
'Suse': ['SuSE', 'SLES', 'SLED', 'openSUSE', 'openSUSE Tumbleweed',
'SLES_SAP', 'SUSE_LINUX', 'openSUSE Leap'],
'Archlinux': ['Archlinux', 'Antergos', 'Manjaro'],
'Mandrake': ['Mandrake', 'Mandriva'],
'Solaris': ['Solaris', 'Nexenta', 'OmniOS', 'OpenIndiana', 'SmartOS'],
'Slackware': ['Slackware'],
'Altlinux': ['Altlinux'],
'SGML': ['SGML'],
'Gentoo': ['Gentoo', 'Funtoo'],
'Alpine': ['Alpine'],
'AIX': ['AIX'],
'HP-UX': ['HPUX'],
'Darwin': ['MacOSX'],
'FreeBSD': ['FreeBSD', 'TrueOS'],
'ClearLinux': ['Clear Linux OS', 'Clear Linux Mix'],
'DragonFly': ['DragonflyBSD', 'DragonFlyBSD', 'Gentoo/DragonflyBSD', 'Gentoo/DragonFlyBSD'],
'NetBSD': ['NetBSD'], }
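# invert OS_FAMILY_MAP so each distribution name maps directly to its family,
# for example OS_FAMILY['Ubuntu'] == 'Debian'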
OS_FAMILY = {}
for family, names in OS_FAMILY_MAP.items():
for name in names:
OS_FAMILY[name] = family
def __init__(self, module):
self.module = module
def get_distribution_facts(self):
distribution_facts = {}
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
system = platform.system()
distribution_facts['distribution'] = system
distribution_facts['distribution_release'] = platform.release()
distribution_facts['distribution_version'] = platform.version()
systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'FreeBSD', 'OpenBSD', 'SunOS', 'DragonFly', 'NetBSD')
if system in systems_implemented:
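# strip dashes so the platform name maps onto a method name, for example 'HP-UX' -> get_distribution_HPUX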
cleanedname = system.replace('-', '')
distfunc = getattr(self, 'get_distribution_' + cleanedname)
dist_func_facts = distfunc()
distribution_facts.update(dist_func_facts)
elif system == 'Linux':
distribution_files = DistributionFiles(module=self.module)
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
dist_file_facts = distribution_files.process_dist_files()
distribution_facts.update(dist_file_facts)
distro = distribution_facts['distribution']
# look for an OS family alias for the 'distribution'; if there isn't one, use 'distribution'
distribution_facts['os_family'] = self.OS_FAMILY.get(distro, None) or distro
return distribution_facts
def get_distribution_AIX(self):
aix_facts = {}
rc, out, err = self.module.run_command("/usr/bin/oslevel")
data = out.split('.')
aix_facts['distribution_major_version'] = data[0]
if len(data) > 1:
aix_facts['distribution_version'] = '%s.%s' % (data[0], data[1])
aix_facts['distribution_release'] = data[1]
else:
aix_facts['distribution_version'] = data[0]
return aix_facts
def get_distribution_HPUX(self):
hpux_facts = {}
rc, out, err = self.module.run_command(r"/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True)
data = re.search(r'HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out)
if data:
hpux_facts['distribution_version'] = data.groups()[0]
hpux_facts['distribution_release'] = data.groups()[1]
return hpux_facts
def get_distribution_Darwin(self):
darwin_facts = {}
darwin_facts['distribution'] = 'MacOSX'
rc, out, err = self.module.run_command("/usr/bin/sw_vers -productVersion")
data = out.split()[-1]
if data:
darwin_facts['distribution_major_version'] = data.split('.')[0]
darwin_facts['distribution_version'] = data
return darwin_facts
def get_distribution_FreeBSD(self):
freebsd_facts = {}
freebsd_facts['distribution_release'] = platform.release()
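# release strings look like 12.1-RELEASE, 13.0-STABLE or 14.0-CURRENT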
data = re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT|RC|PRERELEASE).*', freebsd_facts['distribution_release'])
if 'trueos' in platform.version():
freebsd_facts['distribution'] = 'TrueOS'
if data:
freebsd_facts['distribution_major_version'] = data.group(1)
freebsd_facts['distribution_version'] = '%s.%s' % (data.group(1), data.group(2))
return freebsd_facts
def get_distribution_OpenBSD(self):
openbsd_facts = {}
openbsd_facts['distribution_version'] = platform.release()
rc, out, err = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out)
if match:
openbsd_facts['distribution_release'] = match.groups()[0]
else:
openbsd_facts['distribution_release'] = 'release'
return openbsd_facts
def get_distribution_DragonFly(self):
dragonfly_facts = {
'distribution_release': platform.release()
}
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.search(r'v(\d+)\.(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', out)
if match:
dragonfly_facts['distribution_major_version'] = match.group(1)
dragonfly_facts['distribution_version'] = '%s.%s.%s' % match.groups()[:3]
return dragonfly_facts
def get_distribution_NetBSD(self):
netbsd_facts = {}
platform_release = platform.release()
netbsd_facts['distribution_release'] = platform_release
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'NetBSD\s(\d+)\.(\d+)\s\((GENERIC)\).*', out)
if match:
netbsd_facts['distribution_major_version'] = match.group(1)
netbsd_facts['distribution_version'] = '%s.%s' % match.groups()[:2]
else:
netbsd_facts['distribution_major_version'] = platform_release.split('.')[0]
netbsd_facts['distribution_version'] = platform_release
return netbsd_facts
def get_distribution_SMGL(self):
smgl_facts = {}
smgl_facts['distribution'] = 'Source Mage GNU/Linux'
return smgl_facts
def get_distribution_SunOS(self):
sunos_facts = {}
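# the first line of /etc/release identifies the Solaris-family distribution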
data = get_file_content('/etc/release').splitlines()[0]
if 'Solaris' in data:
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
uname_r = get_uname(self.module, flags=['-r'])
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ', '')
ora_prefix = 'Oracle '
sunos_facts['distribution'] = data.split()[0]
sunos_facts['distribution_version'] = data.split()[1]
sunos_facts['distribution_release'] = ora_prefix + data
sunos_facts['distribution_major_version'] = uname_r.split('.')[1].rstrip()
return sunos_facts
uname_v = get_uname(self.module, flags=['-v'])
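# distributions other than Solaris proper (SmartOS, OpenIndiana, OmniOS, Nexenta) are handled below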
distribution_version = None
if 'SmartOS' in data:
sunos_facts['distribution'] = 'SmartOS'
if _file_exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').splitlines() if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
sunos_facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
sunos_facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_v is not None and 'NexentaOS_' in uname_v:
sunos_facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if sunos_facts.get('distribution', '') in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
sunos_facts['distribution_release'] = data.strip()
if distribution_version is not None:
sunos_facts['distribution_version'] = distribution_version
elif uname_v is not None:
sunos_facts['distribution_version'] = uname_v.splitlines()[0].strip()
return sunos_facts
return sunos_facts
class DistributionFactCollector(BaseFactCollector):
name = 'distribution'
_fact_ids = set(['distribution_version',
'distribution_release',
'distribution_major_version',
'os_family'])
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
if not module:
return facts_dict
distribution = Distribution(module=module)
distro_facts = distribution.get_distribution_facts()
return distro_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,946 |
#73767 breaks detection of Amazon Linux 2
|
### Summary
Ansible no longer returns distribution facts for my Amazon Linux 2 instances after #73767.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.11.0b2.post0] (detached HEAD 30c465c1a9) last updated 2021/03/17 18:53:46 (GMT -400)
config file = /home/ec2-user/ansible-aws-coll/ansible/ansible.cfg
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ec2-user/tmp/ansible/lib/ansible
ansible collection location = /home/ec2-user/ansible-aws-coll/ansible/collections
executable location = /home/ec2-user/tmp/ansible/bin/ansible
python version = 3.8.5 (default, Feb 18 2021, 01:24:20) [GCC 7.3.1 20180712 (Red Hat 7.3.1-12)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
DEVEL_WARNING(env: ANSIBLE_DEVEL_WARNING) = False
```
### OS / Environment
Amazon Linux 2
### Steps to Reproduce
```
ansible localhost -m setup | grep distribution
```
### Expected Results
With 78d3810fdf:
```
[WARNING]: No inventory was parsed, only implicit localhost is available
"ansible_distribution": "Amazon",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/system-release",
"ansible_distribution_file_variety": "Amazon",
"ansible_distribution_major_version": "2",
"ansible_distribution_release": "NA",
"ansible_distribution_version": "2",
```
### Actual Results
```console (paste below)
[WARNING]: No inventory was parsed, only implicit localhost is available
```
|
https://github.com/ansible/ansible/issues/73946
|
https://github.com/ansible/ansible/pull/73947
|
ad5ee1542a59042bc4a98115d953db9bdb91eceb
|
3811fddede11f5493e6de8136a6acdd1232d32f3
| 2021-03-17T23:04:03Z |
python
| 2021-03-18T19:42:04Z |
test/units/module_utils/facts/system/distribution/fixtures/amazon_linux_2.json
|
{
"platform.dist": [
"",
"",
""
],
"input": {
"/etc/system-release": "Amazon Linux release 2",
"/etc/os-release": ""
},
"name": "Amazon Linux 2",
"result": {
"distribution_release": "NA",
"distribution": "Amazon",
"distribution_major_version": "2",
"os_family": "RedHat",
"distribution_version": "2"
},
"distro": {
"id": "amzn",
"version": "2",
"codename": "",
"os_release_info": {
"name": "Amazon Linux AMI",
"ansi_color": "0;33",
"id_like": "rhel fedora",
"version_id": "2",
"pretty_name": "Amazon Linux release 2",
"version": "2",
"home_url": "",
"id": "amzn"
}
}
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,946 |
#73767 breaks detection of Amazon Linux 2
|
### Summary
Ansible no longer returns distribution facts for my Amazon Linux 2 instances after #73767.
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.11.0b2.post0] (detached HEAD 30c465c1a9) last updated 2021/03/17 18:53:46 (GMT -400)
config file = /home/ec2-user/ansible-aws-coll/ansible/ansible.cfg
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ec2-user/tmp/ansible/lib/ansible
ansible collection location = /home/ec2-user/ansible-aws-coll/ansible/collections
executable location = /home/ec2-user/tmp/ansible/bin/ansible
python version = 3.8.5 (default, Feb 18 2021, 01:24:20) [GCC 7.3.1 20180712 (Red Hat 7.3.1-12)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
DEVEL_WARNING(env: ANSIBLE_DEVEL_WARNING) = False
```
### OS / Environment
Amazon Linux 2
### Steps to Reproduce
```
ansible localhost -m setup | grep distribution
```
### Expected Results
With 78d3810fdf:
```
[WARNING]: No inventory was parsed, only implicit localhost is available
"ansible_distribution": "Amazon",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/system-release",
"ansible_distribution_file_variety": "Amazon",
"ansible_distribution_major_version": "2",
"ansible_distribution_release": "NA",
"ansible_distribution_version": "2",
```
### Actual Results
```console (paste below)
[WARNING]: No inventory was parsed, only implicit localhost is available
```
|
https://github.com/ansible/ansible/issues/73946
|
https://github.com/ansible/ansible/pull/73947
|
ad5ee1542a59042bc4a98115d953db9bdb91eceb
|
3811fddede11f5493e6de8136a6acdd1232d32f3
| 2021-03-17T23:04:03Z |
python
| 2021-03-18T19:42:04Z |
test/units/module_utils/facts/system/distribution/fixtures/amazon_linux_release_2.json
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,459 |
Document ansible.legacy, and the restrictions of content accessible by default to roles in collections
|
##### SUMMARY
We have encountered a problem with custom modules.
Our case is the following:
- we have custom modules, they are, essentially, go binaries
- we are distributing them as packages that install the binaries to /usr/share/ansible/plugins/modules
We are embracing ansible 2.10 and we have moved all roles that use these custom modules to collections. As a result, our roles can no longer find the modules. We have to move the binaries to the corresponding collection path manually.
As the path to the installed collection depends on ansible.cfg, we cannot modify our packages to supply another installation path.
Is there a sane way of making roles that belong to collections recognize paths to custom modules?
We believe this case has to be reflected in the docs.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ansible, collection, modules
##### ANSIBLE VERSION
```paste below
ansible 2.10.5
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/73459
|
https://github.com/ansible/ansible/pull/73942
|
5dbcaa4c01bd297d882157e631811c318ca92275
|
c66cff444c88b1f7b8d3f01cb6fec07fffe3d52e
| 2021-02-03T12:17:48Z |
python
| 2021-03-25T20:36:46Z |
docs/docsite/rst/dev_guide/migrating_roles.rst
|
.. _migrating_roles:
*************************************************
Migrating Roles to Roles in Collections on Galaxy
*************************************************
You can migrate any existing standalone role into a collection and host the collection on Galaxy. With Ansible collections, you can distribute many roles in a single cohesive unit of re-usable automation. Inside a collection, you can share custom plugins across all roles in the collection instead of duplicating them in each role's :file:`library/` directory.
You must migrate roles to collections if you want to distribute them as certified Ansible content.
.. note::
If you want to import your collection to Galaxy, you need a `Galaxy namespace <https://galaxy.ansible.com/docs/contributing/namespaces.html>`_.
See :ref:`developing_collections` for details on collections.
.. contents::
:local:
:depth: 1
Comparing standalone roles to collection roles
===============================================
:ref:`Standalone roles <playbooks_reuse_roles>` have the following directory structure:
.. code-block:: bash
:emphasize-lines: 5,7,8
role/
βββ defaults
βββ files
βββ handlers
βββ library
βββ meta
βββ module_utils
βββ [*_plugins]
βββ tasks
βββ templates
βββ tests
βββ vars
The highlighted directories above will change when you migrate to a collection-based role. The collection directory structure includes a :file:`roles/` directory:
.. code-block:: bash
mynamespace/
βββ mycollection/
βββ docs/
βββ galaxy.yml
βββ plugins/
β βββ modules/
β β βββ module1.py
β βββ inventory/
β βββ .../
βββ README.md
βββ roles/
β βββ role1/
β βββ role2/
β βββ .../
βββ playbooks/
β βββ files/
β βββ vars/
β βββ templates/
β βββ tasks/
βββ tests/
You will need to use the Fully Qualified Collection Name (FQCN) to use the roles and plugins when you migrate your role into a collection. The FQCN is the combination of the collection ``namespace``, collection ``name``, and the content item you are referring to.
So for example, in the above collection, the FQCN to access ``role1`` would be:
.. code-block:: Python
mynamespace.mycollection.role1
A collection can contain one or more roles in the :file:`roles/` directory and these are almost identical to standalone roles, except you need to move plugins out of the individual roles, and use the :abbr:`FQCN (Fully Qualified Collection Name)` in some places, as detailed in the next section.
.. note::
In standalone roles, some of the plugin directories referenced their plugin types in the plural sense; this is not the case in collections.
.. _simple_roles_in_collections:
Migrating a role to a collection
=================================
To migrate from a standalone role that contains no plugins to a collection role:
1. Create a local :file:`ansible_collections` directory and ``cd`` to this new directory.
2. Create a collection. If you want to import this collection to Ansible Galaxy, you need a `Galaxy namespace <https://galaxy.ansible.com/docs/contributing/namespaces.html>`_.
.. code-block:: bash
$ ansible-galaxy collection init mynamespace.mycollection
This creates the collection directory structure.
3. Copy the standalone role directory into the :file:`roles/` subdirectory of the collection. Roles in collections cannot have hyphens in the role name. Rename any such roles to use underscores instead.
.. code-block:: bash
$ mkdir mynamespace/mycollection/roles/my_role/
$ cp -r /path/to/standalone/role/mynamespace/my_role/\* mynamespace/mycollection/roles/my_role/
4. Update ``galaxy.yml`` to include any role dependencies (see the example after this list).
5. Update the collection README.md file to add links to any role README.md files.
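As a reference for step 4, collection-level dependencies that your role needs go under the ``dependencies`` key of ``galaxy.yml``, which maps collection names to version specifiers. The collection name and version range below are placeholders, not part of the original example:
.. code-block:: yaml

   dependencies:
     mynamespace.myothercollection: '>=1.0.0'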
.. _complex_roles_in_collections:
Migrating a role with plugins to a collection
==============================================
To migrate from a standalone role that has plugins to a collection role:
1. Create a local :file:`ansible_collections` directory and ``cd`` to this new directory.
2. Create a collection. If you want to import this collection to Ansible Galaxy, you need a `Galaxy namespace <https://galaxy.ansible.com/docs/contributing/namespaces.html>`_.
.. code-block:: bash
$ ansible-galaxy collection init mynamespace.mycollection
This creates the collection directory structure.
3. Copy the standalone role directory into the :file:`roles/` subdirectory of the collection. Roles in collections cannot have hyphens in the role name. Rename any such roles to use underscores instead.
.. code-block:: bash
$ mkdir mynamespace/mycollection/roles/my_role/
$ cp -r /path/to/standalone/role/mynamespace/my_role/\* mynamespace/mycollection/roles/my_role/
4. Move any modules to the :file:`plugins/modules/` directory.
.. code-block:: bash
$ mv mynamespace/mycollection/roles/my_role/library/\* mynamespace/mycollection/plugins/modules/
5. Move any other plugins to the appropriate :file:`plugins/PLUGINTYPE/` directory. See :ref:`migrating_plugins_collection` for additional steps that may be required.
6. Update ``galaxy.yml`` to include any role dependencies.
7. Update the collection README.md file to add links to any role README.md files.
8. Change any references to the role to use the :abbr:`FQCN (Fully Qualified Collection Name)`.
.. code-block:: yaml
---
- name: example role by FQCN
hosts: some_host_pattern
tasks:
- name: import FQCN role from a collection
import_role:
name: mynamespace.mycollection.my_role
You can alternatively use the ``collections`` keyword to simplify this:
.. code-block:: yaml
---
- name: example role by FQCN
hosts: some_host_pattern
collections:
- mynamespace.mycollection
tasks:
- name: import role from a collection
import_role:
name: my_role
.. _migrating_plugins_collection:
Migrating other role plugins to a collection
---------------------------------------------
To migrate other role plugins to a collection:
1. Move each nonmodule plugin to the appropriate :file:`plugins/PLUGINTYPE/` directory. The :file:`mynamespace/mycollection/plugins/README.md` file explains the types of plugins that the collection can contain within optionally created subdirectories.
.. code-block:: bash
$ mv mynamespace/mycollection/roles/my_role/filter_plugins/\* mynamespace/mycollection/plugins/filter/
2. Update documentation to use the FQCN. Plugins that use ``doc_fragments`` need to use FQCN (for example, ``mydocfrag`` becomes ``mynamespace.mycollection.mydocfrag``).
3. Update relative imports; in collections, relative imports must start with a period. For example, :file:`./filename` and :file:`../asdfu/filestuff` work, but :file:`filename` in the same directory must be updated to :file:`./filename`.
If you have a custom ``module_utils`` or import from ``__init__.py``, you must also:
#. Change the Python namespace for custom ``module_utils`` to use the :abbr:`FQCN (Fully Qualified Collection Name)` along with the ``ansible_collections`` convention. See :ref:`update_module_utils_role`.
#. Change how you import from ``__init__.py``. See :ref:`update_init_role`.
.. _update_module_utils_role:
Updating ``module_utils``
^^^^^^^^^^^^^^^^^^^^^^^^^
If any of your custom modules use a custom module utility, once you migrate to a collection you cannot address the module utility in the top level ``ansible.module_utils`` Python namespace. Ansible does not merge content from collections into the Ansible internal Python namespace. Update any Python import statements that refer to custom module utilities when you migrate your custom content to collections. See :ref:`module_utils in collections <collection_module_utils>` for more details.
When coding with ``module_utils`` in a collection, the Python import statement needs to take into account the :abbr:`FQCN (Fully Qualified Collection Name)` along with the ``ansible_collections`` convention. The resulting Python import looks similar to the following example:
.. code-block:: text
from ansible_collections.{namespace}.{collectionname}.plugins.module_utils.{util} import {something}
.. note::
You need to follow the same rules in changing paths and using namespaced names for subclassed plugins.
The following example code snippets show a Python and a PowerShell module using both default Ansible ``module_utils`` and those provided by a collection. In this example the namespace is ``ansible_example`` and the collection is ``community``.
In the Python example the ``module_utils`` is ``helper`` and the :abbr:`FQCN (Fully Qualified Collection Name)` is ``ansible_example.community.plugins.module_utils.helper``:
.. code-block:: text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible_collections.ansible_example.community.plugins.module_utils.helper import HelperRequest
argspec = dict(
name=dict(required=True, type='str'),
state=dict(choices=['present', 'absent'], required=True),
)
module = AnsibleModule(
argument_spec=argspec,
supports_check_mode=True
)
_request = HelperRequest(
module,
headers={"Content-Type": "application/json"},
data=data
)
In the PowerShell example the ``module_utils`` is ``hyperv`` and the :abbr:`FQCN (Fully Qualified Collection Name)` is ``ansible_example.community.plugins.module_utils.hyperv``:
.. code-block:: powershell
#!powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
#AnsibleRequires -PowerShell ansible_collections.ansible_example.community.plugins.module_utils.hyperv
$spec = @{
name = @{ required = $true; type = "str" }
state = @{ required = $true; choices = @("present", "absent") }
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
Invoke-HyperVFunction -Name $module.Params.name
$module.ExitJson()
.. _update_init_role:
Importing from __init__.py
^^^^^^^^^^^^^^^^^^^^^^^^^^
Because of the way that the CPython interpreter does imports, combined with the way the Ansible plugin loader works, if your custom embedded module or plugin requires importing something from an :file:`__init__.py` file, that file also becomes part of your collection. You can either keep the content where it originated inside a standalone role or reference the file by name in the Python import statement. The following example is an :file:`__init__.py` file that is part of a callback plugin found inside a collection named ``ansible_example.community``.
.. code-block:: python
from ansible_collections.ansible_example.community.plugins.callback.__init__ import CustomBaseClass
Example: Migrating a standalone role with plugins to a collection
-----------------------------------------------------------------
In this example we have a standalone role called ``my-standalone-role.webapp`` to emulate a standalone role that contains dashes in the name (which is not valid in collections). This standalone role contains a custom module in the ``library/`` directory called ``manage_webserver``.
.. code-block:: bash
my-standalone-role.webapp
βββ defaults
βββ files
βββ handlers
βββ library
βββ meta
βββ tasks
βββ templates
βββ tests
βββ vars
1. Create a new collection, for example, ``acme.webserver``:
.. code-block:: bash
$ ansible-galaxy collection init acme.webserver
- Collection acme.webserver was created successfully
$ tree -d acme
acme
βββ webserver
βββ docs
βββ plugins
βββ roles
2. Create the ``webapp`` role inside the collection and copy all contents from the standalone role:
.. code-block:: bash
$ mkdir acme/webserver/roles/webapp
$ cp -r my-standalone-role.webapp/* acme/webserver/roles/webapp/
3. Move the ``manage_webserver`` module to its new home in ``acme/webserver/plugins/modules/``:
.. code-block:: bash
$ cp my-standalone-role.webapp/library/manage_webserver.py acme/webserver/plugins/modules/manage.py
.. note::
This example changed the original source file ``manage_webserver.py`` to the destination file ``manage.py``. This is optional but the :abbr:`FQCN (Fully Qualified Collection Name)` provides the ``webserver`` context as ``acme.webserver.manage``.
4. Change ``manage_webserver`` to ``acme.webserver.manage`` in :file:`tasks/` files in the role (for example, ``my-standalone-role.webapp/tasks/main.yml``) and any other use of the original module name; see the task sketch after the note below.
.. note::
This name change is only required if you changed the original module name, but it illustrates that content referenced by :abbr:`FQCN (Fully Qualified Collection Name)` can offer context and in turn can make module and plugin names shorter. If you anticipate using these modules independently of the role, keep the original naming conventions. Users can add the :ref:`collections keyword <collections_using_playbook>` in their playbooks. Typically roles are an abstraction layer and users won't use components of the role independently.
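As a sketch of step 4, a task in ``acme/webserver/roles/webapp/tasks/main.yml`` would change along these lines. The module parameters shown here are placeholders, not part of the original example:
.. code-block:: yaml

   - name: Deploy the web server
     acme.webserver.manage:
       name: apache2
       state: present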
Example: Supporting standalone roles and migrated collection roles in a downstream RPM
---------------------------------------------------------------------------------------
A standalone role can co-exist with its collection role counterpart (for example, as part of a support lifecycle of a product). This should only be done for a transition period, but the two can coexist downstream in packages such as RPMs. For example, the RHEL system roles could coexist with an `example of a RHEL system roles collection <https://github.com/maxamillion/collection-rhel-system-roles>`_ and provide existing backwards compatibility with the downstream RPM.
This section walks through an example creating this coexistence in a downstream RPM and requires Ansible 2.9.0 or later.
To deliver a role as both a standalone role and a collection role:
#. Place the collection in :file:`/usr/share/ansible/collections/ansible_collections/`.
#. Copy the contents of the role inside the collection into a directory named after the standalone role and place the standalone role in :file:`/usr/share/ansible/roles/`.
All previously bundled modules and plugins used in the standalone role are now referenced by :abbr:`FQCN (Fully Qualified Collection Name)` so even though they are no longer embedded, they can be found from the collection contents. This is an example of how the content inside the collection is a unique entity and does not have to be bound to a role or otherwise. You could alternatively create two separate collections: one for the modules and plugins and another for the standalone role to migrate to. The role must use the modules and plugins as :abbr:`FQCN (Fully Qualified Collection Name)`.
The following is an example RPM spec file that accomplishes this using this example content:
.. code-block:: text
Name: acme-ansible-content
Summary: Ansible Collection for deploying and configuring ACME webapp
Version: 1.0.0
Release: 1%{?dist}
License: GPLv3+
Source0: acme-webserver-1.0.0.tar.gz
Url: https://github.com/acme/webserver-ansible-collection
BuildArch: noarch
%global roleprefix my-standalone-role.
%global collection_namespace acme
%global collection_name webserver
%global collection_dir %{_datadir}/ansible/collections/ansible_collections/%{collection_namespace}/%{collection_name}
%description
Ansible Collection and standalone role (for backward compatibility and migration) to deploy, configure, and manage the ACME webapp software.
%prep
%setup -qc
%build
%install
mkdir -p %{buildroot}/%{collection_dir}
cp -r ./* %{buildroot}/%{collection_dir}/
mkdir -p %{buildroot}/%{_datadir}/ansible/roles
for role in %{buildroot}/%{collection_dir}/roles/*
do
cp -pR ${role} %{buildroot}/%{_datadir}/ansible/roles/%{roleprefix}$(basename ${role})
mkdir -p %{buildroot}/%{_pkgdocdir}/$(basename ${role})
for docfile in README.md COPYING LICENSE
do
if [ -f ${role}/${docfile} ]
then
cp -p ${role}/${docfile} %{buildroot}/%{_pkgdocdir}/$(basename ${role})/${docfile}
fi
done
done
%files
%dir %{_datadir}/ansible
%dir %{_datadir}/ansible/roles
%dir %{_datadir}/ansible/collections
%dir %{_datadir}/ansible/collections/ansible_collections
%{_datadir}/ansible/roles/
%doc %{_pkgdocdir}/*/README.md
%doc %{_datadir}/ansible/roles/%{roleprefix}*/README.md
%{collection_dir}
%doc %{collection_dir}/roles/*/README.md
%license %{_pkgdocdir}/*/COPYING
%license %{_pkgdocdir}/*/LICENSE
|