Dataset schema (per-column dtype and value statistics):

column            dtype                    values / lengths
----------------  -----------------------  ------------------
status            stringclasses            1 value
repo_name         stringclasses            31 values
repo_url          stringclasses            31 values
issue_id          int64                    1 to 104k
title             stringlengths            4 to 369
body              stringlengths            0 to 254k
issue_url         stringlengths            37 to 56
pull_url          stringlengths            37 to 54
before_fix_sha    stringlengths            40 to 40
after_fix_sha     stringlengths            40 to 40
report_datetime   timestamp[us, tz=UTC]
language          stringclasses            5 values
commit_datetime   timestamp[us, tz=UTC]
updated_file      stringlengths            4 to 188
file_content      stringlengths            0 to 5.12M
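The column stats and the rows below suggest each fixed issue is stored denormalized: one row per updated file, with the issue fields repeated. A minimal sketch of reading the data that way, assuming the table is available as a Parquet file (the file name below is a placeholder, not the dataset's real shard name):

```python
# Minimal sketch: load the dataset and pull every row for one issue.
# "bug_fixes.parquet" is a hypothetical path; substitute the real shard.
import pandas as pd

df = pd.read_parquet("bug_fixes.parquet")

# One GitHub issue can span several rows, one per file the fix touched.
rows = df[df["issue_id"] == 74279]
print(rows[["title", "updated_file"]])
print(rows["pull_url"].iloc[0])
```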
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 74279
title: filter module calls import crypt which fails on FIPS systems
body:

### Summary

When I use a core filter like `to_yaml`, `plugins/filter/core.py` runs `import crypt`, which tries to register MD5 methods at import time. On a FIPS-enabled system this fails with an "operation not permitted" error.

### Issue Type

Bug Report

### Component Name

ansible

### Ansible Version

```console
ansible --version
ansible 2.10.8
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.9.4 (default, Apr 14 2021, 12:55:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```

### Configuration

```console
$ ansible-config dump --only-changed
# there are no changes from the default
```

### OS / Environment

RHEL 7 host running image centos:7 on containerd

```shell
% openssl version
OpenSSL 1.0.2t-fips 10 Sep 2019
```

```shell
% pip show cryptography
Name: cryptography
Version: 3.0
Summary: cryptography is a package which provides cryptographic recipes and primitives to Python developers.
Home-page: https://github.com/pyca/cryptography
Author: The cryptography developers
Author-email: [email protected]
License: BSD or Apache License, Version 2.0
Location: /usr/local/lib/python3.9/site-packages
Requires: cffi, six
Required-by: ansible-base
```

### Steps to Reproduce

```yaml
# try_fips.yml
- name: test fips
  gather_facts: no
  hosts: localhost
  connection: local
  vars:
    a:
      b: 1
      c: 2
  tasks:
    - debug:
        msg: "{{ a | to_yaml }}"
```

```shell
ansible-playbook -vvv try_fips.yml
ansible-playbook 2.10.8
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.4 (default, Apr 14 2021, 12:55:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
No config file found; using defaults
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: prove_fips.yml ******************************************************
1 plays in /support/scripts/prove_fips.yml

PLAY [test fips] ***************************************************************
META: ran handlers

TASK [debug] *******************************************************************
task path: /support/scripts/prove_fips.yml:12
[WARNING]: Skipping plugin (/usr/local/lib/python3.9/site-packages/ansible/plugins/filter/core.py) as it seems to be invalid: [Errno 1] Operation not permitted
[WARNING]: an unexpected error occurred during Jinja2 environment setup: [Errno 1] Operation not permitted line 0
exception during Jinja2 environment setup: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 504, in __getitem__
    plugin_impl = self._pluginloader.get(module_name)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 993, in get
    return super(Jinja2Loader, self).get(name, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 792, in get
    return self.get_with_context(name, *args, **kwargs).object
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 812, in get_with_context
    self._module_cache[path] = self._load_module_source(name, path)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 776, in _load_module_source
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/filter/core.py", line 23, in <module>
    import crypt
  File "/usr/local/lib/python3.9/crypt.py", line 117, in <module>
    _add_method('MD5', '1', 8, 34)
  File "/usr/local/lib/python3.9/crypt.py", line 94, in _add_method
    result = crypt('', salt)
  File "/usr/local/lib/python3.9/crypt.py", line 82, in crypt
    return _crypt.crypt(word, salt)
PermissionError: [Errno 1] Operation not permitted

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 506, in __getitem__
    raise TemplateSyntaxError(to_native(e), 0)
jinja2.exceptions.TemplateSyntaxError: [Errno 1] Operation not permitted line 0

[the identical "exception during Jinja2 environment setup" traceback is printed a second time]

fatal: [localhost]: FAILED! => {
    "msg": "template error while templating string: [Errno 1] Operation not permitted\n line 0. String: {{ a | to_yaml }}"
}

PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```

### Expected Results

I expected ansible to dump the simple data structure to YAML.
This works if I comment out the `_add_method('MD5'...)` and `_add_method('CRYPT'...)` lines in `/usr/local/lib/python3.9/crypt.py`:

```
TASK [debug] *******************************************************************
task path: /support/scripts/prove_fips.yml:12
ok: [localhost] => {
    "msg": "{b: 1, c: 2}\n"
}
META: ran handlers
META: ran handlers

PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

### Actual Results

The output of `ansible-playbook -vvv /support/scripts/prove_fips.yml` is identical to the run quoted under Steps to Reproduce above, ending in:
```console
fatal: [localhost]: FAILED! => {
    "msg": "template error while templating string: [Errno 1] Operation not permitted\n line 0. String: {{ a | to_yaml }}"
}

PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```

### Code of Conduct

I agree to follow the Ansible Code of Conduct
issue_url: https://github.com/ansible/ansible/issues/74279
pull_url: https://github.com/ansible/ansible/pull/74304
before_fix_sha: d44eb03f49c89cc8a3e398d37ea35db572b354e7
after_fix_sha: 4494ef3a9d0b0816e228a2b0cf8ebce9b732253a
report_datetime: 2021-04-14T17:59:39Z
language: python
commit_datetime: 2021-04-20T15:47:34Z
updated_file: changelogs/fragments/crypt_missing.yml
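This row points at the changelog fragment for the fix; the fragment's content is not included in this extract. Judging by the fragment name and the row below, PR #74304 makes the `crypt` import optional so plugins can still load on FIPS hosts. A minimal sketch of that guarded-import pattern — the names `HAS_CRYPT` and `CRYPT_E` are illustrative, not necessarily the PR's exact code:

```python
# Guarded optional import: keep the module importable even when
# "import crypt" blows up (ImportError, or PermissionError/OSError on
# FIPS-enabled hosts), and fail only when crypt is actually used.
try:
    import crypt
    HAS_CRYPT = True
    CRYPT_E = None
except (ImportError, OSError) as e:  # PermissionError is an OSError subclass
    crypt = None
    HAS_CRYPT = False
    CRYPT_E = e


def crypt_hash(secret, saltstring):
    """Raise at call time, not import time, when crypt is unavailable."""
    if not HAS_CRYPT:
        raise RuntimeError("crypt is unavailable on this system") from CRYPT_E
    return crypt.crypt(secret, saltstring)
```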
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 74279
title: filter module calls import crypt which fails on FIPS systems
body: identical to the issue body quoted in the first row above (same issue, #74279)
issue_url: https://github.com/ansible/ansible/issues/74279
pull_url: https://github.com/ansible/ansible/pull/74304
before_fix_sha: d44eb03f49c89cc8a3e398d37ea35db572b354e7
after_fix_sha: 4494ef3a9d0b0816e228a2b0cf8ebce9b732253a
report_datetime: 2021-04-14T17:59:39Z
language: python
commit_datetime: 2021-04-20T15:47:34Z
updated_file: lib/ansible/utils/encrypt.py
file_content:

```python
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import crypt
import multiprocessing
import random
import re
import string
import sys

from collections import namedtuple

from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils.six import text_type
from ansible.module_utils._text import to_text, to_bytes
from ansible.utils.display import Display

PASSLIB_AVAILABLE = False
try:
    import passlib
    import passlib.hash
    from passlib.utils.handlers import HasRawSalt
    try:
        from passlib.utils.binary import bcrypt64
    except ImportError:
        from passlib.utils import bcrypt64

    PASSLIB_AVAILABLE = True
except Exception:
    pass

display = Display()

__all__ = ['do_encrypt']

_LOCK = multiprocessing.Lock()

DEFAULT_PASSWORD_LENGTH = 20


def random_password(length=DEFAULT_PASSWORD_LENGTH, chars=C.DEFAULT_PASSWORD_CHARS):
    '''Return a random password string of length containing only chars

    :kwarg length: The number of characters in the new password.  Defaults to 20.
    :kwarg chars: The characters to choose from.  The default is all ascii
        letters, ascii digits, and these symbols ``.,:-_``
    '''
    if not isinstance(chars, text_type):
        raise AnsibleAssertionError('%s (%s) is not a text_type' % (chars, type(chars)))

    random_generator = random.SystemRandom()
    return u''.join(random_generator.choice(chars) for dummy in range(length))


def random_salt(length=8):
    """Return a text string suitable for use as a salt for the hash functions we use to encrypt passwords.
    """
    # Note passlib salt values must be pure ascii so we can't let the user
    # configure this
    salt_chars = string.ascii_letters + string.digits + u'./'
    return random_password(length=length, chars=salt_chars)


class BaseHash(object):
    algo = namedtuple('algo', ['crypt_id', 'salt_size', 'implicit_rounds', 'salt_exact'])
    algorithms = {
        'md5_crypt': algo(crypt_id='1', salt_size=8, implicit_rounds=None, salt_exact=False),
        'bcrypt': algo(crypt_id='2a', salt_size=22, implicit_rounds=None, salt_exact=True),
        'sha256_crypt': algo(crypt_id='5', salt_size=16, implicit_rounds=5000, salt_exact=False),
        'sha512_crypt': algo(crypt_id='6', salt_size=16, implicit_rounds=5000, salt_exact=False),
    }

    def __init__(self, algorithm):
        self.algorithm = algorithm


class CryptHash(BaseHash):
    def __init__(self, algorithm):
        super(CryptHash, self).__init__(algorithm)

        if sys.platform.startswith('darwin'):
            raise AnsibleError("crypt.crypt not supported on Mac OS X/Darwin, install passlib python module")

        if algorithm not in self.algorithms:
            raise AnsibleError("crypt.crypt does not support '%s' algorithm" % self.algorithm)
        self.algo_data = self.algorithms[algorithm]

    def hash(self, secret, salt=None, salt_size=None, rounds=None):
        salt = self._salt(salt, salt_size)
        rounds = self._rounds(rounds)
        return self._hash(secret, salt, rounds)

    def _salt(self, salt, salt_size):
        salt_size = salt_size or self.algo_data.salt_size
        ret = salt or random_salt(salt_size)
        if re.search(r'[^./0-9A-Za-z]', ret):
            raise AnsibleError("invalid characters in salt")
        if self.algo_data.salt_exact and len(ret) != self.algo_data.salt_size:
            raise AnsibleError("invalid salt size")
        elif not self.algo_data.salt_exact and len(ret) > self.algo_data.salt_size:
            raise AnsibleError("invalid salt size")
        return ret

    def _rounds(self, rounds):
        if rounds == self.algo_data.implicit_rounds:
            # Passlib does not include the rounds if it is the same as implicit_rounds.
            # Make crypt lib behave the same, by not explicitly specifying the rounds in that case.
            return None
        else:
            return rounds

    def _hash(self, secret, salt, rounds):
        if rounds is None:
            saltstring = "$%s$%s" % (self.algo_data.crypt_id, salt)
        else:
            saltstring = "$%s$rounds=%d$%s" % (self.algo_data.crypt_id, rounds, salt)

        # crypt.crypt on Python < 3.9 returns None if it cannot parse saltstring
        # On Python >= 3.9, it throws OSError.
        try:
            result = crypt.crypt(secret, saltstring)
            orig_exc = None
        except OSError as e:
            result = None
            orig_exc = e

        # None as result would be interpreted by the some modules (user module)
        # as no password at all.
        if not result:
            raise AnsibleError(
                "crypt.crypt does not support '%s' algorithm" % self.algorithm,
                orig_exc=orig_exc,
            )

        return result


class PasslibHash(BaseHash):
    def __init__(self, algorithm):
        super(PasslibHash, self).__init__(algorithm)

        if not PASSLIB_AVAILABLE:
            raise AnsibleError("passlib must be installed to hash with '%s'" % algorithm)

        try:
            self.crypt_algo = getattr(passlib.hash, algorithm)
        except Exception:
            raise AnsibleError("passlib does not support '%s' algorithm" % algorithm)

    def hash(self, secret, salt=None, salt_size=None, rounds=None):
        salt = self._clean_salt(salt)
        rounds = self._clean_rounds(rounds)
        return self._hash(secret, salt=salt, salt_size=salt_size, rounds=rounds)

    def _clean_salt(self, salt):
        if not salt:
            return None
        elif issubclass(self.crypt_algo, HasRawSalt):
            ret = to_bytes(salt, encoding='ascii', errors='strict')
        else:
            ret = to_text(salt, encoding='ascii', errors='strict')

        # Ensure the salt has the correct padding
        if self.algorithm == 'bcrypt':
            ret = bcrypt64.repair_unused(ret)

        return ret

    def _clean_rounds(self, rounds):
        algo_data = self.algorithms.get(self.algorithm)
        if rounds:
            return rounds
        elif algo_data and algo_data.implicit_rounds:
            # The default rounds used by passlib depend on the passlib version.
            # For consistency ensure that passlib behaves the same as crypt in case no rounds were specified.
            # Thus use the crypt defaults.
            return algo_data.implicit_rounds
        else:
            return None

    def _hash(self, secret, salt, salt_size, rounds):
        # Not every hash algorithm supports every parameter.
        # Thus create the settings dict only with set parameters.
        settings = {}
        if salt:
            settings['salt'] = salt
        if salt_size:
            settings['salt_size'] = salt_size
        if rounds:
            settings['rounds'] = rounds

        # starting with passlib 1.7 'using' and 'hash' should be used instead of 'encrypt'
        if hasattr(self.crypt_algo, 'hash'):
            result = self.crypt_algo.using(**settings).hash(secret)
        elif hasattr(self.crypt_algo, 'encrypt'):
            result = self.crypt_algo.encrypt(secret, **settings)
        else:
            raise AnsibleError("installed passlib version %s not supported" % passlib.__version__)

        # passlib.hash should always return something or raise an exception.
        # Still ensure that there is always a result.
        # Otherwise an empty password might be assumed by some modules, like the user module.
        if not result:
            raise AnsibleError("failed to hash with algorithm '%s'" % self.algorithm)

        # Hashes from passlib.hash should be represented as ascii strings of hex
        # digits so this should not traceback.  If it's not representable as such
        # we need to traceback and then blacklist such algorithms because it may
        # impact calling code.
        return to_text(result, errors='strict')


def passlib_or_crypt(secret, algorithm, salt=None, salt_size=None, rounds=None):
    if PASSLIB_AVAILABLE:
        return PasslibHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds)
    else:
        return CryptHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds)


def do_encrypt(result, encrypt, salt_size=None, salt=None):
    return passlib_or_crypt(result, encrypt, salt_size=salt_size, salt=salt)
```
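A short usage sketch of the module above (the pre-fix revision, per before_fix_sha), runnable where ansible-base is installed: `do_encrypt` routes through passlib when it is available and otherwise falls back to `crypt.crypt`, which is exactly the path that breaks on FIPS hosts. The secret and salt values are illustrative:

```python
# Usage sketch for lib/ansible/utils/encrypt.py as shown above.
# With passlib installed the PasslibHash branch runs; without it the
# CryptHash branch calls crypt.crypt, the FIPS-sensitive code path.
from ansible.utils.encrypt import do_encrypt, random_salt

salt = random_salt(16)
hashed = do_encrypt("correct horse battery staple", "sha512_crypt", salt=salt)
print(hashed)  # something like $6$<salt>$<digest>
```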
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 74279
title: filter module calls import crypt which fails on FIPS systems
### Summary When I use a core filter like `to_yaml`, then `plugins/filter/core.py` calls `import crypt`, which tries to load MD5 methods. In a FIPS-enabled system, this leads to an "operation not permitted" error. ### Issue Type Bug Report ### Component Name ansible ### Ansible Version ```console ansible --version ansible 2.10.8 config file = None configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.9/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.9.4 (default, Apr 14 2021, 12:55:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] ``` ### Configuration ```console $ ansible-config dump --only-changed # there are no changes from the default ``` ### OS / Environment RHEL 7 host running image centos:7 on containerd ```shell % openssl version OpenSSL 1.0.2t-fips 10 Sep 2019 ``` ```shell % pip show cryptography Name: cryptography Version: 3.0 Summary: cryptography is a package which provides cryptographic recipes and primitives to Python developers. Home-page: https://github.com/pyca/cryptography Author: The cryptography developers Author-email: [email protected] License: BSD or Apache License, Version 2.0 Location: /usr/local/lib/python3.9/site-packages Requires: cffi, six Required-by: ansible-base ``` ### Steps to Reproduce ```yaml (paste below) # try_fips.yml - name: test fips gather_facts: no hosts: localhost connection: local vars: a: b: 1 c: 2 tasks: - debug: msg: "{{ a | to_yaml }}" ``` ```shell ansible-playbook -vvv try_fips.yml ansible-playbook 2.10.8 config file = None configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.9/site-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 3.9.4 (default, Apr 14 2021, 12:55:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] No config file found; using defaults host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Skipping due to inventory source not existing or not being readable by the current user script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Skipping due to inventory source not existing or not being readable by the current user yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Skipping due to inventory source not existing or not being readable by the current user ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Skipping due to inventory source not existing or not being readable by the current user toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' Skipping callback 'default', as we already have a stdout callback. Skipping callback 'minimal', as we already have a stdout callback. Skipping callback 'oneline', as we already have a stdout callback. 
PLAYBOOK: prove_fips.yml ************************************************************************************************************************************************************* 1 plays in /support/scripts/prove_fips.yml PLAY [test fips] ********************************************************************************************************************************************************************* META: ran handlers TASK [debug] ************************************************************************************************************************************************************************* task path: /support/scripts/prove_fips.yml:12 [WARNING]: Skipping plugin (/usr/local/lib/python3.9/site-packages/ansible/plugins/filter/core.py) as it seems to be invalid: [Errno 1] Operation not permitted [WARNING]: an unexpected error occurred during Jinja2 environment setup: [Errno 1] Operation not permitted line 0 exception during Jinja2 environment setup: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 504, in __getitem__ plugin_impl = self._pluginloader.get(module_name) File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 993, in get return super(Jinja2Loader, self).get(name, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 792, in get return self.get_with_context(name, *args, **kwargs).object File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 812, in get_with_context self._module_cache[path] = self._load_module_source(name, path) File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 776, in _load_module_source spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 790, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/usr/local/lib/python3.9/site-packages/ansible/plugins/filter/core.py", line 23, in <module> import crypt File "/usr/local/lib/python3.9/crypt.py", line 117, in <module> _add_method('MD5', '1', 8, 34) File "/usr/local/lib/python3.9/crypt.py", line 94, in _add_method result = crypt('', salt) """Wrapper to the POSIX crypt library call and associated functionality.""" import sys as _sys try: import _crypt except ModuleNotFoundError: if _sys.platform == 'win32': raise ImportError("The crypt module is not supported on Windows") else: raise ImportError("The required _crypt module was not built as part of CPython") import errno import string as _string from random import SystemRandom as _SystemRandom from collections import namedtuple as _namedtuple _saltchars = _string.ascii_letters + _string.digits + './' _sr = _SystemRandom() class _Method(_namedtuple('_Method', 'name ident salt_chars total_size')): """Class representing a salt method per the Modular Crypt Format or the legacy 2-character crypt method.""" def __repr__(self): return '<crypt.METHOD_{}>'.format(self.name) def mksalt(method=None, *, rounds=None): """Generate a salt for the specified method. If not specified, the strongest available method will be used. 
""" if method is None: method = methods[0] if rounds is not None and not isinstance(rounds, int): raise TypeError(f'{rounds.__class__.__name__} object cannot be ' f'interpreted as an integer') if not method.ident: # traditional s = '' else: # modular s = f'${method.ident}$' if method.ident and method.ident[0] == '2': # Blowfish variants if rounds is None: "/usr/local/lib/python3.9/crypt.py" 120L, 3819C prepended. File "/usr/local/lib/python3.9/crypt.py", line 82, in crypt return _crypt.crypt(word, salt) PermissionError: [Errno 1] Operation not permitted During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 506, in __getitem__ raise TemplateSyntaxError(to_native(e), 0) jinja2.exceptions.TemplateSyntaxError: [Errno 1] Operation not permitted line 0 exception during Jinja2 environment setup: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 504, in __getitem__ plugin_impl = self._pluginloader.get(module_name) File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 993, in get return super(Jinja2Loader, self).get(name, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 792, in get return self.get_with_context(name, *args, **kwargs).object File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 812, in get_with_context self._module_cache[path] = self._load_module_source(name, path) File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 776, in _load_module_source spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 790, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/usr/local/lib/python3.9/site-packages/ansible/plugins/filter/core.py", line 23, in <module> import crypt File "/usr/local/lib/python3.9/crypt.py", line 117, in <module> _add_method('MD5', '1', 8, 34) File "/usr/local/lib/python3.9/crypt.py", line 94, in _add_method result = crypt('', salt) File "/usr/local/lib/python3.9/crypt.py", line 82, in crypt return _crypt.crypt(word, salt) PermissionError: [Errno 1] Operation not permitted During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 506, in __getitem__ raise TemplateSyntaxError(to_native(e), 0) jinja2.exceptions.TemplateSyntaxError: [Errno 1] Operation not permitted line 0 fatal: [localhost]: FAILED! => { "msg": "template error while templating string: [Errno 1] Operation not permitted\n line 0. String: {{ a | to_yaml }}" } PLAY RECAP *************************************************************************************************************************************************************************** localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Expected Results I expected ansible to dump the simple data structure to yaml. 
This works if I comment out the `_add_method('MD5'...)` and `_add_method('CRYPT'...)` lines in `/usr/local/lib/python3.9/crypt.py`: ``` TASK [debug] ************************************************************************************************************************************************************************* task path: /support/scripts/prove_fips.yml:12 ok: [localhost] => { "msg": "{b: 1, c: 2}\n" } META: ran handlers META: ran handlers PLAY RECAP *************************************************************************************************************************************************************************** localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ### Actual Results ```console ansible-playbook -vvv /support/scripts/prove_fips.yml ansible-playbook 2.10.8 config file = None configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.9/site-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 3.9.4 (default, Apr 14 2021, 12:55:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] No config file found; using defaults host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Skipping due to inventory source not existing or not being readable by the current user script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Skipping due to inventory source not existing or not being readable by the current user yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Skipping due to inventory source not existing or not being readable by the current user ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Skipping due to inventory source not existing or not being readable by the current user toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' Skipping callback 'default', as we already have a stdout callback. Skipping callback 'minimal', as we already have a stdout callback. Skipping callback 'oneline', as we already have a stdout callback. 
PLAYBOOK: prove_fips.yml *************************************************************************************************************************************************************
1 plays in /support/scripts/prove_fips.yml

PLAY [test fips] *********************************************************************************************************************************************************************
META: ran handlers

TASK [debug] *************************************************************************************************************************************************************************
task path: /support/scripts/prove_fips.yml:12
[WARNING]: Skipping plugin (/usr/local/lib/python3.9/site-packages/ansible/plugins/filter/core.py) as it seems to be invalid: [Errno 1] Operation not permitted
[WARNING]: an unexpected error occurred during Jinja2 environment setup: [Errno 1] Operation not permitted
 line 0
exception during Jinja2 environment setup: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 504, in __getitem__
    plugin_impl = self._pluginloader.get(module_name)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 993, in get
    return super(Jinja2Loader, self).get(name, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 792, in get
    return self.get_with_context(name, *args, **kwargs).object
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 812, in get_with_context
    self._module_cache[path] = self._load_module_source(name, path)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 776, in _load_module_source
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/filter/core.py", line 23, in <module>
    import crypt
  File "/usr/local/lib/python3.9/crypt.py", line 117, in <module>
    _add_method('MD5', '1', 8, 34)
  File "/usr/local/lib/python3.9/crypt.py", line 94, in _add_method
    result = crypt('', salt)
  File "/usr/local/lib/python3.9/crypt.py", line 82, in crypt
    return _crypt.crypt(word, salt)
PermissionError: [Errno 1] Operation not permitted

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 506, in __getitem__
    raise TemplateSyntaxError(to_native(e), 0)
jinja2.exceptions.TemplateSyntaxError: [Errno 1] Operation not permitted
 line 0

exception during Jinja2 environment setup: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 504, in __getitem__
    plugin_impl = self._pluginloader.get(module_name)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 993, in get
    return super(Jinja2Loader, self).get(name, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 792, in get
    return self.get_with_context(name, *args, **kwargs).object
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 812, in get_with_context
    self._module_cache[path] = self._load_module_source(name, path)
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/loader.py", line 776, in _load_module_source
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/usr/local/lib/python3.9/site-packages/ansible/plugins/filter/core.py", line 23, in <module>
    import crypt
  File "/usr/local/lib/python3.9/crypt.py", line 117, in <module>
    _add_method('MD5', '1', 8, 34)
  File "/usr/local/lib/python3.9/crypt.py", line 94, in _add_method
    result = crypt('', salt)
  File "/usr/local/lib/python3.9/crypt.py", line 82, in crypt
    return _crypt.crypt(word, salt)
PermissionError: [Errno 1] Operation not permitted

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/ansible/template/__init__.py", line 506, in __getitem__
    raise TemplateSyntaxError(to_native(e), 0)
jinja2.exceptions.TemplateSyntaxError: [Errno 1] Operation not permitted
 line 0

fatal: [localhost]: FAILED! => {
    "msg": "template error while templating string: [Errno 1] Operation not permitted\n line 0. String: {{ a | to_yaml }}"
}

PLAY RECAP ***************************************************************************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
```

### Code of Conduct

I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74279
https://github.com/ansible/ansible/pull/74304
d44eb03f49c89cc8a3e398d37ea35db572b354e7
4494ef3a9d0b0816e228a2b0cf8ebce9b732253a
2021-04-14T17:59:39Z
python
2021-04-20T15:47:34Z
test/units/utils/test_encrypt.py
# (c) 2018, Matthias Fuchs <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import sys

import pytest

from ansible.errors import AnsibleError, AnsibleFilterError
from ansible.plugins.filter.core import get_encrypted_password
from ansible.utils import encrypt


class passlib_off(object):
    def __init__(self):
        self.orig = encrypt.PASSLIB_AVAILABLE

    def __enter__(self):
        encrypt.PASSLIB_AVAILABLE = False
        return self

    def __exit__(self, exception_type, exception_value, traceback):
        encrypt.PASSLIB_AVAILABLE = self.orig


def assert_hash(expected, secret, algorithm, **settings):
    if encrypt.PASSLIB_AVAILABLE:
        assert encrypt.passlib_or_crypt(secret, algorithm, **settings) == expected
        assert encrypt.PasslibHash(algorithm).hash(secret, **settings) == expected
    else:
        assert encrypt.passlib_or_crypt(secret, algorithm, **settings) == expected
        with pytest.raises(AnsibleError) as excinfo:
            encrypt.PasslibHash(algorithm).hash(secret, **settings)
        assert excinfo.value.args[0] == "passlib must be installed to hash with '%s'" % algorithm


@pytest.mark.skipif(sys.platform.startswith('darwin'), reason='macOS requires passlib')
def test_encrypt_with_rounds_no_passlib():
    with passlib_off():
        assert_hash("$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7",
                    secret="123", algorithm="sha256_crypt", salt="12345678", rounds=5000)
        assert_hash("$5$rounds=10000$12345678$JBinliYMFEcBeAXKZnLjenhgEhTmJBvZn3aR8l70Oy/",
                    secret="123", algorithm="sha256_crypt", salt="12345678", rounds=10000)
        assert_hash("$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.",
                    secret="123", algorithm="sha512_crypt", salt="12345678", rounds=5000)


# If passlib is not installed, this is identical to the test_encrypt_with_rounds_no_passlib() test
@pytest.mark.skipif(not encrypt.PASSLIB_AVAILABLE, reason='passlib must be installed to run this test')
def test_encrypt_with_rounds():
    assert_hash("$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7",
                secret="123", algorithm="sha256_crypt", salt="12345678", rounds=5000)
    assert_hash("$5$rounds=10000$12345678$JBinliYMFEcBeAXKZnLjenhgEhTmJBvZn3aR8l70Oy/",
                secret="123", algorithm="sha256_crypt", salt="12345678", rounds=10000)
    assert_hash("$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.",
                secret="123", algorithm="sha512_crypt", salt="12345678", rounds=5000)


@pytest.mark.skipif(sys.platform.startswith('darwin'), reason='macOS requires passlib')
def test_encrypt_default_rounds_no_passlib():
    with passlib_off():
        assert_hash("$1$12345678$tRy4cXc3kmcfRZVj4iFXr/",
                    secret="123", algorithm="md5_crypt", salt="12345678")
        assert_hash("$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7",
                    secret="123", algorithm="sha256_crypt", salt="12345678")
        assert_hash("$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.",
                    secret="123", algorithm="sha512_crypt", salt="12345678")

        assert encrypt.CryptHash("md5_crypt").hash("123")


# If passlib is not installed, this is identical to the test_encrypt_default_rounds_no_passlib() test
@pytest.mark.skipif(not encrypt.PASSLIB_AVAILABLE, reason='passlib must be installed to run this test')
def test_encrypt_default_rounds():
    assert_hash("$1$12345678$tRy4cXc3kmcfRZVj4iFXr/",
                secret="123", algorithm="md5_crypt", salt="12345678")
    assert_hash("$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7",
                secret="123", algorithm="sha256_crypt", salt="12345678")
    assert_hash("$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.",
                secret="123", algorithm="sha512_crypt", salt="12345678")

    assert encrypt.PasslibHash("md5_crypt").hash("123")


@pytest.mark.skipif(sys.platform.startswith('darwin'), reason='macOS requires passlib')
def test_password_hash_filter_no_passlib():
    with passlib_off():
        assert not encrypt.PASSLIB_AVAILABLE
        assert get_encrypted_password("123", "md5", salt="12345678") == "$1$12345678$tRy4cXc3kmcfRZVj4iFXr/"

        with pytest.raises(AnsibleFilterError):
            get_encrypted_password("123", "crypt16", salt="12")


def test_password_hash_filter_passlib():
    if not encrypt.PASSLIB_AVAILABLE:
        pytest.skip("passlib not available")

    with pytest.raises(AnsibleFilterError):
        get_encrypted_password("123", "sha257", salt="12345678")

    # Uses 5000 rounds by default for sha256 matching crypt behaviour
    assert get_encrypted_password("123", "sha256", salt="12345678") == "$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7"
    assert get_encrypted_password("123", "sha256", salt="12345678", rounds=5000) == "$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7"

    assert (get_encrypted_password("123", "sha256", salt="12345678", rounds=10000) ==
            "$5$rounds=10000$12345678$JBinliYMFEcBeAXKZnLjenhgEhTmJBvZn3aR8l70Oy/")

    assert (get_encrypted_password("123", "sha512", salt="12345678", rounds=6000) ==
            "$6$rounds=6000$12345678$l/fC67BdJwZrJ7qneKGP1b6PcatfBr0dI7W6JLBrsv8P1wnv/0pu4WJsWq5p6WiXgZ2gt9Aoir3MeORJxg4.Z/")

    assert (get_encrypted_password("123", "sha512", salt="12345678", rounds=5000) ==
            "$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.")

    assert get_encrypted_password("123", "crypt16", salt="12") == "12pELHK2ME3McUFlHxel6uMM"

    # Try algorithm that uses a raw salt
    assert get_encrypted_password("123", "pbkdf2_sha256")


@pytest.mark.skipif(sys.platform.startswith('darwin'), reason='macOS requires passlib')
def test_do_encrypt_no_passlib():
    with passlib_off():
        assert not encrypt.PASSLIB_AVAILABLE
        assert encrypt.do_encrypt("123", "md5_crypt", salt="12345678") == "$1$12345678$tRy4cXc3kmcfRZVj4iFXr/"

        with pytest.raises(AnsibleError):
            encrypt.do_encrypt("123", "crypt16", salt="12")


def test_do_encrypt_passlib():
    if not encrypt.PASSLIB_AVAILABLE:
        pytest.skip("passlib not available")

    with pytest.raises(AnsibleError):
        encrypt.do_encrypt("123", "sha257_crypt", salt="12345678")

    # Uses 5000 rounds by default for sha256 matching crypt behaviour.
    assert encrypt.do_encrypt("123", "sha256_crypt", salt="12345678") == "$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7"

    assert encrypt.do_encrypt("123", "md5_crypt", salt="12345678") == "$1$12345678$tRy4cXc3kmcfRZVj4iFXr/"

    assert encrypt.do_encrypt("123", "crypt16", salt="12") == "12pELHK2ME3McUFlHxel6uMM"


def test_random_salt():
    res = encrypt.random_salt()
    expected_salt_candidate_chars = u'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789./'
    assert len(res) == 8
    for res_char in res:
        assert res_char in expected_salt_candidate_chars


def test_invalid_crypt_salt():
    pytest.raises(
        AnsibleError,
        encrypt.CryptHash('bcrypt')._salt,
        '_',
        None
    )
    encrypt.CryptHash('bcrypt')._salt('1234567890123456789012', None)
    pytest.raises(
        AnsibleError,
        encrypt.CryptHash('bcrypt')._salt,
        'kljsdf',
        None
    )
    encrypt.CryptHash('sha256_crypt')._salt('123456', None)
    pytest.raises(
        AnsibleError,
        encrypt.CryptHash('sha256_crypt')._salt,
        '1234567890123456789012',
        None
    )


def test_passlib_bcrypt_salt(recwarn):
    passlib_exc = pytest.importorskip("passlib.exc")

    secret = 'foo'
    salt = '1234567890123456789012'
    repaired_salt = '123456789012345678901u'
    expected = '$2b$12$123456789012345678901uMv44x.2qmQeefEGb3bcIRc1mLuO7bqa'

    p = encrypt.PasslibHash('bcrypt')

    result = p.hash(secret, salt=salt)
    passlib_warnings = [w.message for w in recwarn if isinstance(w.message, passlib_exc.PasslibHashWarning)]
    assert len(passlib_warnings) == 0
    assert result == expected

    recwarn.clear()

    result = p.hash(secret, salt=repaired_salt)
    assert result == expected
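The failure in this report happens at plugin import time, so one plausible direction for a fix is simply tolerating an unloadable `crypt` module. This is a minimal sketch of that import guard, assuming a FIPS host where the import itself raises; the `HAS_CRYPT` flag is illustrative and not taken from the linked PR:

```python
# Sketch, not the actual fix: tolerate a failing "import crypt" so that
# plugin loading survives on FIPS-enabled hosts. There, crypt's
# module-level self-registration of MD5-based methods can raise
# PermissionError (errno 1), not just ImportError.
try:
    import crypt
    HAS_CRYPT = True
except Exception:  # ImportError, or PermissionError on FIPS systems
    crypt = None
    HAS_CRYPT = False

# Callers would then check HAS_CRYPT (or crypt is None) before using
# crypt-based hashing, and raise a clear error only at that point.
```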
closed
ansible/ansible
https://github.com/ansible/ansible
74,313
The playbooks_templating page does not have a pointer to a Jinja2 document.
### Summary

There is no pointer to the Jinja2 docs at https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html. Would you please add a pointer to the Jinja2 docs (https://jinja.palletsprojects.com/en/2.11.x/) in the 'see also' section?

### Issue Type

Documentation Report

### Component Name

user_guide/playbooks_templating

### Ansible Version

```console
Not relevant
```

### Configuration

```console
Not relevant
```

### OS / Environment

Not relevant

### Additional Information

The Jinja2 syntax coverage provided on ansible.com is not a complete reference for creating playbooks and templates, so a pointer from these documents to the Jinja2 docs on Jinja2 syntax would really help.

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74313
https://github.com/ansible/ansible/pull/74348
49d4442378636128d6f93e7905dfe8fe06006b9b
9ed0e37e536bcf39cae9ed3ee6007746f7391cb4
2021-04-16T10:47:14Z
python
2021-04-20T17:14:08Z
docs/docsite/rst/user_guide/playbooks_templating.rst
.. _playbooks_templating:

*******************
Templating (Jinja2)
*******************

Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. Ansible includes a lot of specialized filters and tests for templating. You can use all the :ref:`standard filters and tests <jinja2:builtin-filters>` included in Jinja2 as well. Ansible also offers a new plugin type: :ref:`lookup_plugins`.

All templating happens on the Ansible controller **before** the task is sent and executed on the target machine. This approach minimizes the package requirements on the target (jinja2 is only required on the controller). It also limits the amount of data Ansible passes to the target machine. Ansible parses templates on the controller and passes only the information needed for each task to the target machine, instead of passing all the data on the controller and parsing it on the target.

.. contents::
   :local:

.. toctree::
   :maxdepth: 2

   playbooks_filters
   playbooks_tests
   playbooks_lookups
   playbooks_python_version

.. _templating_now:

Get the current time
====================

.. versionadded:: 2.8

The ``now()`` Jinja2 function retrieves a Python datetime object or a string representation for the current time.

The ``now()`` function supports 2 arguments:

utc
  Specify ``True`` to get the current time in UTC. Defaults to ``False``.

fmt
  Accepts a `strftime <https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior>`_ string that returns a formatted date time string.

A Python sketch of these semantics appears after this listing.

.. seealso::

   :ref:`playbooks_intro`
       An introduction to playbooks
   :ref:`playbooks_conditionals`
       Conditional statements in playbooks
   :ref:`playbooks_loops`
       Looping in playbooks
   :ref:`playbooks_reuse_roles`
       Playbook organization by roles
   :ref:`playbooks_best_practices`
       Tips and tricks for playbooks
   `User Mailing List <https://groups.google.com/group/ansible-devel>`_
       Have a question? Stop by the google group!
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel
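The ``now()`` semantics documented in the page above are easy to mirror in plain Python. This sketch only illustrates the documented ``utc``/``fmt`` behavior and is not Ansible's implementation:

```python
from datetime import datetime, timezone


def now(utc=False, fmt=None):
    # Illustrative sketch of the documented semantics of Ansible's now()
    # template function: a datetime object (optionally in UTC), or a
    # strftime-formatted string when fmt is given.
    dt = datetime.now(timezone.utc) if utc else datetime.now()
    return dt.strftime(fmt) if fmt is not None else dt


print(now(utc=True, fmt='%Y-%m-%d %H:%M:%S'))
```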
closed
ansible/ansible
https://github.com/ansible/ansible
73,983
argument spec refactoring breaks some things
### Summary

I'm seeing some breakage in collections caused (very likely) by #73703:

1. `AttributeError: 'AnsibleModule' object has no attribute '_check_type_dict'` (community.general, infoblox.nios_modules);
2. `ImportError: cannot import name 'handle_aliases'` (from `ansible.module_utils.common.parameters`) (community.crypto, community.sops).

While 1. could be considered 'own fault' because it is a private API, 2. looks like a bug, since that's a break in a public interface.

### Issue Type

Bug Report

### Component Name

core

### Ansible Version

devel

### Configuration

.

### OS / Environment

.

### Steps to Reproduce

.

### Expected Results

.

### Actual Results

.
https://github.com/ansible/ansible/issues/73983
https://github.com/ansible/ansible/pull/74268
6e56e72d9966999911b572fc2856a66beb48276f
2cbfd1e350cbe1ca195d33306b5a9628667ddda8
2021-03-20T09:33:08Z
python
2021-04-20T19:40:53Z
docs/docsite/rst/porting_guides/porting_guide_core_2.11.rst
.. _porting_2.11_guide_core:

*******************************
Ansible-core 2.11 Porting Guide
*******************************

This section discusses the behavioral changes between ``ansible-base`` 2.10 and ``ansible-core`` 2.11.

It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they work with this version of ``ansible-core``.

We suggest you read this page along with the `ansible-core Changelog for 2.11 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.11.rst>`_ to understand what updates you may need to make.

``ansible-core`` is mainly of interest for developers and users who only want to use a small, controlled subset of the available collections. Regular users should install Ansible.

The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.

.. contents::

Playbook
========

* The ``jinja2_native`` setting now does not affect the template module which implicitly returns strings. For the template lookup there is a new argument ``jinja2_native`` (off by default) to control that functionality. The rest of the Jinja2 expressions still operate based on the ``jinja2_native`` setting.

Command Line
============

* The ``ansible-galaxy login`` command has been removed, as the underlying API it used for GitHub auth has been shut down. Publishing roles or collections to Galaxy with ``ansible-galaxy`` now requires that a Galaxy API token be passed to the CLI using a token file (default location ``~/.ansible/galaxy_token``) or (insecurely) with the ``--token`` argument to ``ansible-galaxy``.

Other:
======

* **Upgrading**: If upgrading from ``ansible < 2.10`` or from ``ansible-base`` and using pip, you must ``pip uninstall ansible`` or ``pip uninstall ansible-base`` before installing ``ansible-core`` to avoid conflicts.
* Python 3.8 on the controller node is a soft requirement for this release. ``ansible-core`` 2.11 still works with the same versions of Python that ``ansible-base`` 2.10 worked with, however 2.11 emits a warning when running on a controller node with a Python version less than 3.8. This warning can be disabled by setting ``ANSIBLE_CONTROLLER_PYTHON_WARNING=False`` in your environment. ``ansible-core`` 2.12 will require Python 3.8 or greater.
* The configuration system now validates the ``choices`` field, so any settings that violate it and were ignored in 2.10 cause an error in 2.11. For example, ``ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH=0`` now causes an error (valid choices are ``ignore``, ``warn`` or ``error``).
* The ``ansible-galaxy`` command now uses ``resolvelib`` for resolving dependencies. In most cases this should not make a user-facing difference beyond being more performant, but we note it here for posterity and completeness.
* If you import Python ``module_utils`` into any modules you maintain, you may now mark the import as optional during the module payload build by wrapping the ``import`` statement in a ``try`` or ``if`` block. This allows modules to use ``module_utils`` that may not be present in all versions of Ansible or a collection, and to perform arbitrary recovery or fallback actions during module runtime (a sketch of this pattern appears after this listing).

Deprecated
==========

No notable changes

Modules
=======

* The ``apt_key`` module has explicitly defined ``file`` as mutually exclusive with ``data``, ``keyserver`` and ``url``. They cannot be used together anymore.
* The ``meta`` module now supports tags for user-defined tasks. Set the task's tags to 'always' to maintain the previous behavior. Internal ``meta`` tasks continue to always run.

Modules removed
---------------

The following modules no longer exist:

* No notable changes

Deprecation notices
-------------------

No notable changes

Noteworthy module changes
-------------------------

* facts - On NetBSD, ``ansible_virtualization_type`` now tries to report a more accurate result than ``xen`` when virtualized and not running on Xen.
* facts - Virtualization facts now include ``virtualization_tech_guest`` and ``virtualization_tech_host`` keys. These are lists of virtualization technologies that a guest is a part of, or that a host provides, respectively. As an example, if you set up a host to provide both KVM and VirtualBox, both values are included in ``virtualization_tech_host``. Similarly, a podman container running on a VM powered by KVM has a ``virtualization_tech_guest`` of ``["kvm", "podman", "container"]``.
* The parameter ``filter`` type is changed from ``string`` to ``list`` in the :ref:`setup <setup_module>` module in order to use more than one filter. Previous behaviour (using a ``string``) still remains and works as a single filter.

Plugins
=======

* inventory plugins - ``CachePluginAdjudicator.flush()`` now calls the underlying cache plugin's ``flush()`` instead of only deleting keys that it knows about. Inventory plugins should use ``delete()`` to remove any specific keys. As a user, this means that when an inventory plugin calls its ``clear_cache()`` method, facts could also be flushed from the cache. To work around this, users can configure inventory plugins to use a cache backend that is independent of the facts cache.
* callback plugins - ``meta`` task execution is now sent to ``v2_playbook_on_task_start`` like any other task. By default, only explicit meta tasks are sent there. Callback plugins can opt-in to receiving internal, implicitly created tasks to act on those as well, as noted in the plugin development documentation.
* The ``choices`` are now validated, so plugins that were using incorrect or incomplete choices issue an error in 2.11 if the value provided does not match. This has a simple fix: update the entries in ``choices`` to match reality.

Porting custom scripts
======================

No notable changes
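The optional ``module_utils`` import mentioned under "Other" in the guide above boils down to a plain ``try``/``except ImportError`` around the import. In this sketch, the collection path and helper name are hypothetical:

```python
# Sketch of the optional module_utils import pattern described in the
# porting guide. The module_utils path and helper name are made up.
try:
    from ansible_collections.my_ns.my_coll.plugins.module_utils.extras import fancy_helper
    HAS_EXTRAS = True
except ImportError:
    fancy_helper = None
    HAS_EXTRAS = False


def describe():
    # Fall back gracefully when the optional helper is unavailable.
    return fancy_helper() if HAS_EXTRAS else 'extras not available'
```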
closed
ansible/ansible
https://github.com/ansible/ansible
73,983
argument spec refactoring breaks some things
### Summary

I'm seeing some breakage in collections caused (very likely) by #73703:

1. `AttributeError: 'AnsibleModule' object has no attribute '_check_type_dict'` (community.general, infoblox.nios_modules);
2. `ImportError: cannot import name 'handle_aliases'` (from `ansible.module_utils.common.parameters`) (community.crypto, community.sops).

While 1. could be considered 'own fault' because it is a private API, 2. looks like a bug, since that's a break in a public interface.

### Issue Type

Bug Report

### Component Name

core

### Ansible Version

devel

### Configuration

.

### OS / Environment

.

### Steps to Reproduce

.

### Expected Results

.

### Actual Results

.
https://github.com/ansible/ansible/issues/73983
https://github.com/ansible/ansible/pull/74268
6e56e72d9966999911b572fc2856a66beb48276f
2cbfd1e350cbe1ca195d33306b5a9628667ddda8
2021-03-20T09:33:08Z
python
2021-04-20T19:40:53Z
docs/docsite/rst/reference_appendices/module_utils.rst
.. _ansible.module_utils:
.. _module_utils:

***************************************************************
Ansible Reference: Module Utilities
***************************************************************

This page documents utilities intended to be helpful when writing Ansible modules in Python.

AnsibleModule
--------------

To use this functionality, include ``from ansible.module_utils.basic import AnsibleModule`` in your module.

.. autoclass:: ansible.module_utils.basic.AnsibleModule
   :members:
   :noindex:

Basic
------

To use this functionality, include ``import ansible.module_utils.basic`` in your module.

.. automodule:: ansible.module_utils.basic
   :members:
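For orientation, a minimal module skeleton built on the ``AnsibleModule`` import shown in the page above; the ``name`` option is made up for illustration:

```python
from __future__ import absolute_import, division, print_function
__metaclass__ = type

from ansible.module_utils.basic import AnsibleModule


def main():
    # Illustrative argument spec; 'name' is an example option.
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
        ),
        supports_check_mode=True,
    )
    module.exit_json(changed=False, msg='Hello, %s' % module.params['name'])


if __name__ == '__main__':
    main()
```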
closed
ansible/ansible
https://github.com/ansible/ansible
73,983
argument spec refactoring breaks some things
### Summary

I'm seeing some breakage in collections caused (very likely) by #73703:

1. `AttributeError: 'AnsibleModule' object has no attribute '_check_type_dict'` (community.general, infoblox.nios_modules);
2. `ImportError: cannot import name 'handle_aliases'` (from `ansible.module_utils.common.parameters`) (community.crypto, community.sops).

While 1. could be considered 'own fault' because it is a private API, 2. looks like a bug, since that's a break in a public interface.

### Issue Type

Bug Report

### Component Name

core

### Ansible Version

devel

### Configuration

.

### OS / Environment

.

### Steps to Reproduce

.

### Expected Results

.

### Actual Results

.
https://github.com/ansible/ansible/issues/73983
https://github.com/ansible/ansible/pull/74268
6e56e72d9966999911b572fc2856a66beb48276f
2cbfd1e350cbe1ca195d33306b5a9628667ddda8
2021-03-20T09:33:08Z
python
2021-04-20T19:40:53Z
lib/ansible/module_utils/common/arg_spec.py
# -*- coding: utf-8 -*-
# Copyright (c) 2021 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

from copy import deepcopy

from ansible.module_utils.common.parameters import (
    _ADDITIONAL_CHECKS,
    _get_legal_inputs,
    _get_unsupported_parameters,
    _handle_aliases,
    _list_no_log_values,
    _set_defaults,
    _validate_argument_types,
    _validate_argument_values,
    _validate_sub_spec,
    set_fallbacks,
)

from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.common.warnings import deprecate, warn

from ansible.module_utils.common.validation import (
    check_mutually_exclusive,
    check_required_arguments,
    check_required_by,
    check_required_if,
    check_required_one_of,
    check_required_together,
)

from ansible.module_utils.errors import (
    AliasError,
    AnsibleValidationErrorMultiple,
    MutuallyExclusiveError,
    NoLogError,
    RequiredByError,
    RequiredDefaultError,
    RequiredError,
    RequiredIfError,
    RequiredOneOfError,
    RequiredTogetherError,
    UnsupportedError,
)


class ValidationResult:
    """Result of argument spec validation.

    :param parameters: Terms to be validated and coerced to the correct type.
    :type parameters: dict
    """

    def __init__(self, parameters):
        self._no_log_values = set()
        self._unsupported_parameters = set()
        self._validated_parameters = deepcopy(parameters)
        self._deprecations = []
        self._warnings = []
        self.errors = AnsibleValidationErrorMultiple()

    @property
    def validated_parameters(self):
        return self._validated_parameters

    @property
    def unsupported_parameters(self):
        return self._unsupported_parameters

    @property
    def error_messages(self):
        return self.errors.messages


class ArgumentSpecValidator:
    """Argument spec validation class

    Creates a validator based on the ``argument_spec`` that can be used to
    validate a number of parameters using the ``validate()`` method.

    :param argument_spec: Specification of valid parameters and their type. May
        include nested argument specs.
    :type argument_spec: dict

    :param mutually_exclusive: List or list of lists of terms that should not
        be provided together.
    :type mutually_exclusive: list, optional

    :param required_together: List of lists of terms that are required together.
    :type required_together: list, optional

    :param required_one_of: List of lists of terms, one of which in each list
        is required.
    :type required_one_of: list, optional

    :param required_if: List of lists of ``[parameter, value, [parameters]]`` where
        one of [parameters] is required if ``parameter`` == ``value``.
    :type required_if: list, optional

    :param required_by: Dictionary of parameter names that contain a list of
        parameters required by each key in the dictionary.
    :type required_by: dict, optional
    """

    def __init__(self, argument_spec,
                 mutually_exclusive=None,
                 required_together=None,
                 required_one_of=None,
                 required_if=None,
                 required_by=None,
                 ):

        self._mutually_exclusive = mutually_exclusive
        self._required_together = required_together
        self._required_one_of = required_one_of
        self._required_if = required_if
        self._required_by = required_by
        self._valid_parameter_names = set()
        self.argument_spec = argument_spec

        for key in sorted(self.argument_spec.keys()):
            aliases = self.argument_spec[key].get('aliases')
            if aliases:
                self._valid_parameter_names.update(["{key} ({aliases})".format(key=key, aliases=", ".join(sorted(aliases)))])
            else:
                self._valid_parameter_names.update([key])

    def validate(self, parameters, *args, **kwargs):
        """Validate module parameters against argument spec.

        Returns a ValidationResult object.

        Error messages in the ValidationResult may contain no_log values and should be
        sanitized before logging or displaying.

        :Example:

        validator = ArgumentSpecValidator(argument_spec)
        result = validator.validate(parameters)

        if result.error_messages:
            sys.exit("Validation failed: {0}".format(", ".join(result.error_messages)))

        valid_params = result.validated_parameters

        :param argument_spec: Specification of parameters, type, and valid values
        :type argument_spec: dict

        :param parameters: Parameters provided to the role
        :type parameters: dict

        :return: Object containing validated parameters.
        :rtype: ValidationResult
        """

        result = ValidationResult(parameters)

        result._no_log_values.update(set_fallbacks(self.argument_spec, result._validated_parameters))

        alias_warnings = []
        alias_deprecations = []
        try:
            aliases = _handle_aliases(self.argument_spec, result._validated_parameters, alias_warnings, alias_deprecations)
        except (TypeError, ValueError) as e:
            aliases = {}
            result.errors.append(AliasError(to_native(e)))

        legal_inputs = _get_legal_inputs(self.argument_spec, result._validated_parameters, aliases)

        for option, alias in alias_warnings:
            result._warnings.append({'option': option, 'alias': alias})

        for deprecation in alias_deprecations:
            result._deprecations.append({
                'name': deprecation['name'],
                'version': deprecation.get('version'),
                'date': deprecation.get('date'),
                'collection_name': deprecation.get('collection_name'),
            })

        try:
            result._no_log_values.update(_list_no_log_values(self.argument_spec, result._validated_parameters))
        except TypeError as te:
            result.errors.append(NoLogError(to_native(te)))

        try:
            result._unsupported_parameters.update(_get_unsupported_parameters(self.argument_spec, result._validated_parameters, legal_inputs))
        except TypeError as te:
            result.errors.append(RequiredDefaultError(to_native(te)))
        except ValueError as ve:
            result.errors.append(AliasError(to_native(ve)))

        try:
            check_mutually_exclusive(self._mutually_exclusive, result._validated_parameters)
        except TypeError as te:
            result.errors.append(MutuallyExclusiveError(to_native(te)))

        result._no_log_values.update(_set_defaults(self.argument_spec, result._validated_parameters, False))

        try:
            check_required_arguments(self.argument_spec, result._validated_parameters)
        except TypeError as e:
            result.errors.append(RequiredError(to_native(e)))

        _validate_argument_types(self.argument_spec, result._validated_parameters, errors=result.errors)
        _validate_argument_values(self.argument_spec, result._validated_parameters, errors=result.errors)

        for check in _ADDITIONAL_CHECKS:
            try:
                check['func'](getattr(self, "_{attr}".format(attr=check['attr'])), result._validated_parameters)
            except TypeError as te:
                result.errors.append(check['err'](to_native(te)))

        result._no_log_values.update(_set_defaults(self.argument_spec, result._validated_parameters))

        _validate_sub_spec(self.argument_spec, result._validated_parameters,
                           errors=result.errors,
                           no_log_values=result._no_log_values,
                           unsupported_parameters=result._unsupported_parameters)

        if result._unsupported_parameters:
            flattened_names = []
            for item in result._unsupported_parameters:
                if isinstance(item, tuple):
                    flattened_names.append(".".join(item))
                else:
                    flattened_names.append(item)

            unsupported_string = ", ".join(sorted(list(flattened_names)))
            supported_string = ", ".join(self._valid_parameter_names)
            result.errors.append(
                UnsupportedError("{0}. Supported parameters include: {1}.".format(unsupported_string, supported_string)))

        return result


class ModuleArgumentSpecValidator(ArgumentSpecValidator):
    def __init__(self, *args, **kwargs):
        super(ModuleArgumentSpecValidator, self).__init__(*args, **kwargs)

    def validate(self, parameters):
        result = super(ModuleArgumentSpecValidator, self).validate(parameters)

        for d in result._deprecations:
            deprecate("Alias '{name}' is deprecated. See the module docs for more information".format(name=d['name']),
                      version=d.get('version'), date=d.get('date'),
                      collection_name=d.get('collection_name'))

        for w in result._warnings:
            warn('Both option {option} and its alias {alias} are set.'.format(option=w['option'], alias=w['alias']))

        return result
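As a quick illustration of the validator above, a short usage sketch; the spec and parameter values are made up, not taken from the linked change:

```python
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator

# Illustrative spec and parameters.
spec = {
    'name': {'type': 'str', 'required': True},
    'count': {'type': 'int', 'default': 1},
}
validator = ArgumentSpecValidator(spec)
result = validator.validate({'name': 'web', 'count': '3'})

if result.error_messages:
    raise SystemExit("Validation failed: {0}".format(", ".join(result.error_messages)))

# 'count' is coerced from the string '3' to the int 3 by the type check.
assert result.validated_parameters['count'] == 3
```

Note that ``validate()`` reports problems through ``ValidationResult.errors`` rather than raising, so callers decide how to surface failures.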
closed
ansible/ansible
https://github.com/ansible/ansible
73,983
argument spec refactoring breaks some things
### Summary

I'm seeing some breakage in collections caused (very likely) by #73703:

1. `AttributeError: 'AnsibleModule' object has no attribute '_check_type_dict'` (community.general, infoblox.nios_modules);
2. `ImportError: cannot import name 'handle_aliases'` (from `ansible.module_utils.common.parameters`) (community.crypto, community.sops).

While 1. could be considered 'own fault' because it is a private API, 2. looks like a bug, since that's a break in a public interface.

### Issue Type

Bug Report

### Component Name

core

### Ansible Version

devel

### Configuration

.

### OS / Environment

.

### Steps to Reproduce

.

### Expected Results

.

### Actual Results

.
https://github.com/ansible/ansible/issues/73983
https://github.com/ansible/ansible/pull/74268
6e56e72d9966999911b572fc2856a66beb48276f
2cbfd1e350cbe1ca195d33306b5a9628667ddda8
2021-03-20T09:33:08Z
python
2021-04-20T19:40:53Z
lib/ansible/module_utils/common/parameters.py
# -*- coding: utf-8 -*- # Copyright (c) 2019 Ansible Project # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause) from __future__ import absolute_import, division, print_function __metaclass__ = type import datetime import os from collections import deque from itertools import chain from ansible.module_utils.common.collections import is_iterable from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text from ansible.module_utils.common.text.formatters import lenient_lowercase from ansible.module_utils.common.warnings import warn from ansible.module_utils.errors import ( AliasError, AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, ArgumentTypeError, ArgumentValueError, ElementError, MutuallyExclusiveError, NoLogError, RequiredByError, RequiredError, RequiredIfError, RequiredOneOfError, RequiredTogetherError, SubParameterTypeError, ) from ansible.module_utils.parsing.convert_bool import BOOLEANS_FALSE, BOOLEANS_TRUE from ansible.module_utils.common._collections_compat import ( KeysView, Set, Sequence, Mapping, MutableMapping, MutableSet, MutableSequence, ) from ansible.module_utils.six import ( binary_type, integer_types, string_types, text_type, PY2, PY3, ) from ansible.module_utils.common.validation import ( check_mutually_exclusive, check_required_arguments, check_required_together, check_required_one_of, check_required_if, check_required_by, check_type_bits, check_type_bool, check_type_bytes, check_type_dict, check_type_float, check_type_int, check_type_jsonarg, check_type_list, check_type_path, check_type_raw, check_type_str, ) # Python2 & 3 way to get NoneType NoneType = type(None) _ADDITIONAL_CHECKS = ( {'func': check_required_together, 'attr': 'required_together', 'err': RequiredTogetherError}, {'func': check_required_one_of, 'attr': 'required_one_of', 'err': RequiredOneOfError}, {'func': check_required_if, 'attr': 'required_if', 'err': RequiredIfError}, {'func': check_required_by, 'attr': 'required_by', 'err': RequiredByError}, ) # if adding boolean attribute, also add to PASS_BOOL # some of this dupes defaults from controller config PASS_VARS = { 'check_mode': ('check_mode', False), 'debug': ('_debug', False), 'diff': ('_diff', False), 'keep_remote_files': ('_keep_remote_files', False), 'module_name': ('_name', None), 'no_log': ('no_log', False), 'remote_tmp': ('_remote_tmp', None), 'selinux_special_fs': ('_selinux_special_fs', ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat']), 'shell_executable': ('_shell', '/bin/sh'), 'socket': ('_socket_path', None), 'string_conversion_action': ('_string_conversion_action', 'warn'), 'syslog_facility': ('_syslog_facility', 'INFO'), 'tmpdir': ('_tmpdir', None), 'verbosity': ('_verbosity', 0), 'version': ('ansible_version', '0.0'), } PASS_BOOLS = ('check_mode', 'debug', 'diff', 'keep_remote_files', 'no_log') DEFAULT_TYPE_VALIDATORS = { 'str': check_type_str, 'list': check_type_list, 'dict': check_type_dict, 'bool': check_type_bool, 'int': check_type_int, 'float': check_type_float, 'path': check_type_path, 'raw': check_type_raw, 'jsonarg': check_type_jsonarg, 'json': check_type_jsonarg, 'bytes': check_type_bytes, 'bits': check_type_bits, } def _get_type_validator(wanted): """Returns the callable used to validate a wanted type and the type name. :arg wanted: String or callable. If a string, get the corresponding validation function from DEFAULT_TYPE_VALIDATORS. If callable, get the name of the custom callable and return that for the type_checker. 
:returns: Tuple of callable function or None, and a string that is the name of the wanted type. """ # Use one our our builtin validators. if not callable(wanted): if wanted is None: # Default type for parameters wanted = 'str' type_checker = DEFAULT_TYPE_VALIDATORS.get(wanted) # Use the custom callable for validation. else: type_checker = wanted wanted = getattr(wanted, '__name__', to_native(type(wanted))) return type_checker, wanted def _get_legal_inputs(argument_spec, parameters, aliases=None): if aliases is None: aliases = _handle_aliases(argument_spec, parameters) return list(aliases.keys()) + list(argument_spec.keys()) def _get_unsupported_parameters(argument_spec, parameters, legal_inputs=None, options_context=None): """Check keys in parameters against those provided in legal_inputs to ensure they contain legal values. If legal_inputs are not supplied, they will be generated using the argument_spec. :arg argument_spec: Dictionary of parameters, their type, and valid values. :arg parameters: Dictionary of parameters. :arg legal_inputs: List of valid key names property names. Overrides values in argument_spec. :arg options_context: List of parent keys for tracking the context of where a parameter is defined. :returns: Set of unsupported parameters. Empty set if no unsupported parameters are found. """ if legal_inputs is None: legal_inputs = _get_legal_inputs(argument_spec, parameters) unsupported_parameters = set() for k in parameters.keys(): if k not in legal_inputs: context = k if options_context: context = tuple(options_context + [k]) unsupported_parameters.add(context) return unsupported_parameters def _handle_aliases(argument_spec, parameters, alias_warnings=None, alias_deprecations=None): """Process aliases from an argument_spec including warnings and deprecations. Modify ``parameters`` by adding a new key for each alias with the supplied value from ``parameters``. If a list is provided to the alias_warnings parameter, it will be filled with tuples (option, alias) in every case where both an option and its alias are specified. If a list is provided to alias_deprecations, it will be populated with dictionaries, each containing deprecation information for each alias found in argument_spec. :param argument_spec: Dictionary of parameters, their type, and valid values. :type argument_spec: dict :param parameters: Dictionary of parameters. 
:type parameters: dict :param alias_warnings: :type alias_warnings: list :param alias_deprecations: :type alias_deprecations: list """ aliases_results = {} # alias:canon for (k, v) in argument_spec.items(): aliases = v.get('aliases', None) default = v.get('default', None) required = v.get('required', False) if alias_deprecations is not None: for alias in argument_spec[k].get('deprecated_aliases', []): if alias.get('name') in parameters: alias_deprecations.append(alias) if default is not None and required: # not alias specific but this is a good place to check this raise ValueError("internal error: required and default are mutually exclusive for %s" % k) if aliases is None: continue if not is_iterable(aliases) or isinstance(aliases, (binary_type, text_type)): raise TypeError('internal error: aliases must be a list or tuple') for alias in aliases: aliases_results[alias] = k if alias in parameters: if k in parameters and alias_warnings is not None: alias_warnings.append((k, alias)) parameters[k] = parameters[alias] return aliases_results def _list_deprecations(argument_spec, parameters, prefix=''): """Return a list of deprecations :arg argument_spec: An argument spec dictionary :arg parameters: Dictionary of parameters :returns: List of dictionaries containing a message and version in which the deprecated parameter will be removed, or an empty list:: [{'msg': "Param 'deptest' is deprecated. See the module docs for more information", 'version': '2.9'}] """ deprecations = [] for arg_name, arg_opts in argument_spec.items(): if arg_name in parameters: if prefix: sub_prefix = '%s["%s"]' % (prefix, arg_name) else: sub_prefix = arg_name if arg_opts.get('removed_at_date') is not None: deprecations.append({ 'msg': "Param '%s' is deprecated. See the module docs for more information" % sub_prefix, 'date': arg_opts.get('removed_at_date'), 'collection_name': arg_opts.get('removed_from_collection'), }) elif arg_opts.get('removed_in_version') is not None: deprecations.append({ 'msg': "Param '%s' is deprecated. 
See the module docs for more information" % sub_prefix, 'version': arg_opts.get('removed_in_version'), 'collection_name': arg_opts.get('removed_from_collection'), }) # Check sub-argument spec sub_argument_spec = arg_opts.get('options') if sub_argument_spec is not None: sub_arguments = parameters[arg_name] if isinstance(sub_arguments, Mapping): sub_arguments = [sub_arguments] if isinstance(sub_arguments, list): for sub_params in sub_arguments: if isinstance(sub_params, Mapping): deprecations.extend(_list_deprecations(sub_argument_spec, sub_params, prefix=sub_prefix)) return deprecations def _list_no_log_values(argument_spec, params): """Return set of no log values :arg argument_spec: An argument spec dictionary :arg params: Dictionary of all parameters :returns: Set of strings that should be hidden from output:: {'secret_dict_value', 'secret_list_item_one', 'secret_list_item_two', 'secret_string'} """ no_log_values = set() for arg_name, arg_opts in argument_spec.items(): if arg_opts.get('no_log', False): # Find the value for the no_log'd param no_log_object = params.get(arg_name, None) if no_log_object: try: no_log_values.update(_return_datastructure_name(no_log_object)) except TypeError as e: raise TypeError('Failed to convert "%s": %s' % (arg_name, to_native(e))) # Get no_log values from suboptions sub_argument_spec = arg_opts.get('options') if sub_argument_spec is not None: wanted_type = arg_opts.get('type') sub_parameters = params.get(arg_name) if sub_parameters is not None: if wanted_type == 'dict' or (wanted_type == 'list' and arg_opts.get('elements', '') == 'dict'): # Sub parameters can be a dict or list of dicts. Ensure parameters are always a list. if not isinstance(sub_parameters, list): sub_parameters = [sub_parameters] for sub_param in sub_parameters: # Validate dict fields in case they came in as strings if isinstance(sub_param, string_types): sub_param = check_type_dict(sub_param) if not isinstance(sub_param, Mapping): raise TypeError("Value '{1}' in the sub parameter field '{0}' must by a {2}, " "not '{1.__class__.__name__}'".format(arg_name, sub_param, wanted_type)) no_log_values.update(_list_no_log_values(sub_argument_spec, sub_param)) return no_log_values def _return_datastructure_name(obj): """ Return native stringified values from datastructures. For use with removing sensitive values pre-jsonification.""" if isinstance(obj, (text_type, binary_type)): if obj: yield to_native(obj, errors='surrogate_or_strict') return elif isinstance(obj, Mapping): for element in obj.items(): for subelement in _return_datastructure_name(element[1]): yield subelement elif is_iterable(obj): for element in obj: for subelement in _return_datastructure_name(element): yield subelement elif isinstance(obj, (bool, NoneType)): # This must come before int because bools are also ints return elif isinstance(obj, tuple(list(integer_types) + [float])): yield to_native(obj, nonstring='simplerepr') else: raise TypeError('Unknown parameter type: %s' % (type(obj))) def _remove_values_conditions(value, no_log_strings, deferred_removals): """ Helper function for :meth:`remove_values`. :arg value: The value to check for strings that need to be stripped :arg no_log_strings: set of strings which must be stripped out of any values :arg deferred_removals: List which holds information about nested containers that have to be iterated for removals. It is passed into this function so that more entries can be added to it if value is a container type. 
The format of each entry is a 2-tuple where the first element is the ``value`` parameter and the second value is a new container to copy the elements of ``value`` into once iterated. :returns: if ``value`` is a scalar, returns ``value`` with two exceptions: 1. :class:`~datetime.datetime` objects which are changed into a string representation. 2. objects which are in no_log_strings are replaced with a placeholder so that no sensitive data is leaked. If ``value`` is a container type, returns a new empty container. ``deferred_removals`` is added to as a side-effect of this function. .. warning:: It is up to the caller to make sure the order in which value is passed in is correct. For instance, higher level containers need to be passed in before lower level containers. For example, given ``{'level1': {'level2': 'level3': [True]} }`` first pass in the dictionary for ``level1``, then the dict for ``level2``, and finally the list for ``level3``. """ if isinstance(value, (text_type, binary_type)): # Need native str type native_str_value = value if isinstance(value, text_type): value_is_text = True if PY2: native_str_value = to_bytes(value, errors='surrogate_or_strict') elif isinstance(value, binary_type): value_is_text = False if PY3: native_str_value = to_text(value, errors='surrogate_or_strict') if native_str_value in no_log_strings: return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER' for omit_me in no_log_strings: native_str_value = native_str_value.replace(omit_me, '*' * 8) if value_is_text and isinstance(native_str_value, binary_type): value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace') elif not value_is_text and isinstance(native_str_value, text_type): value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace') else: value = native_str_value elif isinstance(value, Sequence): if isinstance(value, MutableSequence): new_value = type(value)() else: new_value = [] # Need a mutable value deferred_removals.append((value, new_value)) value = new_value elif isinstance(value, Set): if isinstance(value, MutableSet): new_value = type(value)() else: new_value = set() # Need a mutable value deferred_removals.append((value, new_value)) value = new_value elif isinstance(value, Mapping): if isinstance(value, MutableMapping): new_value = type(value)() else: new_value = {} # Need a mutable value deferred_removals.append((value, new_value)) value = new_value elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))): stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict') if stringy_value in no_log_strings: return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER' for omit_me in no_log_strings: if omit_me in stringy_value: return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER' elif isinstance(value, (datetime.datetime, datetime.date)): value = value.isoformat() else: raise TypeError('Value of unknown type: %s, %s' % (type(value), value)) return value def _set_defaults(argument_spec, parameters, set_default=True): """Set default values for parameters when no value is supplied. Modifies parameters directly. :param argument_spec: Argument spec :type argument_spec: dict :param parameters: Parameters to evaluate :type parameters: dict :param set_default: Whether or not to set the default values :type set_default: bool :returns: Set of strings that should not be logged. 
:rtype: set """ no_log_values = set() for param, value in argument_spec.items(): # TODO: Change the default value from None to Sentinel to differentiate between # user supplied None and a default value set by this function. default = value.get('default', None) # This prevents setting defaults on required items on the 1st run, # otherwise will set things without a default to None on the 2nd. if param not in parameters and (default is not None or set_default): # Make sure any default value for no_log fields are masked. if value.get('no_log', False) and default: no_log_values.add(default) parameters[param] = default return no_log_values def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals): """ Helper method to sanitize_keys() to build deferred_removals and avoid deep recursion. """ if isinstance(value, (text_type, binary_type)): return value if isinstance(value, Sequence): if isinstance(value, MutableSequence): new_value = type(value)() else: new_value = [] # Need a mutable value deferred_removals.append((value, new_value)) return new_value if isinstance(value, Set): if isinstance(value, MutableSet): new_value = type(value)() else: new_value = set() # Need a mutable value deferred_removals.append((value, new_value)) return new_value if isinstance(value, Mapping): if isinstance(value, MutableMapping): new_value = type(value)() else: new_value = {} # Need a mutable value deferred_removals.append((value, new_value)) return new_value if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))): return value if isinstance(value, (datetime.datetime, datetime.date)): return value raise TypeError('Value of unknown type: %s, %s' % (type(value), value)) def _validate_elements(wanted_type, parameter, values, options_context=None, errors=None): if errors is None: errors = AnsibleValidationErrorMultiple() type_checker, wanted_element_type = _get_type_validator(wanted_type) validated_parameters = [] # Get param name for strings so we can later display this value in a useful error message if needed # Only pass 'kwargs' to our checkers and ignore custom callable checkers kwargs = {} if wanted_element_type == 'str' and isinstance(wanted_type, string_types): if isinstance(parameter, string_types): kwargs['param'] = parameter elif isinstance(parameter, dict): kwargs['param'] = list(parameter.keys())[0] for value in values: try: validated_parameters.append(type_checker(value, **kwargs)) except (TypeError, ValueError) as e: msg = "Elements value for option '%s'" % parameter if options_context: msg += " found in '%s'" % " -> ".join(options_context) msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_element_type, to_native(e)) errors.append(ElementError(msg)) return validated_parameters def _validate_argument_types(argument_spec, parameters, prefix='', options_context=None, errors=None): """Validate that parameter types match the type in the argument spec. Determine the appropriate type checker function and run each parameter value through that function. All error messages from type checker functions are returned. If any parameter fails to validate, it will not be in the returned parameters. :param argument_spec: Argument spec :type argument_spec: dict :param parameters: Parameters :type parameters: dict :param prefix: Name of the parent key that contains the spec. Used in the error message :type prefix: str :param options_context: List of contexts? 
:type options_context: list :returns: Two item tuple containing validated and coerced parameters and a list of any errors that were encountered. :rtype: tuple """ if errors is None: errors = AnsibleValidationErrorMultiple() for param, spec in argument_spec.items(): if param not in parameters: continue value = parameters[param] if value is None: continue wanted_type = spec.get('type') type_checker, wanted_name = _get_type_validator(wanted_type) # Get param name for strings so we can later display this value in a useful error message if needed # Only pass 'kwargs' to our checkers and ignore custom callable checkers kwargs = {} if wanted_name == 'str' and isinstance(wanted_type, string_types): kwargs['param'] = list(parameters.keys())[0] # Get the name of the parent key if this is a nested option if prefix: kwargs['prefix'] = prefix try: parameters[param] = type_checker(value, **kwargs) elements_wanted_type = spec.get('elements', None) if elements_wanted_type: elements = parameters[param] if wanted_type != 'list' or not isinstance(elements, list): msg = "Invalid type %s for option '%s'" % (wanted_name, elements) if options_context: msg += " found in '%s'." % " -> ".join(options_context) msg += ", elements value check is supported only with 'list' type" errors.append(ArgumentTypeError(msg)) parameters[param] = _validate_elements(elements_wanted_type, param, elements, options_context, errors) except (TypeError, ValueError) as e: msg = "argument '%s' is of type %s" % (param, type(value)) if options_context: msg += " found in '%s'." % " -> ".join(options_context) msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e)) errors.append(ArgumentTypeError(msg)) def _validate_argument_values(argument_spec, parameters, options_context=None, errors=None): """Ensure all arguments have the requested values, and there are no stray arguments""" if errors is None: errors = AnsibleValidationErrorMultiple() for param, spec in argument_spec.items(): choices = spec.get('choices') if choices is None: continue if isinstance(choices, (frozenset, KeysView, Sequence)) and not isinstance(choices, (binary_type, text_type)): if param in parameters: # Allow one or more when type='list' param with choices if isinstance(parameters[param], list): diff_list = ", ".join([item for item in parameters[param] if item not in choices]) if diff_list: choices_str = ", ".join([to_native(c) for c in choices]) msg = "value of %s must be one or more of: %s. Got no match for: %s" % (param, choices_str, diff_list) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) errors.append(ArgumentValueError(msg)) elif parameters[param] not in choices: # PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking # the value. If we can't figure this out, module author is responsible. 
lowered_choices = None if parameters[param] == 'False': lowered_choices = lenient_lowercase(choices) overlap = BOOLEANS_FALSE.intersection(choices) if len(overlap) == 1: # Extract from a set (parameters[param],) = overlap if parameters[param] == 'True': if lowered_choices is None: lowered_choices = lenient_lowercase(choices) overlap = BOOLEANS_TRUE.intersection(choices) if len(overlap) == 1: (parameters[param],) = overlap if parameters[param] not in choices: choices_str = ", ".join([to_native(c) for c in choices]) msg = "value of %s must be one of: %s, got: %s" % (param, choices_str, parameters[param]) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) errors.append(ArgumentValueError(msg)) else: msg = "internal error: choices for argument %s are not iterable: %s" % (param, choices) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) errors.append(ArgumentTypeError(msg)) def _validate_sub_spec(argument_spec, parameters, prefix='', options_context=None, errors=None, no_log_values=None, unsupported_parameters=None): """Validate sub argument spec. This function is recursive.""" if options_context is None: options_context = [] if errors is None: errors = AnsibleValidationErrorMultiple() if no_log_values is None: no_log_values = set() if unsupported_parameters is None: unsupported_parameters = set() for param, value in argument_spec.items(): wanted = value.get('type') if wanted == 'dict' or (wanted == 'list' and value.get('elements', '') == 'dict'): sub_spec = value.get('options') if value.get('apply_defaults', False): if sub_spec is not None: if parameters.get(param) is None: parameters[param] = {} else: continue elif sub_spec is None or param not in parameters or parameters[param] is None: continue # Keep track of context for warning messages options_context.append(param) # Make sure we can iterate over the elements if isinstance(parameters[param], dict): elements = [parameters[param]] else: elements = parameters[param] for idx, sub_parameters in enumerate(elements): if not isinstance(sub_parameters, dict): errors.append(SubParameterTypeError("value of '%s' must be of type dict or list of dicts" % param)) # Set prefix for warning messages new_prefix = prefix + param if wanted == 'list': new_prefix += '[%d]' % idx new_prefix += '.' no_log_values.update(set_fallbacks(sub_spec, sub_parameters)) alias_warnings = [] alias_deprecations = [] try: options_aliases = _handle_aliases(sub_spec, sub_parameters, alias_warnings, alias_deprecations) except (TypeError, ValueError) as e: options_aliases = {} errors.append(AliasError(to_native(e))) for option, alias in alias_warnings: warn('Both option %s and its alias %s are set.' 
% (option, alias)) try: no_log_values.update(_list_no_log_values(sub_spec, sub_parameters)) except TypeError as te: errors.append(NoLogError(to_native(te))) legal_inputs = _get_legal_inputs(sub_spec, sub_parameters, options_aliases) unsupported_parameters.update(_get_unsupported_parameters(sub_spec, sub_parameters, legal_inputs, options_context)) try: check_mutually_exclusive(value.get('mutually_exclusive'), sub_parameters, options_context) except TypeError as e: errors.append(MutuallyExclusiveError(to_native(e))) no_log_values.update(_set_defaults(sub_spec, sub_parameters, False)) try: check_required_arguments(sub_spec, sub_parameters, options_context) except TypeError as e: errors.append(RequiredError(to_native(e))) _validate_argument_types(sub_spec, sub_parameters, new_prefix, options_context, errors=errors) _validate_argument_values(sub_spec, sub_parameters, options_context, errors=errors) for check in _ADDITIONAL_CHECKS: try: check['func'](value.get(check['attr']), sub_parameters, options_context) except TypeError as e: errors.append(check['err'](to_native(e))) no_log_values.update(_set_defaults(sub_spec, sub_parameters)) # Handle nested specs _validate_sub_spec(sub_spec, sub_parameters, new_prefix, options_context, errors, no_log_values, unsupported_parameters) options_context.pop() def env_fallback(*args, **kwargs): """Load value from environment variable""" for arg in args: if arg in os.environ: return os.environ[arg] raise AnsibleFallbackNotFound def set_fallbacks(argument_spec, parameters): no_log_values = set() for param, value in argument_spec.items(): fallback = value.get('fallback', (None,)) fallback_strategy = fallback[0] fallback_args = [] fallback_kwargs = {} if param not in parameters and fallback_strategy is not None: for item in fallback[1:]: if isinstance(item, dict): fallback_kwargs = item else: fallback_args = item try: fallback_value = fallback_strategy(*fallback_args, **fallback_kwargs) except AnsibleFallbackNotFound: continue else: if value.get('no_log', False) and fallback_value: no_log_values.add(fallback_value) parameters[param] = fallback_value return no_log_values def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()): """ Sanitize the keys in a container object by removing no_log values from key names. This is a companion function to the `remove_values()` function. Similar to that function, we make use of deferred_removals to avoid hitting maximum recursion depth in cases of large data structures. :param obj: The container object to sanitize. Non-container objects are returned unmodified. :param no_log_strings: A set of string values we do not want logged. :param ignore_keys: A set of string values of keys to not sanitize. :returns: An object with sanitized keys. """ deferred_removals = deque() no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings] new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals) while deferred_removals: old_data, new_data = deferred_removals.popleft() if isinstance(new_data, Mapping): for old_key, old_elem in old_data.items(): if old_key in ignore_keys or old_key.startswith('_ansible'): new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals) else: # Sanitize the old key. We take advantage of the sanitizing code in # _remove_values_conditions() rather than recreating it here. 
new_key = _remove_values_conditions(old_key, no_log_strings, None) new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals) else: for elem in old_data: new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals) if isinstance(new_data, MutableSequence): new_data.append(new_elem) elif isinstance(new_data, MutableSet): new_data.add(new_elem) else: raise TypeError('Unknown container type encountered when removing private values from keys') return new_value def remove_values(value, no_log_strings): """ Remove strings in no_log_strings from value. If value is a container type, then remove a lot more. Use of deferred_removals exists, rather than a pure recursive solution, because of the potential to hit the maximum recursion depth when dealing with large amounts of data (see issue #24560). """ deferred_removals = deque() no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings] new_value = _remove_values_conditions(value, no_log_strings, deferred_removals) while deferred_removals: old_data, new_data = deferred_removals.popleft() if isinstance(new_data, Mapping): for old_key, old_elem in old_data.items(): new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals) new_data[old_key] = new_elem else: for elem in old_data: new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals) if isinstance(new_data, MutableSequence): new_data.append(new_elem) elif isinstance(new_data, MutableSet): new_data.add(new_elem) else: raise TypeError('Unknown container type encountered when removing private values from output') return new_value
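The two public helpers at the end of this file are easiest to understand from a small example. A minimal sketch, assuming the file above is importable as `ansible.module_utils.common.parameters` and that the masking placeholder is the usual `VALUE_SPECIFIED_IN_NO_LOG_PARAMETER` constant (its definition is not shown in this excerpt):

```python
from ansible.module_utils.common.parameters import remove_values, sanitize_keys

no_log = {'s3cr3t'}

# remove_values() masks matching *values* anywhere in a nested structure,
# walking containers via the deferred_removals queue instead of recursion.
data = {'password': 's3cr3t', 'nested': ['ok', 's3cr3t']}
print(remove_values(data, no_log))
# expected (assuming the usual placeholder constant):
# {'password': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER',
#  'nested': ['ok', 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER']}

# sanitize_keys() masks matching *key names* instead; keys listed in
# ignore_keys or starting with '_ansible' are left untouched.
keyed = {'s3cr3t': 'value', '_ansible_diff': 'kept as-is'}
print(sanitize_keys(keyed, no_log))
```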
closed
ansible/ansible
https://github.com/ansible/ansible
73,983
argument spec refactoring breaks some things
### Summary I'm seeing some breakage in collections caused (very likely) by #73703: 1. `AttributeError: 'AnsibleModule' object has no attribute '_check_type_dict'` (community.general, infoblox.nios_modules); 2. `ImportError: cannot import name 'handle_aliases'` (from `ansible.module_utils.common.parameters`) (community.crypto, community.sops). While 1. could be considered 'own fault' because it is a private API, 2. looks like a bug, since that's a break in a public interface. ### Issue Type Bug Report ### Component Name core ### Ansible Version devel ### Configuration . ### OS / Environment . ### Steps to Reproduce . ### Expected Results . ### Actual Results .
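For context, this is a hedged sketch of the kind of compatibility shim an affected collection could use while the rename settles. The private fallback name `_handle_aliases` does appear in the parameters.py excerpt above, but its signature is not guaranteed to match the old public one, so treat this as illustrative only:

```python
# Hypothetical shim, not taken from the linked PR.
try:
    from ansible.module_utils.common.parameters import handle_aliases
except ImportError:
    # Renamed to a private helper during the argspec refactor; the private
    # function may take extra arguments in newer releases.
    from ansible.module_utils.common.parameters import _handle_aliases as handle_aliases
```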
https://github.com/ansible/ansible/issues/73983
https://github.com/ansible/ansible/pull/74268
6e56e72d9966999911b572fc2856a66beb48276f
2cbfd1e350cbe1ca195d33306b5a9628667ddda8
2021-03-20T09:33:08Z
python
2021-04-20T19:40:53Z
lib/ansible/module_utils/common/validation.py
# -*- coding: utf-8 -*- # Copyright (c) 2019 Ansible Project # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause) from __future__ import absolute_import, division, print_function __metaclass__ = type import os import re from ast import literal_eval from ansible.module_utils._text import to_native from ansible.module_utils.common._json_compat import json from ansible.module_utils.common.collections import is_iterable from ansible.module_utils.common.text.converters import jsonify from ansible.module_utils.common.text.formatters import human_to_bytes from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils.six import ( binary_type, integer_types, string_types, text_type, ) def count_terms(terms, parameters): """Count the number of occurrences of a key in a given dictionary :arg terms: String or iterable of values to check :arg parameters: Dictionary of parameters :returns: An integer that is the number of occurrences of the terms values in the provided dictionary. """ if not is_iterable(terms): terms = [terms] return len(set(terms).intersection(parameters)) def safe_eval(value, locals=None, include_exceptions=False): # do not allow method calls to modules if not isinstance(value, string_types): # already templated to a data structure, perhaps? if include_exceptions: return (value, None) return value if re.search(r'\w\.\w+\(', value): if include_exceptions: return (value, None) return value # do not allow imports if re.search(r'import \w+', value): if include_exceptions: return (value, None) return value try: result = literal_eval(value) if include_exceptions: return (result, None) else: return result except Exception as e: if include_exceptions: return (value, e) return value def check_mutually_exclusive(terms, parameters, options_context=None): """Check mutually exclusive terms against argument parameters Accepts a single list or list of lists that are groups of terms that should be mutually exclusive with one another :arg terms: List of mutually exclusive parameters :arg parameters: Dictionary of parameters :returns: Empty list or raises TypeError if the check fails. """ results = [] if terms is None: return results for check in terms: count = count_terms(check, parameters) if count > 1: results.append(check) if results: full_list = ['|'.join(check) for check in results] msg = "parameters are mutually exclusive: %s" % ', '.join(full_list) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) raise TypeError(to_native(msg)) return results def check_required_one_of(terms, parameters, options_context=None): """Check each list of terms to ensure at least one exists in the given module parameters Accepts a list of lists or tuples :arg terms: List of lists of terms to check. For each list of terms, at least one is required. :arg parameters: Dictionary of parameters :returns: Empty list or raises TypeError if the check fails.
""" results = [] if terms is None: return results for term in terms: count = count_terms(term, parameters) if count == 0: results.append(term) if results: for term in results: msg = "one of the following is required: %s" % ', '.join(term) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) raise TypeError(to_native(msg)) return results def check_required_together(terms, parameters, options_context=None): """Check each list of terms to ensure every parameter in each list exists in the given parameters Accepts a list of lists or tuples :arg terms: List of lists of terms to check. Each list should include parameters that are all required when at least one is specified in the parameters. :arg parameters: Dictionary of parameters :returns: Empty list or raises TypeError if the check fails. """ results = [] if terms is None: return results for term in terms: counts = [count_terms(field, parameters) for field in term] non_zero = [c for c in counts if c > 0] if len(non_zero) > 0: if 0 in counts: results.append(term) if results: for term in results: msg = "parameters are required together: %s" % ', '.join(term) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) raise TypeError(to_native(msg)) return results def check_required_by(requirements, parameters, options_context=None): """For each key in requirements, check the corresponding list to see if they exist in parameters Accepts a single string or list of values for each key :arg requirements: Dictionary of requirements :arg parameters: Dictionary of parameters :returns: Empty dictionary or raises TypeError if the check fails. """ result = {} if requirements is None: return result for (key, value) in requirements.items(): if key not in parameters or parameters[key] is None: continue result[key] = [] # Support strings (single-item lists) if isinstance(value, string_types): value = [value] for required in value: if required not in parameters or parameters[required] is None: result[key].append(required) if result: for key, missing in result.items(): if len(missing) > 0: msg = "missing parameter(s) required by '%s': %s" % (key, ', '.join(missing)) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) raise TypeError(to_native(msg)) return result def check_required_arguments(argument_spec, parameters, options_context=None): """Check all parameters in argument_spec and return a list of parameters that are required but not present in parameters Raises TypeError if the check fails :arg argument_spec: Argument spec dictionary containing all parameters and their specification :arg parameters: Dictionary of parameters :returns: Empty list or raises TypeError if the check fails. """ missing = [] if argument_spec is None: return missing for (k, v) in argument_spec.items(): required = v.get('required', False) if required and k not in parameters: missing.append(k) if missing: msg = "missing required arguments: %s" % ", ".join(sorted(missing)) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) raise TypeError(to_native(msg)) return missing def check_required_if(requirements, parameters, options_context=None): """Check parameters that are conditionally required Raises TypeError if the check fails :arg requirements: List of lists specifying a parameter, value, parameters required when the given parameter is the specified value, and optionally a boolean indicating any or all parameters are required.
Example: required_if=[ ['state', 'present', ('path',), True], ['someint', 99, ('bool_param', 'string_param')], ] :arg parameters: Dictionary of parameters :returns: Empty list or raises TypeError if the check fails. The results attribute of the exception contains a list of dictionaries. Each dictionary is the result of evaluating each item in requirements. Each return dictionary contains the following keys: :key missing: List of parameters that are required but missing :key requires: 'any' or 'all' :key parameter: Parameter name that has the requirement :key value: Original value of the parameter :key requirements: Original required parameters Example: [ { 'parameter': 'someint', 'value': 99, 'requirements': ('bool_param', 'string_param'), 'missing': ['string_param'], 'requires': 'all', } ] """ results = [] if requirements is None: return results for req in requirements: missing = {} missing['missing'] = [] max_missing_count = 0 is_one_of = False if len(req) == 4: key, val, requirements, is_one_of = req else: key, val, requirements = req # If is_one_of is True, at least one requirement should be # present, else all requirements should be present. if is_one_of: max_missing_count = len(requirements) missing['requires'] = 'any' else: missing['requires'] = 'all' if key in parameters and parameters[key] == val: for check in requirements: count = count_terms(check, parameters) if count == 0: missing['missing'].append(check) if len(missing['missing']) and len(missing['missing']) >= max_missing_count: missing['parameter'] = key missing['value'] = val missing['requirements'] = requirements results.append(missing) if results: for missing in results: msg = "%s is %s but %s of the following are missing: %s" % ( missing['parameter'], missing['value'], missing['requires'], ', '.join(missing['missing'])) if options_context: msg = "{0} found in {1}".format(msg, " -> ".join(options_context)) raise TypeError(to_native(msg)) return results def check_missing_parameters(parameters, required_parameters=None): """This is for checking for required params when we cannot check via the argspec because we need more information than is simply given in the argspec. Raises TypeError if any required parameters are missing :arg parameters: Dictionary of parameters :arg required_parameters: List of parameters to look for in the given parameters :returns: Empty list or raises TypeError if the check fails. """ missing_params = [] if required_parameters is None: return missing_params for param in required_parameters: if not parameters.get(param): missing_params.append(param) if missing_params: msg = "missing required arguments: %s" % ', '.join(missing_params) raise TypeError(to_native(msg)) return missing_params # FIXME: The param and prefix parameters here are coming from AnsibleModule._check_type_string() # which is using those for the warning messages based on string conversion warning settings. # Not sure how to deal with that here since we don't have config state to query. def check_type_str(value, allow_conversion=True, param=None, prefix=''): """Verify that the value is a string or convert to a string.
Since unexpected changes can sometimes happen when converting to a string, ``allow_conversion`` controls whether or not the value will be converted or a TypeError will be raised if the value is not a string and would be converted :arg value: Value to validate or convert to a string :arg allow_conversion: Whether to convert the string and return it or raise a TypeError :returns: Original value if it is a string, the value converted to a string if allow_conversion=True, or raises a TypeError if allow_conversion=False. """ if isinstance(value, string_types): return value if allow_conversion: return to_native(value, errors='surrogate_or_strict') msg = "'{0!r}' is not a string and conversion is not allowed".format(value) raise TypeError(to_native(msg)) def check_type_list(value): """Verify that the value is a list or convert to a list A comma separated string will be split into a list. Raises a TypeError if unable to convert to a list. :arg value: Value to validate or convert to a list :returns: Original value if it is already a list, single item list if a float, int, or string without commas, or a multi-item list if a comma-delimited string. """ if isinstance(value, list): return value if isinstance(value, string_types): return value.split(",") elif isinstance(value, int) or isinstance(value, float): return [str(value)] raise TypeError('%s cannot be converted to a list' % type(value)) def check_type_dict(value): """Verify that value is a dict or convert it to a dict and return it. Raises TypeError if unable to convert to a dict :arg value: Dict or string to convert to a dict. Accepts 'k1=v1, k2=v2'. :returns: value converted to a dictionary """ if isinstance(value, dict): return value if isinstance(value, string_types): if value.startswith("{"): try: return json.loads(value) except Exception: (result, exc) = safe_eval(value, dict(), include_exceptions=True) if exc is not None: raise TypeError('unable to evaluate string as dictionary') return result elif '=' in value: fields = [] field_buffer = [] in_quote = False in_escape = False for c in value.strip(): if in_escape: field_buffer.append(c) in_escape = False elif c == '\\': in_escape = True elif not in_quote and c in ('\'', '"'): in_quote = c elif in_quote and in_quote == c: in_quote = False elif not in_quote and c in (',', ' '): field = ''.join(field_buffer) if field: fields.append(field) field_buffer = [] else: field_buffer.append(c) field = ''.join(field_buffer) if field: fields.append(field) return dict(x.split("=", 1) for x in fields) else: raise TypeError("dictionary requested, could not parse JSON or key=value") raise TypeError('%s cannot be converted to a dict' % type(value)) def check_type_bool(value): """Verify that the value is a bool or convert it to a bool and return it. Raises TypeError if unable to convert to a bool :arg value: String, int, or float to convert to bool.
Valid booleans include: '1', 'on', 1, '0', 0, 'n', 'f', 'false', 'true', 'y', 't', 'yes', 'no', 'off' :returns: Boolean True or False """ if isinstance(value, bool): return value if isinstance(value, string_types) or isinstance(value, (int, float)): return boolean(value) raise TypeError('%s cannot be converted to a bool' % type(value)) def check_type_int(value): """Verify that the value is an integer and return it or convert the value to an integer and return it Raises TypeError if unable to convert to an int :arg value: String or int to convert or verify :return: Int of given value """ if isinstance(value, integer_types): return value if isinstance(value, string_types): try: return int(value) except ValueError: pass raise TypeError('%s cannot be converted to an int' % type(value)) def check_type_float(value): """Verify that value is a float or convert it to a float and return it Raises TypeError if unable to convert to a float :arg value: Float, int, str, or bytes to verify or convert and return. :returns: Float of given value. """ if isinstance(value, float): return value if isinstance(value, (binary_type, text_type, int)): try: return float(value) except ValueError: pass raise TypeError('%s cannot be converted to a float' % type(value)) def check_type_path(value,): """Verify the provided value is a string or convert it to a string, then return the expanded path """ value = check_type_str(value) return os.path.expanduser(os.path.expandvars(value)) def check_type_raw(value): """Returns the raw value """ return value def check_type_bytes(value): """Convert a human-readable string value to bytes Raises TypeError if unable to convert the value """ try: return human_to_bytes(value) except ValueError: raise TypeError('%s cannot be converted to a Byte value' % type(value)) def check_type_bits(value): """Convert a human-readable string bits value to bits in integer. Example: check_type_bits('1Mb') returns integer 1048576. Raises TypeError if unable to convert the value. """ try: return human_to_bytes(value, isbits=True) except ValueError: raise TypeError('%s cannot be converted to a Bit value' % type(value)) def check_type_jsonarg(value): """Return a jsonified string. Sometimes the controller turns a json string into a dict/list so transform it back into json here Raises TypeError if unable to convert the value """ if isinstance(value, (text_type, binary_type)): return value.strip() elif isinstance(value, (list, tuple, dict)): return jsonify(value) raise TypeError('%s cannot be converted to a json string' % type(value))
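A short usage sketch for the type checkers defined above; the import path assumes this file's location in the tree (`lib/ansible/module_utils/common/validation.py`):

```python
from ansible.module_utils.common.validation import (
    check_type_bool,
    check_type_dict,
    check_type_list,
)

print(check_type_list('a,b,c'))         # ['a', 'b', 'c'] - comma-split string
print(check_type_dict('k1=v1, k2=v2'))  # {'k1': 'v1', 'k2': 'v2'}
print(check_type_bool('yes'))           # True

try:
    check_type_dict('not a mapping')
except TypeError as e:
    print(e)  # dictionary requested, could not parse JSON or key=value
```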
closed
ansible/ansible
https://github.com/ansible/ansible
73,983
argument spec refactoring breaks some things
### Summary I'm seeing some breakage in collections caused (very likely) by #73703: 1. `AttributeError: 'AnsibleModule' object has no attribute '_check_type_dict'` (community.general, infoblox.nios_modules); 2. `ImportError: cannot import name 'handle_aliases'` (from `ansible.module_utils.common.parameters`) (community.crypto, community.sops). While 1. could be considered 'own fault' because it is a private API, 2. looks like a bug, since that's a break in a public interface. ### Issue Type Bug Report ### Component Name core ### Ansible Version devel ### Configuration . ### OS / Environment . ### Steps to Reproduce . ### Expected Results . ### Actual Results .
https://github.com/ansible/ansible/issues/73983
https://github.com/ansible/ansible/pull/74268
6e56e72d9966999911b572fc2856a66beb48276f
2cbfd1e350cbe1ca195d33306b5a9628667ddda8
2021-03-20T09:33:08Z
python
2021-04-20T19:40:53Z
lib/ansible/module_utils/errors.py
# -*- coding: utf-8 -*- # Copyright (c) 2021 Ansible Project # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause) from __future__ import absolute_import, division, print_function __metaclass__ = type class AnsibleFallbackNotFound(Exception): """Fallback validator was not found""" class AnsibleValidationError(Exception): """Single argument spec validation error""" def __init__(self, message): super(AnsibleValidationError, self).__init__(message) self.error_message = message @property def msg(self): return self.args[0] class AnsibleValidationErrorMultiple(AnsibleValidationError): """Multiple argument spec validation errors""" def __init__(self, errors=None): self.errors = errors[:] if errors else [] def __getitem__(self, key): return self.errors[key] def __setitem__(self, key, value): self.errors[key] = value def __delitem__(self, key): del self.errors[key] @property def msg(self): return self.errors[0].args[0] @property def messages(self): return [err.msg for err in self.errors] def append(self, error): self.errors.append(error) def extend(self, errors): self.errors.extend(errors) class AliasError(AnsibleValidationError): """Error handling aliases""" class ArgumentTypeError(AnsibleValidationError): """Error with parameter type""" class ArgumentValueError(AnsibleValidationError): """Error with parameter value""" class ElementError(AnsibleValidationError): """Error when validating elements""" class MutuallyExclusiveError(AnsibleValidationError): """Mutually exclusive parameters were supplied""" class NoLogError(AnsibleValidationError): """Error converting no_log values""" class RequiredByError(AnsibleValidationError): """Error with parameters that are required by other parameters""" class RequiredDefaultError(AnsibleValidationError): """A required parameter was assigned a default value""" class RequiredError(AnsibleValidationError): """Missing a required parameter""" class RequiredIfError(AnsibleValidationError): """Error with conditionally required parameters""" class RequiredOneOfError(AnsibleValidationError): """Error with parameters where at least one is required""" class RequiredTogetherError(AnsibleValidationError): """Error with parameters that are required together""" class SubParameterTypeError(AnsibleValidationError): """Incorrect type for subparameter""" class UnsupportedError(AnsibleValidationError): """Unsupported parameters were supplied"""
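The error hierarchy above is designed for accumulation rather than fail-fast: individual validation errors are appended to a single container and reported together. A minimal sketch of the intended pattern:

```python
from ansible.module_utils.errors import (
    AnsibleValidationErrorMultiple,
    ArgumentTypeError,
    RequiredError,
)

errors = AnsibleValidationErrorMultiple()
errors.append(RequiredError('missing required arguments: name'))
errors.append(ArgumentTypeError("argument 'count' is of type <class 'str'>"))

if errors.errors:
    print(errors.msg)       # message of the first accumulated error
    print(errors.messages)  # all messages as a list
```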
closed
ansible/ansible
https://github.com/ansible/ansible
74,350
ansible-test broken in dev
### Summary My unit tests in the following repo broke recently... https://github.com/ansible-collections/community.mongodb/blob/master/.github/workflows/ansible-test.yml The "Generate Coverage Report" started failing with the following output.. ``` Run ansible-test coverage xml -v --requirements --group-by command --group-by version Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.6.13/x64/bin/ansible-test", line 28, in <module> main() File "/opt/hostedtoolcache/Python/3.6.13/x64/bin/ansible-test", line 24, in main cli_main() File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/cli.py", line 175, in main args.func(config) File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/xml.py", line 51, in command_coverage_xml output_files = command_coverage_combine(args) File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/combine.py", line 46, in command_coverage_combine paths = _command_coverage_combine_powershell(args) + _command_coverage_combine_python(args) File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/combine.py", line 138, in _command_coverage_combine_powershell coverage_files = get_powershell_coverage_files() File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/__init__.py", line 120, in get_powershell_coverage_files return get_coverage_files('powershell', path) File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/__init__.py", line 126, in get_coverage_files coverage_files = [os.path.join(coverage_dir, f) for f in os.listdir(coverage_dir) FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/community.mongodb/community.mongodb/ansible_collections/community/mongodb/tests/output/coverage' Error: Process completed with exit code 1. ``` It's actually the previous task that was failing... ``` Run ansible-test units -v --color --python 3.6 --coverage WARNING: All targets skipped. ``` and thus the tests/output directory was not created. After going down a bit of rabbit hole looking at changes in my code I think it's probably an update to the dev branch of ansible. I added a few stable versions of ansible to the unit test strategy matrix and they both pass... https://github.com/ansible-collections/community.mongodb/runs/2392660568?check_suite_focus=true ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version ```console $ ansible --version dev ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment GitHub Actions ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Expected Results Unit tests are executed as they do when stable-2.11 and stable-2.10 are installed. ### Actual Results ```console Unit tests are not currently executed by ansible-test when installed from dev. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
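The traceback bottoms out in `get_coverage_files()` calling `os.listdir()` on a directory that was never created, because all test targets were skipped and no coverage was written. A hedged sketch of a defensive guard, not necessarily the fix merged in the linked PR:

```python
import os

def get_coverage_files(language, path=None):
    """Return coverage file paths, tolerating a missing output directory."""
    coverage_dir = path or 'tests/output/coverage'  # placeholder default
    if not os.path.isdir(coverage_dir):
        return []  # no targets ran, so no coverage was written
    return [os.path.join(coverage_dir, f) for f in os.listdir(coverage_dir)
            if '=coverage.' in f and '=%s' % language in f]
```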
https://github.com/ansible/ansible/issues/74350
https://github.com/ansible/ansible/pull/74357
2cbfd1e350cbe1ca195d33306b5a9628667ddda8
e6af2d6827c525ac36e95ba5145b5632a4131fff
2021-04-20T16:44:24Z
python
2021-04-20T21:33:05Z
changelogs/fragments/ansible-test-coverage-traceback.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74,350
ansible-test broken in dev
### Summary My unit tests in the following repo broke recently... https://github.com/ansible-collections/community.mongodb/blob/master/.github/workflows/ansible-test.yml The "Generate Coverage Report" started failing with the following output.. ``` Run ansible-test coverage xml -v --requirements --group-by command --group-by version Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.6.13/x64/bin/ansible-test", line 28, in <module> main() File "/opt/hostedtoolcache/Python/3.6.13/x64/bin/ansible-test", line 24, in main cli_main() File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/cli.py", line 175, in main args.func(config) File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/xml.py", line 51, in command_coverage_xml output_files = command_coverage_combine(args) File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/combine.py", line 46, in command_coverage_combine paths = _command_coverage_combine_powershell(args) + _command_coverage_combine_python(args) File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/combine.py", line 138, in _command_coverage_combine_powershell coverage_files = get_powershell_coverage_files() File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/__init__.py", line 120, in get_powershell_coverage_files return get_coverage_files('powershell', path) File "/opt/hostedtoolcache/Python/3.6.13/x64/lib/python3.6/site-packages/ansible_test/_internal/coverage/__init__.py", line 126, in get_coverage_files coverage_files = [os.path.join(coverage_dir, f) for f in os.listdir(coverage_dir) FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/community.mongodb/community.mongodb/ansible_collections/community/mongodb/tests/output/coverage' Error: Process completed with exit code 1. ``` It's actually the previous task that was failing... ``` Run ansible-test units -v --color --python 3.6 --coverage WARNING: All targets skipped. ``` and thus the tests/output directory was not created. After going down a bit of rabbit hole looking at changes in my code I think it's probably an update to the dev branch of ansible. I added a few stable versions of ansible to the unit test strategy matrix and they both pass... https://github.com/ansible-collections/community.mongodb/runs/2392660568?check_suite_focus=true ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version ```console $ ansible --version dev ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment GitHub Actions ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Expected Results Unit tests are executed as they do when stable-2.11 and stable-2.10 are installed. ### Actual Results ```console Unit tests are not currently executed by ansible-test when installed from dev. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74350
https://github.com/ansible/ansible/pull/74357
2cbfd1e350cbe1ca195d33306b5a9628667ddda8
e6af2d6827c525ac36e95ba5145b5632a4131fff
2021-04-20T16:44:24Z
python
2021-04-20T21:33:05Z
test/lib/ansible_test/_internal/coverage/__init__.py
"""Common logic for the coverage subcommand.""" from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import re from .. import types as t from ..encoding import ( to_bytes, ) from ..io import ( open_binary_file, read_json_file, ) from ..util import ( ApplicationError, common_environment, display, ANSIBLE_TEST_DATA_ROOT, ) from ..util_common import ( intercept_command, ResultType, ) from ..config import ( EnvironmentConfig, ) from ..executor import ( Delegate, install_command_requirements, ) from .. target import ( walk_module_targets, ) from ..data import ( data_context, ) if t.TYPE_CHECKING: import coverage as coverage_module COVERAGE_GROUPS = ('command', 'target', 'environment', 'version') COVERAGE_CONFIG_PATH = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'coveragerc') COVERAGE_OUTPUT_FILE_NAME = 'coverage' class CoverageConfig(EnvironmentConfig): """Configuration for the coverage command.""" def __init__(self, args): # type: (t.Any) -> None super(CoverageConfig, self).__init__(args, 'coverage') self.group_by = frozenset(args.group_by) if 'group_by' in args and args.group_by else set() # type: t.FrozenSet[str] self.all = args.all if 'all' in args else False # type: bool self.stub = args.stub if 'stub' in args else False # type: bool self.export = args.export if 'export' in args else None # type: str self.coverage = False # temporary work-around to support intercept_command in cover.py def initialize_coverage(args): # type: (CoverageConfig) -> coverage_module """Delegate execution if requested, install requirements, then import and return the coverage module. Raises an exception if coverage is not available.""" if args.delegate: raise Delegate() if args.requirements: install_command_requirements(args) try: import coverage except ImportError: coverage = None if not coverage: raise ApplicationError('You must install the "coverage" python module to use this command.') coverage_version_string = coverage.__version__ coverage_version = tuple(int(v) for v in coverage_version_string.split('.')) min_version = (4, 2) max_version = (5, 0) supported_version = True recommended_version = '4.5.4' if coverage_version < min_version or coverage_version >= max_version: supported_version = False if not supported_version: raise ApplicationError('Version %s of "coverage" is not supported. Version %s is known to work and is recommended.' 
% ( coverage_version_string, recommended_version)) return coverage def run_coverage(args, output_file, command, cmd): # type: (CoverageConfig, str, str, t.List[str]) -> None """Run the coverage cli tool with the specified options.""" env = common_environment() env.update(dict(COVERAGE_FILE=output_file)) cmd = ['python', '-m', 'coverage.__main__', command, '--rcfile', COVERAGE_CONFIG_PATH] + cmd intercept_command(args, target_name='coverage', env=env, cmd=cmd, disable_coverage=True) def get_python_coverage_files(path=None): # type: (t.Optional[str]) -> t.List[str] """Return the list of Python coverage file paths.""" return get_coverage_files('python', path) def get_powershell_coverage_files(path=None): # type: (t.Optional[str]) -> t.List[str] """Return the list of PowerShell coverage file paths.""" return get_coverage_files('powershell', path) def get_coverage_files(language, path=None): # type: (str, t.Optional[str]) -> t.List[str] """Return the list of coverage file paths for the given language.""" coverage_dir = path or ResultType.COVERAGE.path coverage_files = [os.path.join(coverage_dir, f) for f in os.listdir(coverage_dir) if '=coverage.' in f and '=%s' % language in f] return coverage_files def get_collection_path_regexes(): # type: () -> t.Tuple[t.Optional[t.Pattern], t.Optional[t.Pattern]] """Return a pair of regexes used for identifying and manipulating collection paths.""" if data_context().content.collection: collection_search_re = re.compile(r'/%s/' % data_context().content.collection.directory) collection_sub_re = re.compile(r'^.*?/%s/' % data_context().content.collection.directory) else: collection_search_re = None collection_sub_re = None return collection_search_re, collection_sub_re def get_python_modules(): # type: () -> t.Dict[str, str] """Return a dictionary of Ansible module names and their paths.""" return dict((target.module, target.path) for target in list(walk_module_targets()) if target.path.endswith('.py')) def enumerate_python_arcs( path, # type: str coverage, # type: coverage_module modules, # type: t.Dict[str, str] collection_search_re, # type: t.Optional[t.Pattern] collection_sub_re, # type: t.Optional[t.Pattern] ): # type: (...) -> t.Generator[t.Tuple[str, t.Set[t.Tuple[int, int]]]] """Enumerate Python code coverage arcs in the given file.""" if os.path.getsize(path) == 0: display.warning('Empty coverage file: %s' % path, verbosity=2) return original = coverage.CoverageData() try: original.read_file(path) except Exception as ex: # pylint: disable=locally-disabled, broad-except with open_binary_file(path) as file_obj: header = file_obj.read(6) if header == b'SQLite': display.error('File created by "coverage" 5.0+: %s' % os.path.relpath(path)) else: display.error(u'%s' % ex) return for filename in original.measured_files(): arcs = original.arcs(filename) if not arcs: # This is most likely due to using an unsupported version of coverage. display.warning('No arcs found for "%s" in coverage file: %s' % (filename, path)) continue filename = sanitize_filename(filename, modules=modules, collection_search_re=collection_search_re, collection_sub_re=collection_sub_re) if not filename: continue yield filename, set(arcs) def enumerate_powershell_lines( path, # type: str collection_search_re, # type: t.Optional[t.Pattern] collection_sub_re, # type: t.Optional[t.Pattern] ): # type: (...) 
-> t.Generator[t.Tuple[str, t.Dict[int, int]]] """Enumerate PowerShell code coverage lines in the given file.""" if os.path.getsize(path) == 0: display.warning('Empty coverage file: %s' % path, verbosity=2) return try: coverage_run = read_json_file(path) except Exception as ex: # pylint: disable=locally-disabled, broad-except display.error(u'%s' % ex) return for filename, hits in coverage_run.items(): filename = sanitize_filename(filename, collection_search_re=collection_search_re, collection_sub_re=collection_sub_re) if not filename: continue if isinstance(hits, dict) and not hits.get('Line'): # Input data was previously aggregated and thus uses the standard ansible-test output format for PowerShell coverage. # This format differs from the more verbose format of raw coverage data from the remote Windows hosts. hits = dict((int(key), value) for key, value in hits.items()) yield filename, hits continue # PowerShell unpacks arrays if there's only a single entry so this is a defensive check on that if not isinstance(hits, list): hits = [hits] hits = dict((hit['Line'], hit['HitCount']) for hit in hits if hit) yield filename, hits def sanitize_filename( filename, # type: str modules=None, # type: t.Optional[t.Dict[str, str]] collection_search_re=None, # type: t.Optional[t.Pattern] collection_sub_re=None, # type: t.Optional[t.Pattern] ): # type: (...) -> t.Optional[str] """Convert the given code coverage path to a local absolute path and return it, or None if the path is not valid.""" ansible_path = os.path.abspath('lib/ansible/') + '/' root_path = data_context().content.root + '/' integration_temp_path = os.path.sep + os.path.join(ResultType.TMP.relative_path, 'integration') + os.path.sep if modules is None: modules = {} if '/ansible_modlib.zip/ansible/' in filename: # Rewrite the module_utils path from the remote host to match the controller. Ansible 2.6 and earlier. new_name = re.sub('^.*/ansible_modlib.zip/ansible/', ansible_path, filename) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif collection_search_re and collection_search_re.search(filename): new_name = os.path.abspath(collection_sub_re.sub('', filename)) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif re.search(r'/ansible_[^/]+_payload\.zip/ansible/', filename): # Rewrite the module_utils path from the remote host to match the controller. Ansible 2.7 and later. new_name = re.sub(r'^.*/ansible_[^/]+_payload\.zip/ansible/', ansible_path, filename) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif '/ansible_module_' in filename: # Rewrite the module path from the remote host to match the controller. Ansible 2.6 and earlier. module_name = re.sub('^.*/ansible_module_(?P<module>.*).py$', '\\g<module>', filename) if module_name not in modules: display.warning('Skipping coverage of unknown module: %s' % module_name) return None new_name = os.path.abspath(modules[module_name]) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif re.search(r'/ansible_[^/]+_payload(_[^/]+|\.zip)/__main__\.py$', filename): # Rewrite the module path from the remote host to match the controller. Ansible 2.7 and later. # AnsiballZ versions using zipimporter will match the `.zip` portion of the regex. # AnsiballZ versions not using zipimporter will match the `_[^/]+` portion of the regex.
module_name = re.sub(r'^.*/ansible_(?P<module>[^/]+)_payload(_[^/]+|\.zip)/__main__\.py$', '\\g<module>', filename).rstrip('_') if module_name not in modules: display.warning('Skipping coverage of unknown module: %s' % module_name) return None new_name = os.path.abspath(modules[module_name]) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif re.search('^(/.*?)?/root/ansible/', filename): # Rewrite the path of code running on a remote host or in a docker container as root. new_name = re.sub('^(/.*?)?/root/ansible/', root_path, filename) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name elif integration_temp_path in filename: # Rewrite the path of code running from an integration test temporary directory. new_name = re.sub(r'^.*' + re.escape(integration_temp_path) + '[^/]+/', root_path, filename) display.info('%s -> %s' % (filename, new_name), verbosity=3) filename = new_name filename = os.path.abspath(filename) # make sure path is absolute (will be relative if previously exported) return filename class PathChecker: """Checks code coverage paths to verify they are valid and reports on the findings.""" def __init__(self, args, collection_search_re=None): # type: (CoverageConfig, t.Optional[t.Pattern]) -> None self.args = args self.collection_search_re = collection_search_re self.invalid_paths = [] self.invalid_path_chars = 0 def check_path(self, path): # type: (str) -> bool """Return True if the given coverage path is valid, otherwise display a warning and return False.""" if os.path.isfile(to_bytes(path)): return True if self.collection_search_re and self.collection_search_re.search(path) and os.path.basename(path) == '__init__.py': # the collection loader uses implicit namespace packages, so __init__.py does not need to exist on disk # coverage is still reported for these non-existent files, but warnings are not needed return False self.invalid_paths.append(path) self.invalid_path_chars += len(path) if self.args.verbosity > 1: display.warning('Invalid coverage path: %s' % path) return False def report(self): # type: () -> None """Display a warning regarding invalid paths if any were found.""" if self.invalid_paths: display.warning('Ignored %d characters from %d invalid coverage path(s).' % (self.invalid_path_chars, len(self.invalid_paths)))
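The AnsiballZ path rewriting in `sanitize_filename()` is the subtlest part of this file: the module name has to be recovered from a temporary payload path on the remote host. An illustration using the regex copied from the function above (the sample remote path is made up):

```python
import re

remote = '/tmp/ansible_ping_payload_abc123/__main__.py'
module_name = re.sub(
    r'^.*/ansible_(?P<module>[^/]+)_payload(_[^/]+|\.zip)/__main__\.py$',
    '\\g<module>',
    remote,
).rstrip('_')
print(module_name)  # ping
```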
closed
ansible/ansible
https://github.com/ansible/ansible
74,275
No description for "COLLECTIONS_SCAN_SYS_PATH"
### Summary In https://docs.ansible.com/ansible/latest/reference_appendices/config.html#collections-scan-sys-path, there is no description for "COLLECTIONS_SCAN_SYS_PATH" - would be great to explain in detail what it does. ### Issue Type Documentation Report ### Component Name reference_appendices/config ### Ansible Version ```console Not relevant ``` ### Configuration ```console Not relevant ``` ### OS / Environment Not relevant ### Additional Information It helps to understand what the param does ### Code of Conduct I agree to follow the Ansible Code of Conduct
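For reference, the setting in question is a boolean that defaults to true (see the `COLLECTIONS_SCAN_SYS_PATH` entry in base.yml below). A quick sketch for checking its effective value, assuming an importable ansible-core installation:

```python
import os

# The env var must be set before ansible.constants is first imported.
os.environ['ANSIBLE_COLLECTIONS_SCAN_SYS_PATH'] = 'false'

from ansible import constants as C

print(C.COLLECTIONS_SCAN_SYS_PATH)  # False
```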
https://github.com/ansible/ansible/issues/74275
https://github.com/ansible/ansible/pull/74351
14ff5e213cd084480d628ec0562200b174b6fa79
567361b124e79873537704bed7625141c33f35a8
2021-04-14T12:01:19Z
python
2021-04-22T19:08:52Z
lib/ansible/config/base.yml
# Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) --- ALLOW_WORLD_READABLE_TMPFILES: name: Allow world-readable temporary files default: False description: - This setting has been moved to the individual shell plugins as a plugin option :ref:`shell_plugins`. - The existing configuration settings are still accepted with the shell plugin adding additional options, like variables. - This message will be removed in 2.14. type: boolean ANSIBLE_CONNECTION_PATH: name: Path of ansible-connection script default: null description: - Specify where to look for the ansible-connection script. This location will be checked before searching $PATH. - If null, ansible will start with the same directory as the ansible script. type: path env: [{name: ANSIBLE_CONNECTION_PATH}] ini: - {key: ansible_connection_path, section: persistent_connection} yaml: {key: persistent_connection.ansible_connection_path} version_added: "2.8" ANSIBLE_COW_SELECTION: name: Cowsay filter selection default: default description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them. env: [{name: ANSIBLE_COW_SELECTION}] ini: - {key: cow_selection, section: defaults} ANSIBLE_COW_ACCEPTLIST: name: Cowsay filter acceptance list default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www'] description: List of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates. env: - name: ANSIBLE_COW_WHITELIST deprecated: why: normalizing names to new standard version: "2.15" alternatives: 'ANSIBLE_COW_ACCEPTLIST' - name: ANSIBLE_COW_ACCEPTLIST version_added: '2.11' ini: - key: cow_whitelist section: defaults deprecated: why: normalizing names to new standard version: "2.15" alternatives: 'cowsay_enabled_stencils' - key: cowsay_enabled_stencils section: defaults version_added: '2.11' type: list ANSIBLE_FORCE_COLOR: name: Force color output default: False description: This option forces color mode even when running without a TTY or the "nocolor" setting is True. env: [{name: ANSIBLE_FORCE_COLOR}] ini: - {key: force_color, section: defaults} type: boolean yaml: {key: display.force_color} ANSIBLE_NOCOLOR: name: Suppress color output default: False description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information. env: - name: ANSIBLE_NOCOLOR # this is generic convention for CLI programs - name: NO_COLOR version_added: '2.11' ini: - {key: nocolor, section: defaults} type: boolean yaml: {key: display.nocolor} ANSIBLE_NOCOWS: name: Suppress cowsay output default: False description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}] ini: - {key: nocows, section: defaults} type: boolean yaml: {key: display.i_am_no_fun} ANSIBLE_COW_PATH: name: Set path to cowsay command default: null description: Specify a custom cowsay path or swap in your cowsay implementation of choice env: [{name: ANSIBLE_COW_PATH}] ini: - {key: cowpath, section: defaults} type: string yaml: {key: display.cowpath} ANSIBLE_PIPELINING: name: Connection pipelining default: False description: - Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server, by executing many Ansible modules without actual file transfer. - This can result in a very significant performance improvement when enabled. - "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default." - This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled. - This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all. env: - name: ANSIBLE_PIPELINING ini: - section: defaults key: pipelining - section: connection key: pipelining type: boolean ANY_ERRORS_FATAL: name: Make Task failures fatal default: False description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors. env: - name: ANSIBLE_ANY_ERRORS_FATAL ini: - section: defaults key: any_errors_fatal type: boolean yaml: {key: errors.any_task_errors_fatal} version_added: "2.4" BECOME_ALLOW_SAME_USER: name: Allow becoming the same user default: False description: This setting controls if become is skipped when remote user and become user are the same. I.E root sudo to root. env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}] ini: - {key: become_allow_same_user, section: privilege_escalation} type: boolean yaml: {key: privilege_escalation.become_allow_same_user} AGNOSTIC_BECOME_PROMPT: name: Display an agnostic become prompt default: True type: boolean description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}] ini: - {key: agnostic_become_prompt, section: privilege_escalation} yaml: {key: privilege_escalation.agnostic_become_prompt} version_added: "2.5" CACHE_PLUGIN: name: Persistent Cache plugin default: memory description: Chooses which cache plugin to use, the default 'memory' is ephemeral. 
env: [{name: ANSIBLE_CACHE_PLUGIN}] ini: - {key: fact_caching, section: defaults} yaml: {key: facts.cache.plugin} CACHE_PLUGIN_CONNECTION: name: Cache Plugin URI default: ~ description: Defines connection or path information for the cache plugin env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}] ini: - {key: fact_caching_connection, section: defaults} yaml: {key: facts.cache.uri} CACHE_PLUGIN_PREFIX: name: Cache Plugin table prefix default: ansible_facts description: Prefix to use for cache plugin files/tables env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}] ini: - {key: fact_caching_prefix, section: defaults} yaml: {key: facts.cache.prefix} CACHE_PLUGIN_TIMEOUT: name: Cache Plugin expiration timeout default: 86400 description: Expiration timeout for the cache plugin data env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}] ini: - {key: fact_caching_timeout, section: defaults} type: integer yaml: {key: facts.cache.timeout} COLLECTIONS_SCAN_SYS_PATH: name: enable/disable scanning sys.path for installed collections default: true type: boolean env: - {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH} ini: - {key: collections_scan_sys_path, section: defaults} COLLECTIONS_PATHS: name: ordered list of root paths for loading installed Ansible collections content description: > Colon separated paths in which Ansible will search for collections content. Collections must be in nested *subdirectories*, not directly in these directories. For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``, and you want to add ``my.collection`` to that directory, it must be saved as ``~/.ansible/collections/ansible_collections/my/collection``. default: ~/.ansible/collections:/usr/share/ansible/collections type: pathspec env: - name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases. - name: ANSIBLE_COLLECTIONS_PATH version_added: '2.10' ini: - key: collections_paths section: defaults - key: collections_path section: defaults version_added: '2.10' COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH: name: Defines behavior when loading a collection that does not support the current Ansible version description: - When a collection is loaded that does not support the running Ansible version (via the collection metadata key `requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore` skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution. 
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}] ini: [{key: collections_on_ansible_version_mismatch, section: defaults}] choices: [error, warning, ignore] default: warning _COLOR_DEFAULTS: &color name: placeholder for color settings' defaults choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal'] COLOR_CHANGED: <<: *color name: Color for 'changed' task status default: yellow description: Defines the color to use on 'Changed' task status env: [{name: ANSIBLE_COLOR_CHANGED}] ini: - {key: changed, section: colors} COLOR_CONSOLE_PROMPT: <<: *color name: "Color for ansible-console's prompt task status" default: white description: Defines the default color to use for ansible-console env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}] ini: - {key: console_prompt, section: colors} version_added: "2.7" COLOR_DEBUG: <<: *color name: Color for debug statements default: dark gray description: Defines the color to use when emitting debug messages env: [{name: ANSIBLE_COLOR_DEBUG}] ini: - {key: debug, section: colors} COLOR_DEPRECATE: <<: *color name: Color for deprecation messages default: purple description: Defines the color to use when emitting deprecation messages env: [{name: ANSIBLE_COLOR_DEPRECATE}] ini: - {key: deprecate, section: colors} COLOR_DIFF_ADD: <<: *color name: Color for diff added display default: green description: Defines the color to use when showing added lines in diffs env: [{name: ANSIBLE_COLOR_DIFF_ADD}] ini: - {key: diff_add, section: colors} yaml: {key: display.colors.diff.add} COLOR_DIFF_LINES: <<: *color name: Color for diff lines display default: cyan description: Defines the color to use when showing diffs env: [{name: ANSIBLE_COLOR_DIFF_LINES}] ini: - {key: diff_lines, section: colors} COLOR_DIFF_REMOVE: <<: *color name: Color for diff removed display default: red description: Defines the color to use when showing removed lines in diffs env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}] ini: - {key: diff_remove, section: colors} COLOR_ERROR: <<: *color name: Color for error messages default: red description: Defines the color to use when emitting error messages env: [{name: ANSIBLE_COLOR_ERROR}] ini: - {key: error, section: colors} yaml: {key: colors.error} COLOR_HIGHLIGHT: <<: *color name: Color for highlighting default: white description: Defines the color to use for highlighting env: [{name: ANSIBLE_COLOR_HIGHLIGHT}] ini: - {key: highlight, section: colors} COLOR_OK: <<: *color name: Color for 'ok' task status default: green description: Defines the color to use when showing 'OK' task status env: [{name: ANSIBLE_COLOR_OK}] ini: - {key: ok, section: colors} COLOR_SKIP: <<: *color name: Color for 'skip' task status default: cyan description: Defines the color to use when showing 'Skipped' task status env: [{name: ANSIBLE_COLOR_SKIP}] ini: - {key: skip, section: colors} COLOR_UNREACHABLE: <<: *color name: Color for 'unreachable' host state default: bright red description: Defines the color to use on 'Unreachable' status env: [{name: ANSIBLE_COLOR_UNREACHABLE}] ini: - {key: unreachable, section: colors} COLOR_VERBOSE: <<: *color name: Color for verbose messages default: blue description: Defines the color to use when emitting verbose messages. i.e those that show with '-v's. 
env: [{name: ANSIBLE_COLOR_VERBOSE}] ini: - {key: verbose, section: colors} COLOR_WARN: <<: *color name: Color for warning messages default: bright purple description: Defines the color to use when emitting warning messages env: [{name: ANSIBLE_COLOR_WARN}] ini: - {key: warn, section: colors} COVERAGE_REMOTE_OUTPUT: name: Sets the output directory and filename prefix to generate coverage run info. description: - Sets the output directory on the remote host to generate coverage reports to. - Currently only used for remote coverage on PowerShell modules. - This is for internal use only. env: - {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT} vars: - {name: _ansible_coverage_remote_output} type: str version_added: '2.9' COVERAGE_REMOTE_PATHS: name: Sets the list of paths to run coverage for. description: - A list of paths for files on the Ansible controller to run coverage for when executing on the remote host. - Only files that match the path glob will have their coverage collected. - Multiple path globs can be specified and are separated by ``:``. - Currently only used for remote coverage on PowerShell modules. - This is for internal use only. default: '*' env: - {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER} type: str version_added: '2.9' ACTION_WARNINGS: name: Toggle action warnings default: True description: - By default Ansible will issue a warning when it receives one from a task action (module or action plugin). - These warnings can be silenced by adjusting this setting to False. env: [{name: ANSIBLE_ACTION_WARNINGS}] ini: - {key: action_warnings, section: defaults} type: boolean version_added: "2.5" COMMAND_WARNINGS: name: Command module warnings default: False description: - Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module. - These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``. - As of version 2.11, this is disabled by default. env: [{name: ANSIBLE_COMMAND_WARNINGS}] ini: - {key: command_warnings, section: defaults} type: boolean version_added: "1.8" deprecated: why: the command warnings feature is being removed version: "2.14" LOCALHOST_WARNING: name: Warning when using implicit inventory with only localhost default: True description: - By default Ansible will issue a warning when there are no hosts in the inventory. - These warnings can be silenced by adjusting this setting to False. env: [{name: ANSIBLE_LOCALHOST_WARNING}] ini: - {key: localhost_warning, section: defaults} type: boolean version_added: "2.6" DOC_FRAGMENT_PLUGIN_PATH: name: documentation fragment plugins path default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins. env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}] ini: - {key: doc_fragment_plugins, section: defaults} type: pathspec DEFAULT_ACTION_PLUGIN_PATH: name: Action plugins path default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action description: Colon separated paths in which Ansible will search for Action Plugins.
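  # Illustrative sketch (path is an example): an extra action plugin directory
  # can be prepended via the environment:
  #   export ANSIBLE_ACTION_PLUGINS=/opt/my_plugins/action:~/.ansible/plugins/action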
env: [{name: ANSIBLE_ACTION_PLUGINS}] ini: - {key: action_plugins, section: defaults} type: pathspec yaml: {key: plugins.action.path} DEFAULT_ALLOW_UNSAFE_LOOKUPS: name: Allow unsafe lookups default: False description: - "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo) to return data that is not marked 'unsafe'." - By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language, as this could represent a security risk. This option is provided to allow for backwards-compatibility, however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run through the templating engine later env: [] ini: - {key: allow_unsafe_lookups, section: defaults} type: boolean version_added: "2.2.3" DEFAULT_ASK_PASS: name: Ask for the login password default: False description: - This controls whether an Ansible playbook should prompt for a login password. If using SSH keys for authentication, you probably do not need to change this setting. env: [{name: ANSIBLE_ASK_PASS}] ini: - {key: ask_pass, section: defaults} type: boolean yaml: {key: defaults.ask_pass} DEFAULT_ASK_VAULT_PASS: name: Ask for the vault password(s) default: False description: - This controls whether an Ansible playbook should prompt for a vault password. env: [{name: ANSIBLE_ASK_VAULT_PASS}] ini: - {key: ask_vault_pass, section: defaults} type: boolean DEFAULT_BECOME: name: Enable privilege escalation (become) default: False description: Toggles the use of privilege escalation, allowing you to 'become' another user after login. env: [{name: ANSIBLE_BECOME}] ini: - {key: become, section: privilege_escalation} type: boolean DEFAULT_BECOME_ASK_PASS: name: Ask for the privilege escalation (become) password default: False description: Toggle to prompt for privilege escalation password. env: [{name: ANSIBLE_BECOME_ASK_PASS}] ini: - {key: become_ask_pass, section: privilege_escalation} type: boolean DEFAULT_BECOME_METHOD: name: Choose privilege escalation method default: 'sudo' description: Privilege escalation method to use when `become` is enabled. env: [{name: ANSIBLE_BECOME_METHOD}] ini: - {section: privilege_escalation, key: become_method} DEFAULT_BECOME_EXE: name: Choose 'become' executable default: ~ description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH' env: [{name: ANSIBLE_BECOME_EXE}] ini: - {key: become_exe, section: privilege_escalation} DEFAULT_BECOME_FLAGS: name: Set 'become' executable options default: '' description: Flags to pass to the privilege escalation executable. env: [{name: ANSIBLE_BECOME_FLAGS}] ini: - {key: become_flags, section: privilege_escalation} BECOME_PLUGIN_PATH: name: Become plugins path default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become description: Colon separated paths in which Ansible will search for Become Plugins. env: [{name: ANSIBLE_BECOME_PLUGINS}] ini: - {key: become_plugins, section: defaults} type: pathspec version_added: "2.8" DEFAULT_BECOME_USER: # FIXME: should really be blank and make -u passing optional depending on it name: Set the user you 'become' via privilege escalation default: root description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
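  # Illustrative sketch of the [privilege_escalation] ini section defined by the
  # entries above (values are examples, not recommendations):
  #   [privilege_escalation]
  #   become = True
  #   become_method = sudo
  #   become_user = root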
env: [{name: ANSIBLE_BECOME_USER}] ini: - {key: become_user, section: privilege_escalation} yaml: {key: become.user} DEFAULT_CACHE_PLUGIN_PATH: name: Cache Plugins Path default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache description: Colon separated paths in which Ansible will search for Cache Plugins. env: [{name: ANSIBLE_CACHE_PLUGINS}] ini: - {key: cache_plugins, section: defaults} type: pathspec CALLABLE_ACCEPT_LIST: name: Template 'callable' accept list default: [] description: Whitelist of callable methods to be made available to template evaluation env: - name: ANSIBLE_CALLABLE_WHITELIST deprecated: why: normalizing names to new standard version: "2.15" alternatives: 'ANSIBLE_CALLABLE_ENABLED' - name: ANSIBLE_CALLABLE_ENABLED version_added: '2.11' ini: - key: callable_whitelist section: defaults deprecated: why: normalizing names to new standard version: "2.15" alternatives: 'callable_enabled' - key: callable_enabled section: defaults version_added: '2.11' type: list CONTROLLER_PYTHON_WARNING: name: Running Older than Python 3.8 Warning default: True description: Toggle to control showing warnings related to running a Python version older than Python 3.8 on the controller env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}] ini: - {key: controller_python_warning, section: defaults} type: boolean DEFAULT_CALLBACK_PLUGIN_PATH: name: Callback Plugins Path default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback description: Colon separated paths in which Ansible will search for Callback Plugins. env: [{name: ANSIBLE_CALLBACK_PLUGINS}] ini: - {key: callback_plugins, section: defaults} type: pathspec yaml: {key: plugins.callback.path} CALLBACKS_ENABLED: name: Enable callback plugins that require it. default: [] description: - "List of enabled callbacks, not all callbacks need enabling, but many of those shipped with Ansible do as we don't want them activated by default." env: - name: ANSIBLE_CALLBACK_WHITELIST deprecated: why: normalizing names to new standard version: "2.15" alternatives: 'ANSIBLE_CALLBACKS_ENABLED' - name: ANSIBLE_CALLBACKS_ENABLED version_added: '2.11' ini: - key: callback_whitelist section: defaults deprecated: why: normalizing names to new standard version: "2.15" alternatives: 'callback_enabled' - key: callbacks_enabled section: defaults version_added: '2.11' type: list DEFAULT_CLICONF_PLUGIN_PATH: name: Cliconf Plugins Path default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf description: Colon separated paths in which Ansible will search for Cliconf Plugins. env: [{name: ANSIBLE_CLICONF_PLUGINS}] ini: - {key: cliconf_plugins, section: defaults} type: pathspec DEFAULT_CONNECTION_PLUGIN_PATH: name: Connection Plugins Path default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection description: Colon separated paths in which Ansible will search for Connection Plugins. env: [{name: ANSIBLE_CONNECTION_PLUGINS}] ini: - {key: connection_plugins, section: defaults} type: pathspec yaml: {key: plugins.connection.path} DEFAULT_DEBUG: name: Debug mode default: False description: - "Toggles debug output in Ansible. This is *very* verbose and can hinder multiprocessing. Debug output can also include secret information despite no_log settings being enabled, which means debug mode should not be used in production." 
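  # Illustrative sketch: debug output is usually enabled ad hoc from the
  # environment rather than persisted in ansible.cfg (playbook name is a placeholder):
  #   ANSIBLE_DEBUG=1 ansible-playbook site.yml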
env: [{name: ANSIBLE_DEBUG}] ini: - {key: debug, section: defaults} type: boolean DEFAULT_EXECUTABLE: name: Target shell executable default: /bin/sh description: - "This indicates the command used to spawn a shell under which Ansible runs its tasks on a target. Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is." env: [{name: ANSIBLE_EXECUTABLE}] ini: - {key: executable, section: defaults} DEFAULT_FACT_PATH: name: local fact path default: ~ description: - "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering." - "If not set, it will fall back to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``." - "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module." env: [{name: ANSIBLE_FACT_PATH}] ini: - {key: fact_path, section: defaults} type: string yaml: {key: facts.gathering.fact_path} DEFAULT_FILTER_PLUGIN_PATH: name: Jinja2 Filter Plugins Path default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins. env: [{name: ANSIBLE_FILTER_PLUGINS}] ini: - {key: filter_plugins, section: defaults} type: pathspec DEFAULT_FORCE_HANDLERS: name: Force handlers to run after failure default: False description: - This option controls if notified handlers run on a host even if a failure occurs on that host. - When false, the handlers will not run if a failure has occurred on a host. - This can also be set per play or on the command line. See Handlers and Failure for more details. env: [{name: ANSIBLE_FORCE_HANDLERS}] ini: - {key: force_handlers, section: defaults} type: boolean version_added: "1.9.1" DEFAULT_FORKS: name: Number of task forks default: 5 description: Maximum number of forks Ansible will use to execute tasks on target hosts. env: [{name: ANSIBLE_FORKS}] ini: - {key: forks, section: defaults} type: integer DEFAULT_GATHERING: name: Gathering behaviour default: 'implicit' description: - This setting controls the default policy of fact gathering (facts discovered about remote systems). - "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set." - "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play." - "The 'smart' value means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run." - "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin." env: [{name: ANSIBLE_GATHERING}] ini: - key: gathering section: defaults version_added: "1.6" choices: ['smart', 'explicit', 'implicit'] DEFAULT_GATHER_SUBSET: name: Gather facts subset default: ['all'] description: - Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering. See the module documentation for specifics. - "It does **not** apply to user defined M(ansible.builtin.setup) tasks." env: [{name: ANSIBLE_GATHER_SUBSET}] ini: - key: gather_subset section: defaults version_added: "2.1" type: list DEFAULT_GATHER_TIMEOUT: name: Gather facts timeout default: 10 description: - Set the timeout in seconds for the implicit fact gathering.
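  # Illustrative sketch combining the fact gathering knobs above (example values):
  #   [defaults]
  #   gathering = smart
  #   gather_subset = all
  #   gather_timeout = 30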
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks." env: [{name: ANSIBLE_GATHER_TIMEOUT}] ini: - {key: gather_timeout, section: defaults} type: integer yaml: {key: defaults.gather_timeout} DEFAULT_HANDLER_INCLUDES_STATIC: name: Make handler M(ansible.builtin.include) static default: False description: - "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'." env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}] ini: - {key: handler_includes_static, section: defaults} type: boolean deprecated: why: include itself is deprecated and this setting will not matter in the future version: "2.12" alternatives: none as its already built into the decision between include_tasks and import_tasks DEFAULT_HASH_BEHAVIOUR: name: Hash merge behaviour default: replace type: string choices: replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins). merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources. description: - This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible. - This does not affect variables whose values are scalars (integers, strings) or arrays. - "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable, leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it." - We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much complexity has been introduced into the data structures and plays. - For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars`` that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope, but the setting itself affects all sources and makes debugging even harder. - All playbooks and roles in the official examples repos assume the default for this setting. - Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables. For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file. - The Ansible project recommends you **avoid ``merge`` for new projects.** - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it. New projects should **avoid 'merge'**. env: [{name: ANSIBLE_HASH_BEHAVIOUR}] ini: - {key: hash_behaviour, section: defaults} DEFAULT_HOST_LIST: name: Inventory Source default: /etc/ansible/hosts description: Comma separated list of Ansible inventory sources env: - name: ANSIBLE_INVENTORY expand_relative_paths: True ini: - key: inventory section: defaults type: pathlist yaml: {key: defaults.inventory} DEFAULT_HTTPAPI_PLUGIN_PATH: name: HttpApi Plugins Path default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi description: Colon separated paths in which Ansible will search for HttpApi Plugins. 
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}] ini: - {key: httpapi_plugins, section: defaults} type: pathspec DEFAULT_INTERNAL_POLL_INTERVAL: name: Internal poll interval default: 0.001 env: [] ini: - {key: internal_poll_interval, section: defaults} type: float version_added: "2.2" description: - This sets the interval (in seconds) of Ansible internal processes polling each other. Lower values improve performance with large playbooks at the expense of extra CPU load. Higher values are more suitable for Ansible usage in automation scenarios, when UI responsiveness is not required but CPU usage might be a concern. - "The default corresponds to the value hardcoded in Ansible <= 2.1" DEFAULT_INVENTORY_PLUGIN_PATH: name: Inventory Plugins Path default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory description: Colon separated paths in which Ansible will search for Inventory Plugins. env: [{name: ANSIBLE_INVENTORY_PLUGINS}] ini: - {key: inventory_plugins, section: defaults} type: pathspec DEFAULT_JINJA2_EXTENSIONS: name: Enabled Jinja2 extensions default: [] description: - This is a developer-specific feature that allows enabling additional Jinja2 extensions. - "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)" env: [{name: ANSIBLE_JINJA2_EXTENSIONS}] ini: - {key: jinja2_extensions, section: defaults} DEFAULT_JINJA2_NATIVE: name: Use Jinja2's NativeEnvironment for templating default: False description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10. env: [{name: ANSIBLE_JINJA2_NATIVE}] ini: - {key: jinja2_native, section: defaults} type: boolean yaml: {key: jinja2_native} version_added: 2.7 DEFAULT_KEEP_REMOTE_FILES: name: Keep remote files default: False description: - Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote. - If this option is enabled it will disable ``ANSIBLE_PIPELINING``. env: [{name: ANSIBLE_KEEP_REMOTE_FILES}] ini: - {key: keep_remote_files, section: defaults} type: boolean DEFAULT_LIBVIRT_LXC_NOSECLABEL: # TODO: move to plugin name: No security label on Lxc default: False description: - "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh. This is necessary when running on systems which do not have SELinux." env: - name: LIBVIRT_LXC_NOSECLABEL deprecated: why: environment variables without ``ANSIBLE_`` prefix are deprecated version: "2.12" alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable - name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL ini: - {key: libvirt_lxc_noseclabel, section: selinux} type: boolean version_added: "2.1" DEFAULT_LOAD_CALLBACK_PLUGINS: name: Load callbacks for adhoc default: False description: - Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for ``ansible-playbook``. env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}] ini: - {key: bin_ansible_callbacks, section: defaults} type: boolean version_added: "1.8" DEFAULT_LOCAL_TMP: name: Controller temporary directory default: ~/.ansible/tmp description: Temporary directory for Ansible to use on the controller. env: [{name: ANSIBLE_LOCAL_TEMP}] ini: - {key: local_tmp, section: defaults} type: tmppath DEFAULT_LOG_PATH: name: Ansible log file path default: ~ description: File to which Ansible will log on the controller. 
When empty, logging is disabled. env: [{name: ANSIBLE_LOG_PATH}] ini: - {key: log_path, section: defaults} type: path DEFAULT_LOG_FILTER: name: Name filters for python logger default: [] description: List of logger names to filter out of the log file env: [{name: ANSIBLE_LOG_FILTER}] ini: - {key: log_filter, section: defaults} type: list DEFAULT_LOOKUP_PLUGIN_PATH: name: Lookup Plugins Path description: Colon separated paths in which Ansible will search for Lookup Plugins. default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup env: [{name: ANSIBLE_LOOKUP_PLUGINS}] ini: - {key: lookup_plugins, section: defaults} type: pathspec yaml: {key: defaults.lookup_plugins} DEFAULT_MANAGED_STR: name: Ansible managed default: 'Ansible managed' description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules. This is only relevant for those two modules. env: [] ini: - {key: ansible_managed, section: defaults} yaml: {key: defaults.ansible_managed} DEFAULT_MODULE_ARGS: name: Adhoc default arguments default: '' description: - This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified. env: [{name: ANSIBLE_MODULE_ARGS}] ini: - {key: module_args, section: defaults} DEFAULT_MODULE_COMPRESSION: name: Python module compression default: ZIP_DEFLATED description: Compression scheme to use when transferring Python modules to the target. env: [] ini: - {key: module_compression, section: defaults} # vars: # - name: ansible_module_compression DEFAULT_MODULE_NAME: name: Default adhoc module default: command description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``." env: [] ini: - {key: module_name, section: defaults} DEFAULT_MODULE_PATH: name: Modules Path description: Colon separated paths in which Ansible will search for Modules. default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules env: [{name: ANSIBLE_LIBRARY}] ini: - {key: library, section: defaults} type: pathspec DEFAULT_MODULE_UTILS_PATH: name: Module Utils Path description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules. default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils env: [{name: ANSIBLE_MODULE_UTILS}] ini: - {key: module_utils, section: defaults} type: pathspec DEFAULT_NETCONF_PLUGIN_PATH: name: Netconf Plugins Path default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf description: Colon separated paths in which Ansible will search for Netconf Plugins. env: [{name: ANSIBLE_NETCONF_PLUGINS}] ini: - {key: netconf_plugins, section: defaults} type: pathspec DEFAULT_NO_LOG: name: No log default: False description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures." env: [{name: ANSIBLE_NO_LOG}] ini: - {key: no_log, section: defaults} type: boolean DEFAULT_NO_TARGET_SYSLOG: name: No syslog on target default: False description: - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will disable newer style PowerShell modules from writing to the event log. env: [{name: ANSIBLE_NO_TARGET_SYSLOG}] ini: - {key: no_target_syslog, section: defaults} vars: - name: ansible_no_target_syslog version_added: '2.10' type: boolean yaml: {key: defaults.no_target_syslog} DEFAULT_NULL_REPRESENTATION: name: Represent a null default: ~ description: What templating should return as a 'null' value.
When not set, it will let Jinja2 decide. env: [{name: ANSIBLE_NULL_REPRESENTATION}] ini: - {key: null_representation, section: defaults} type: none DEFAULT_POLL_INTERVAL: name: Async poll interval default: 15 description: - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how often to check back on the status of those tasks when an explicit poll interval is not supplied. The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when something may have completed. env: [{name: ANSIBLE_POLL_INTERVAL}] ini: - {key: poll_interval, section: defaults} type: integer DEFAULT_PRIVATE_KEY_FILE: name: Private key file default: ~ description: - For connections that use a certificate or key file to authenticate rather than an agent or passwords, you can set the default value here to avoid re-specifying --private-key with every invocation. env: [{name: ANSIBLE_PRIVATE_KEY_FILE}] ini: - {key: private_key_file, section: defaults} type: path DEFAULT_PRIVATE_ROLE_VARS: name: Private role variables default: False description: - Makes role variables inaccessible from other roles. - This was introduced as a way to reset role variables to default values if a role is used more than once in a playbook. env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}] ini: - {key: private_role_vars, section: defaults} type: boolean yaml: {key: defaults.private_role_vars} DEFAULT_REMOTE_PORT: name: Remote port default: ~ description: Port to use in remote connections; when blank it will use the connection plugin default. env: [{name: ANSIBLE_REMOTE_PORT}] ini: - {key: remote_port, section: defaults} type: integer yaml: {key: defaults.remote_port} DEFAULT_REMOTE_USER: name: Login/Remote User default: description: - Sets the login user for the target machines. - "When blank it uses the connection plugin's default, normally the user currently executing Ansible." env: [{name: ANSIBLE_REMOTE_USER}] ini: - {key: remote_user, section: defaults} DEFAULT_ROLES_PATH: name: Roles path default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles description: Colon separated paths in which Ansible will search for Roles. env: [{name: ANSIBLE_ROLES_PATH}] expand_relative_paths: True ini: - {key: roles_path, section: defaults} type: pathspec yaml: {key: defaults.roles_path} DEFAULT_SELINUX_SPECIAL_FS: name: Problematic file systems default: fuse, nfs, vboxsf, ramfs, 9p, vfat description: - "Some filesystems do not support safe operations and/or return inconsistent errors; this setting makes Ansible 'tolerate' those in the list without causing fatal errors." - Data corruption may occur and writes are not always verified when a filesystem is in the list. env: - name: ANSIBLE_SELINUX_SPECIAL_FS version_added: "2.9" ini: - {key: special_context_filesystems, section: selinux} type: list DEFAULT_STDOUT_CALLBACK: name: Main display callback plugin default: default description: - "Set the main callback used to display Ansible output; you can only have one at a time." - You can have many other callbacks, but just one can be in charge of stdout. env: [{name: ANSIBLE_STDOUT_CALLBACK}] ini: - {key: stdout_callback, section: defaults} ENABLE_TASK_DEBUGGER: name: Whether to enable the task debugger default: False description: - Whether or not to enable the task debugger; this previously was done as a strategy plugin. - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when a task fails or a host is unreachable.
Use the debugger keyword for more flexibility. type: boolean env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}] ini: - {key: enable_task_debugger, section: defaults} version_added: "2.5" TASK_DEBUGGER_IGNORE_ERRORS: name: Whether a failed task with ignore_errors=True will still invoke the debugger default: True description: - This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True is specified. - True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors. type: boolean env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}] ini: - {key: task_debugger_ignore_errors, section: defaults} version_added: "2.7" DEFAULT_STRATEGY: name: Implied strategy default: 'linear' description: Set the default strategy used for plays. env: [{name: ANSIBLE_STRATEGY}] ini: - {key: strategy, section: defaults} version_added: "2.3" DEFAULT_STRATEGY_PLUGIN_PATH: name: Strategy Plugins Path description: Colon separated paths in which Ansible will search for Strategy Plugins. default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy env: [{name: ANSIBLE_STRATEGY_PLUGINS}] ini: - {key: strategy_plugins, section: defaults} type: pathspec DEFAULT_SU: default: False description: 'Toggle the use of "su" for tasks.' env: [{name: ANSIBLE_SU}] ini: - {key: su, section: defaults} type: boolean yaml: {key: defaults.su} DEFAULT_SYSLOG_FACILITY: name: syslog facility default: LOG_USER description: Syslog facility to use when Ansible logs to the remote target env: [{name: ANSIBLE_SYSLOG_FACILITY}] ini: - {key: syslog_facility, section: defaults} DEFAULT_TASK_INCLUDES_STATIC: name: Task include static default: False description: - The `include` tasks can be static or dynamic; this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task. env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}] ini: - {key: task_includes_static, section: defaults} type: boolean version_added: "2.1" deprecated: why: include itself is deprecated and this setting will not matter in the future version: "2.12" alternatives: None, as it's already built into the decision between include_tasks and import_tasks DEFAULT_TERMINAL_PLUGIN_PATH: name: Terminal Plugins Path default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal description: Colon separated paths in which Ansible will search for Terminal Plugins. env: [{name: ANSIBLE_TERMINAL_PLUGINS}] ini: - {key: terminal_plugins, section: defaults} type: pathspec DEFAULT_TEST_PLUGIN_PATH: name: Jinja2 Test Plugins Path description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins. default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test env: [{name: ANSIBLE_TEST_PLUGINS}] ini: - {key: test_plugins, section: defaults} type: pathspec DEFAULT_TIMEOUT: name: Connection timeout default: 10 description: This is the default timeout for connection plugins to use.
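  # Illustrative sketch for the task debugger above: it can be toggled from the
  # environment, or per play via the debugger keyword (play body is hypothetical):
  #   export ANSIBLE_ENABLE_TASK_DEBUGGER=True
  #   - hosts: all
  #     debugger: on_failed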
env: [{name: ANSIBLE_TIMEOUT}] ini: - {key: timeout, section: defaults} type: integer DEFAULT_TRANSPORT: # note that ssh_utils refs this and needs to be updated if removed name: Connection plugin default: smart description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions" env: [{name: ANSIBLE_TRANSPORT}] ini: - {key: transport, section: defaults} DEFAULT_UNDEFINED_VAR_BEHAVIOR: name: Jinja2 fail on undefined default: True version_added: "1.3" description: - When True, this causes ansible templating to fail steps that reference variable names that are likely misspelled. - "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written." env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}] ini: - {key: error_on_undefined_vars, section: defaults} type: boolean DEFAULT_VARS_PLUGIN_PATH: name: Vars Plugins Path default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars description: Colon separated paths in which Ansible will search for Vars Plugins. env: [{name: ANSIBLE_VARS_PLUGINS}] ini: - {key: vars_plugins, section: defaults} type: pathspec # TODO: unused? #DEFAULT_VAR_COMPRESSION_LEVEL: # default: 0 # description: 'TODO: write it' # env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}] # ini: # - {key: var_compression_level, section: defaults} # type: integer # yaml: {key: defaults.var_compression_level} DEFAULT_VAULT_ID_MATCH: name: Force vault id match default: False description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id' env: [{name: ANSIBLE_VAULT_ID_MATCH}] ini: - {key: vault_id_match, section: defaults} yaml: {key: defaults.vault_id_match} DEFAULT_VAULT_IDENTITY: name: Vault id label default: default description: 'The label to use for the default vault id label in cases where a vault id label is not provided' env: [{name: ANSIBLE_VAULT_IDENTITY}] ini: - {key: vault_identity, section: defaults} yaml: {key: defaults.vault_identity} DEFAULT_VAULT_ENCRYPT_IDENTITY: name: Vault id to use for encryption default: description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.' env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}] ini: - {key: vault_encrypt_identity, section: defaults} yaml: {key: defaults.vault_encrypt_identity} DEFAULT_VAULT_IDENTITY_LIST: name: Default vault ids default: [] description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.' env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}] ini: - {key: vault_identity_list, section: defaults} type: list yaml: {key: defaults.vault_identity_list} DEFAULT_VAULT_PASSWORD_FILE: name: Vault password file default: ~ description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id' env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}] ini: - {key: vault_password_file, section: defaults} type: path yaml: {key: defaults.vault_password_file} DEFAULT_VERBOSITY: name: Verbosity default: 0 description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
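  # Illustrative sketch tying together the vault id settings above, using the
  # label@source form (labels and paths are hypothetical):
  #   [defaults]
  #   vault_identity_list = dev@~/.vault_dev_pass, prod@/etc/ansible/prod_pass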
env: [{name: ANSIBLE_VERBOSITY}] ini: - {key: verbosity, section: defaults} type: integer DEPRECATION_WARNINGS: name: Deprecation messages default: True description: "Toggle to control the showing of deprecation warnings" env: [{name: ANSIBLE_DEPRECATION_WARNINGS}] ini: - {key: deprecation_warnings, section: defaults} type: boolean DEVEL_WARNING: name: Running devel warning default: True description: Toggle to control showing warnings related to running devel env: [{name: ANSIBLE_DEVEL_WARNING}] ini: - {key: devel_warning, section: defaults} type: boolean DIFF_ALWAYS: name: Show differences default: False description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``. env: [{name: ANSIBLE_DIFF_ALWAYS}] ini: - {key: always, section: diff} type: bool DIFF_CONTEXT: name: Difference context default: 3 description: How many lines of context to show when displaying the differences between files. env: [{name: ANSIBLE_DIFF_CONTEXT}] ini: - {key: context, section: diff} type: integer DISPLAY_ARGS_TO_STDOUT: name: Show task arguments default: False description: - "Normally ``ansible-playbook`` will print a header for each task that is run. These headers will contain the name: field from the task if you specified one. If you didn't, then ``ansible-playbook`` uses the task's action to help you tell which task is presently running. Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action. If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header." - "This setting defaults to False because there is a chance that you have sensitive values in your parameters and you do not want those to be printed." - "If you set this to True you should be sure that you have secured your environment's stdout (no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values. See How do I keep secret data in my playbook? for more information." env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}] ini: - {key: display_args_to_stdout, section: defaults} type: boolean version_added: "2.1" DISPLAY_SKIPPED_HOSTS: name: Show skipped results default: True description: "Toggle to control displaying skipped task/host entries in a task in the default callback" env: - name: DISPLAY_SKIPPED_HOSTS deprecated: why: environment variables without ``ANSIBLE_`` prefix are deprecated version: "2.12" alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable - name: ANSIBLE_DISPLAY_SKIPPED_HOSTS ini: - {key: display_skipped_hosts, section: defaults} type: boolean DOCSITE_ROOT_URL: name: Root docsite URL default: https://docs.ansible.com/ansible/ description: Root docsite URL used to generate docs URLs in warning/error text; must be an absolute URL with valid scheme and trailing slash. ini: - {key: docsite_root_url, section: defaults} version_added: "2.8" DUPLICATE_YAML_DICT_KEY: name: Controls ansible behaviour when finding duplicate keys in YAML. default: warn description: - By default Ansible will issue a warning when a duplicate dict key is encountered in YAML. - Set this to 'error' to make duplicate keys fatal instead, or to 'ignore' to silence the warning.
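  # Illustrative sketch (value is one of the valid choices): making duplicate
  # YAML keys fatal via ansible.cfg:
  #   [defaults]
  #   duplicate_dict_key = error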
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}] ini: - {key: duplicate_dict_key, section: defaults} type: string choices: ['warn', 'error', 'ignore'] version_added: "2.9" ERROR_ON_MISSING_HANDLER: name: Missing handler error default: True description: "Toggle to allow missing handlers to become a warning instead of an error when notifying." env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}] ini: - {key: error_on_missing_handler, section: defaults} type: boolean CONNECTION_FACTS_MODULES: name: Map of connections to fact modules default: # use ansible.legacy names on unqualified facts modules to allow library/ overrides asa: ansible.legacy.asa_facts cisco.asa.asa: cisco.asa.asa_facts eos: ansible.legacy.eos_facts arista.eos.eos: arista.eos.eos_facts frr: ansible.legacy.frr_facts frr.frr.frr: frr.frr.frr_facts ios: ansible.legacy.ios_facts cisco.ios.ios: cisco.ios.ios_facts iosxr: ansible.legacy.iosxr_facts cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts junos: ansible.legacy.junos_facts junipernetworks.junos.junos: junipernetworks.junos.junos_facts nxos: ansible.legacy.nxos_facts cisco.nxos.nxos: cisco.nxos.nxos_facts vyos: ansible.legacy.vyos_facts vyos.vyos.vyos: vyos.vyos.vyos_facts exos: ansible.legacy.exos_facts extreme.exos.exos: extreme.exos.exos_facts slxos: ansible.legacy.slxos_facts extreme.slxos.slxos: extreme.slxos.slxos_facts voss: ansible.legacy.voss_facts extreme.voss.voss: extreme.voss.voss_facts ironware: ansible.legacy.ironware_facts community.network.ironware: community.network.ironware_facts description: "Which modules to run during a play's fact gathering stage based on connection" type: dict FACTS_MODULES: name: Gather Facts Modules default: - smart description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type." env: [{name: ANSIBLE_FACTS_MODULES}] ini: - {key: facts_modules, section: defaults} type: list vars: - name: ansible_facts_modules GALAXY_IGNORE_CERTS: name: Galaxy validate certs default: False description: - If set to yes, ansible-galaxy will not validate TLS certificates. This can be useful for testing against a server with a self-signed certificate. env: [{name: ANSIBLE_GALAXY_IGNORE}] ini: - {key: ignore_certs, section: galaxy} type: boolean GALAXY_ROLE_SKELETON: name: Galaxy role or collection skeleton directory default: description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``. env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}] ini: - {key: role_skeleton, section: galaxy} type: path GALAXY_ROLE_SKELETON_IGNORE: name: Galaxy skeleton ignore default: ["^.git$", "^.*/.git_keep$"] description: Patterns of files to ignore inside a Galaxy role or collection skeleton directory env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}] ini: - {key: role_skeleton_ignore, section: galaxy} type: list # TODO: unused? #GALAXY_SCMS: # name: Galaxy SCMS # default: git, hg # description: Available galaxy source control management systems. # env: [{name: ANSIBLE_GALAXY_SCMS}] # ini: # - {key: scms, section: galaxy} # type: list GALAXY_SERVER: default: https://galaxy.ansible.com description: "URL to prepend when roles don't specify the full URI; assume they are referencing this server as the source." env: [{name: ANSIBLE_GALAXY_SERVER}] ini: - {key: server, section: galaxy} yaml: {key: galaxy.server} GALAXY_SERVER_LIST: description: - A list of Galaxy servers to use when installing a collection.
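  # Illustrative sketch (server name is hypothetical): each name listed in
  # server_list maps to its own ini section, as described below:
  #   [galaxy]
  #   server_list = my_galaxy
  #   [galaxy_server.my_galaxy]
  #   url = https://galaxy.ansible.com/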
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details. - 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.' - The order of servers in this list is used as the order in which a collection is resolved. - Setting this config option will ignore the :ref:`galaxy_server` config option. env: [{name: ANSIBLE_GALAXY_SERVER_LIST}] ini: - {key: server_list, section: galaxy} type: list version_added: "2.9" GALAXY_TOKEN_PATH: default: ~/.ansible/galaxy_token description: "Local path to galaxy access token file" env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}] ini: - {key: token_path, section: galaxy} type: path version_added: "2.9" GALAXY_DISPLAY_PROGRESS: default: ~ description: - Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when outputting the stdout to a file. - This config option controls whether the display wheel is shown or not. - The default is to show the display wheel if stdout has a tty. env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}] ini: - {key: display_progress, section: galaxy} type: bool version_added: "2.10" GALAXY_CACHE_DIR: default: ~/.ansible/galaxy_cache description: - The directory that stores cached responses from a Galaxy server. - This is only used by the ``ansible-galaxy collection install`` and ``download`` commands. - Cache files inside this dir will be ignored if they are world writable. env: - name: ANSIBLE_GALAXY_CACHE_DIR ini: - section: galaxy key: cache_dir type: path version_added: '2.11' HOST_KEY_CHECKING: # note: constant not in use by ssh plugin anymore # TODO: check non ssh connection plugins for use/migration name: Check host keys default: True description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host' env: [{name: ANSIBLE_HOST_KEY_CHECKING}] ini: - {key: host_key_checking, section: defaults} type: boolean HOST_PATTERN_MISMATCH: name: Control host pattern mismatch behaviour default: 'warning' description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}] ini: - {key: host_pattern_mismatch, section: inventory} choices: ['warning', 'error', 'ignore'] version_added: "2.8" INTERPRETER_PYTHON: name: Python interpreter path (or automatic discovery behavior) used for module execution default: auto_legacy env: [{name: ANSIBLE_PYTHON_INTERPRETER}] ini: - {key: interpreter_python, section: defaults} vars: - {name: ansible_python_interpreter} version_added: "2.8" description: - Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode. Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes employ a lookup table to use the included system Python (on distributions known to include one), falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``.
The default value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases that always defaulted to ``/usr/bin/python``, it will use that interpreter if present (and issue a warning that the default behavior will change to that of ``auto`` in a future Ansible release). INTERPRETER_PYTHON_DISTRO_MAP: name: Mapping of known included platform pythons for various Linux distros default: centos: &rhelish '6': /usr/bin/python '8': /usr/libexec/platform-python debian: '8': /usr/bin/python '10': /usr/bin/python3 fedora: '23': /usr/bin/python3 oracle: *rhelish redhat: *rhelish rhel: *rhelish ubuntu: '14': /usr/bin/python '16': /usr/bin/python3 version_added: "2.8" # FUTURE: add inventory override once we're sure it can't be abused by a rogue target # FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc? INTERPRETER_PYTHON_FALLBACK: name: Ordered list of Python interpreters to check for in discovery default: - /usr/bin/python - python3.9 - python3.8 - python3.7 - python3.6 - python3.5 - python2.7 - python2.6 - /usr/libexec/platform-python - /usr/bin/python3 - python # FUTURE: add inventory override once we're sure it can't be abused by a rogue target version_added: "2.8" TRANSFORM_INVALID_GROUP_CHARS: name: Transform invalid characters in group names default: 'never' description: - Make Ansible transform invalid characters in group names supplied by inventory sources. - If 'never', it will allow the group name but warn about the issue. - When 'ignore', it does the same as 'never', without issuing a warning. - When 'always', it will replace any invalid characters with '_' (underscore) and warn the user. - When 'silently', it does the same as 'always', without issuing a warning. env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}] ini: - {key: force_valid_group_names, section: defaults} type: string choices: ['always', 'never', 'ignore', 'silently'] version_added: '2.8' INVALID_TASK_ATTRIBUTE_FAILED: name: Controls whether invalid attributes for a task result in errors instead of warnings default: True description: If 'false', invalid attributes for a task will result in warnings instead of errors type: boolean env: - name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED ini: - key: invalid_task_attribute_failed section: defaults version_added: "2.7" INVENTORY_ANY_UNPARSED_IS_FAILED: name: Controls whether any unparseable inventory source is a fatal error default: False description: > If 'true', it is a fatal error when any given inventory source cannot be successfully parsed by any available inventory plugin; otherwise, this situation only attracts a warning. type: boolean env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}] ini: - {key: any_unparsed_is_failed, section: inventory} version_added: "2.7" INVENTORY_CACHE_ENABLED: name: Inventory caching enabled default: False description: Toggle to turn on inventory caching env: [{name: ANSIBLE_INVENTORY_CACHE}] ini: - {key: cache, section: inventory} type: bool INVENTORY_CACHE_PLUGIN: name: Inventory cache plugin description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead. env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}] ini: - {key: cache_plugin, section: inventory} INVENTORY_CACHE_PLUGIN_CONNECTION: name: Inventory cache plugin URI to override the defaults section description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead.
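  # Illustrative sketch of the [inventory] cache settings as a whole (plugin
  # and path are example values):
  #   [inventory]
  #   cache = True
  #   cache_plugin = jsonfile
  #   cache_connection = /tmp/ansible_inventory_cache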
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}] ini: - {key: cache_connection, section: inventory} INVENTORY_CACHE_PLUGIN_PREFIX: name: Inventory cache plugin table prefix description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead. env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}] default: ansible_facts ini: - {key: cache_prefix, section: inventory} INVENTORY_CACHE_TIMEOUT: name: Inventory cache plugin expiration timeout description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead. default: 3600 env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}] ini: - {key: cache_timeout, section: inventory} INVENTORY_ENABLED: name: Active Inventory plugins default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml'] description: List of enabled inventory plugins, it also determines the order in which they are used. env: [{name: ANSIBLE_INVENTORY_ENABLED}] ini: - {key: enable_plugins, section: inventory} type: list INVENTORY_EXPORT: name: Set ansible-inventory into export mode default: False description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting. env: [{name: ANSIBLE_INVENTORY_EXPORT}] ini: - {key: export, section: inventory} type: bool INVENTORY_IGNORE_EXTS: name: Inventory ignore extensions default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}" description: List of extensions to ignore when using a directory as an inventory source env: [{name: ANSIBLE_INVENTORY_IGNORE}] ini: - {key: inventory_ignore_extensions, section: defaults} - {key: ignore_extensions, section: inventory} type: list INVENTORY_IGNORE_PATTERNS: name: Inventory ignore patterns default: [] description: List of patterns to ignore when using a directory as an inventory source env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}] ini: - {key: inventory_ignore_patterns, section: defaults} - {key: ignore_patterns, section: inventory} type: list INVENTORY_UNPARSED_IS_FAILED: name: Unparsed Inventory failure default: False description: > If 'true' it is a fatal error if every single potential inventory source fails to parse; otherwise this situation will only attract a warning. env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}] ini: - {key: unparsed_is_failed, section: inventory} type: bool MAX_FILE_SIZE_FOR_DIFF: name: Diff maximum file size default: 104448 description: Maximum size of files to be considered for diff display env: [{name: ANSIBLE_MAX_DIFF_SIZE}] ini: - {key: max_diff_size, section: defaults} type: int NETWORK_GROUP_MODULES: name: Network module families default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos] description: 'TODO: write it' env: - name: NETWORK_GROUP_MODULES deprecated: why: environment variables without ``ANSIBLE_`` prefix are deprecated version: "2.12" alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable - name: ANSIBLE_NETWORK_GROUP_MODULES ini: - {key: network_group_modules, section: defaults} type: list yaml: {key: defaults.network_group_modules} INJECT_FACTS_AS_VARS: default: True description: - Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace. - Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
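  # Illustrative sketch: with injection disabled, facts remain reachable through
  # the ansible_facts dictionary (the debug task shown is hypothetical):
  #   - debug: msg="{{ ansible_facts['distribution'] }}"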
env: [{name: ANSIBLE_INJECT_FACT_VARS}] ini: - {key: inject_facts_as_vars, section: defaults} type: boolean version_added: "2.5" MODULE_IGNORE_EXTS: name: Module ignore extensions default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}" description: - List of extensions to ignore when looking for modules to load - This is for rejecting script and binary module fallback extensions env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}] ini: - {key: module_ignore_exts, section: defaults} type: list OLD_PLUGIN_CACHE_CLEARING: description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour. env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}] ini: - {key: old_plugin_cache_clear, section: defaults} type: boolean default: False version_added: "2.8" PARAMIKO_HOST_KEY_AUTO_ADD: # TODO: move to plugin default: False description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}] ini: - {key: host_key_auto_add, section: paramiko_connection} type: boolean PARAMIKO_LOOK_FOR_KEYS: name: look for keys default: True description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}] ini: - {key: look_for_keys, section: paramiko_connection} type: boolean PERSISTENT_CONTROL_PATH_DIR: name: Persistence socket path default: ~/.ansible/pc description: Path to socket to be used by the connection persistence system. env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}] ini: - {key: control_path_dir, section: persistent_connection} type: path PERSISTENT_CONNECT_TIMEOUT: name: Persistence timeout default: 30 description: This controls how long the persistent connection will remain idle before it is destroyed. env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}] ini: - {key: connect_timeout, section: persistent_connection} type: integer PERSISTENT_CONNECT_RETRY_TIMEOUT: name: Persistence connection retry timeout default: 15 description: This controls the retry timeout for persistent connection to connect to the local domain socket. env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}] ini: - {key: connect_retry_timeout, section: persistent_connection} type: integer PERSISTENT_COMMAND_TIMEOUT: name: Persistence command timeout default: 30 description: This controls the amount of time to wait for response from remote device before timing out persistent connection. env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}] ini: - {key: command_timeout, section: persistent_connection} type: int PLAYBOOK_DIR: name: playbook dir override for non-playbook CLIs (ala --playbook-dir) version_added: "2.9" description: - A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it. env: [{name: ANSIBLE_PLAYBOOK_DIR}] ini: [{key: playbook_dir, section: defaults}] type: path PLAYBOOK_VARS_ROOT: name: playbook vars files root default: top version_added: "2.4.1" description: - This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars - The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory. - The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory. - The ``all`` option examines from the first parent to the current playbook.
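  # Illustrative sketch (example choice): resolving host_vars/group_vars from
  # the directory of the currently running playbook:
  #   [defaults]
  #   playbook_vars_root = bottom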
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}] ini: - {key: playbook_vars_root, section: defaults} choices: [ top, bottom, all ] PLUGIN_FILTERS_CFG: name: Config file for limiting valid plugins default: null version_added: "2.5.0" description: - "A path to configuration for filtering which plugins installed on the system are allowed to be used." - "See :ref:`plugin_filtering_config` for details of the filter file's format." - "The default is /etc/ansible/plugin_filters.yml" ini: - key: plugin_filters_cfg section: default deprecated: why: specifying "plugin_filters_cfg" under the "default" section is deprecated version: "2.12" alternatives: the "defaults" section instead - key: plugin_filters_cfg section: defaults type: path PYTHON_MODULE_RLIMIT_NOFILE: name: Adjust maximum file descriptor soft limit during Python module execution description: - Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default value of 0 does not attempt to adjust existing system-defined limits. default: 0 env: - {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE} ini: - {key: python_module_rlimit_nofile, section: defaults} vars: - {name: ansible_python_module_rlimit_nofile} version_added: '2.8' RETRY_FILES_ENABLED: name: Retry files default: False description: This controls whether a failed Ansible playbook should create a .retry file. env: [{name: ANSIBLE_RETRY_FILES_ENABLED}] ini: - {key: retry_files_enabled, section: defaults} type: bool RETRY_FILES_SAVE_PATH: name: Retry files path default: ~ description: - This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled. - This file will be overwritten after each run with the list of failed hosts from all plays. env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}] ini: - {key: retry_files_save_path, section: defaults} type: path RUN_VARS_PLUGINS: name: When should vars plugins run relative to inventory default: demand description: - This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection. - Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks. - Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source. env: [{name: ANSIBLE_RUN_VARS_PLUGINS}] ini: - {key: run_vars_plugins, section: defaults} type: str choices: ['demand', 'start'] version_added: "2.10" SHOW_CUSTOM_STATS: name: Display custom stats default: False description: 'This adds the custom stats set via the set_stats plugin to the default output' env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}] ini: - {key: show_custom_stats, section: defaults} type: bool STRING_TYPE_FILTERS: name: Filters to preserve strings default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json] description: - "This list of filters avoids 'type conversion' when templating variables" - Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
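  # Illustrative sketch: because to_json is in this list, a template such as the
  # following keeps its string result instead of being re-typed into a dict
  # (variable name is hypothetical):
  #   payload: "{{ my_dict | to_json }}"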
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}] ini: - {key: dont_type_filters, section: jinja2} type: list SYSTEM_WARNINGS: name: System warnings default: True description: - Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts) - These may include warnings about 3rd party packages or other conditions that should be resolved if possible. env: [{name: ANSIBLE_SYSTEM_WARNINGS}] ini: - {key: system_warnings, section: defaults} type: boolean TAGS_RUN: name: Run Tags default: [] type: list description: default list of tags to run in your plays, Skip Tags has precedence. env: [{name: ANSIBLE_RUN_TAGS}] ini: - {key: run, section: tags} version_added: "2.5" TAGS_SKIP: name: Skip Tags default: [] type: list description: default list of tags to skip in your plays, has precedence over Run Tags env: [{name: ANSIBLE_SKIP_TAGS}] ini: - {key: skip, section: tags} version_added: "2.5" TASK_TIMEOUT: name: Task Timeout default: 0 description: - Set the maximum time (in seconds) that a task can run for. - If set to 0 (the default) there is no timeout. env: [{name: ANSIBLE_TASK_TIMEOUT}] ini: - {key: task_timeout, section: defaults} type: integer version_added: '2.10' WORKER_SHUTDOWN_POLL_COUNT: name: Worker Shutdown Poll Count default: 0 description: - The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly. - After this limit is reached any worker processes still running will be terminated. - This is for internal use only. env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}] type: integer version_added: '2.10' WORKER_SHUTDOWN_POLL_DELAY: name: Worker Shutdown Poll Delay default: 0.1 description: - The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly. - This is for internal use only. env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}] type: float version_added: '2.10' USE_PERSISTENT_CONNECTIONS: name: Persistence default: False description: Toggles the use of persistence for connections. env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}] ini: - {key: use_persistent_connections, section: defaults} type: boolean VARIABLE_PLUGINS_ENABLED: name: Vars plugin enabled list default: ['host_group_vars'] description: Whitelist for variable plugins that require it. env: [{name: ANSIBLE_VARS_ENABLED}] ini: - {key: vars_plugins_enabled, section: defaults} type: list version_added: "2.10" VARIABLE_PRECEDENCE: name: Group variable precedence default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play'] description: Allows to change the group variable precedence merge order. env: [{name: ANSIBLE_PRECEDENCE}] ini: - {key: precedence, section: defaults} type: list version_added: "2.4" WIN_ASYNC_STARTUP_TIMEOUT: name: Windows Async Startup Timeout default: 5 description: - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load. - This is not the total time an async command can run for, but is a separate timeout to wait for an async command to start. 
The task will only start to be timed against its async_timeout once it has connected to the pipe, so the overall maximum duration the task can take will be extended by the amount specified here. env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}] ini: - {key: win_async_startup_timeout, section: defaults} type: integer vars: - {name: ansible_win_async_startup_timeout} version_added: '2.10' YAML_FILENAME_EXTENSIONS: name: Valid YAML extensions default: [".yml", ".yaml", ".json"] description: - "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these." - 'This affects vars_files, include_vars, inventory and vars plugins among others.' env: - name: ANSIBLE_YAML_FILENAME_EXT ini: - section: defaults key: yaml_valid_extensions type: list NETCONF_SSH_CONFIG: description: This variable is used to enable bastion/jump host with netconf connection. If set to True the bastion/jump host ssh settings should be present in ~/.ssh/config file, alternatively it can be set to custom ssh configuration file path to read the bastion/jump host settings. env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}] ini: - {key: ssh_config, section: netconf_connection} yaml: {key: netconf_connection.ssh_config} default: null STRING_CONVERSION_ACTION: version_added: '2.8' description: - Action to take when a module parameter value is converted to a string (this does not affect variables). For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc. will be converted by the YAML parser unless fully quoted. - Valid options are 'error', 'warn', and 'ignore'. - Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12. default: 'warn' env: - name: ANSIBLE_STRING_CONVERSION_ACTION ini: - section: defaults key: string_conversion_action type: string VERBOSE_TO_STDERR: version_added: '2.8' description: - Force 'verbose' option to use stderr instead of stdout default: False env: - name: ANSIBLE_VERBOSE_TO_STDERR ini: - section: defaults key: verbose_to_stderr type: bool ...
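The `env`/`ini` pairs in the config entries above all follow one pattern: a setting can come from an environment variable, an `ansible.cfg` key, or a built-in default, with the environment variable taking precedence. A minimal sketch of that lookup order follows; `resolve_setting` is a hypothetical helper for illustration, not Ansible's actual config loader.

```python
# A minimal sketch, assuming the documented lookup order (environment
# variable over ini key over built-in default), of how one of the settings
# above, RETRY_FILES_ENABLED, could be resolved.
import os
import configparser

def resolve_setting(env_name, ini_section, ini_key, default, cfg_path="ansible.cfg"):
    # 1. An exported ANSIBLE_* environment variable wins.
    if env_name in os.environ:
        return os.environ[env_name]
    # 2. Otherwise consult the ini file; a missing file or key falls through
    #    to the hard-coded default.
    parser = configparser.ConfigParser()
    parser.read(cfg_path)  # read() silently skips files that do not exist
    return parser.get(ini_section, ini_key, fallback=default)

print(resolve_setting("ANSIBLE_RETRY_FILES_ENABLED", "defaults", "retry_files_enabled", "False"))
```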
closed
ansible/ansible
https://github.com/ansible/ansible
74255
difference filter unexpectedly performs case-insensitive comparison
### Summary When using the `difference` filter with elements whose values differ only in case, only one of them is kept. ### Issue Type Feature Request ### Component Name filters ### Ansible Version ```console $ ansible --version ansible 2.9.10 config file = None configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.8 (default, Mar 4 2021, 21:24:42) [GCC 10.2.0] ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment RHEL 7 ### Steps to Reproduce ```yaml - hosts: localhost vars: list1: - a - A - b - B list2: - b tasks: - debug: msg: "{{ list1 | difference(list2) }}" - debug: msg: "{{ list1 | difference([]) }}" ``` ```shell ansible-playbook playbook.yml ``` ### Expected Results **All** the elements of `list1` that don’t exist in `list2` (as documented) ``` TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "b", "B" ] } ``` ### Actual Results ```console TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "b" ] } ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
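For reference, a short standalone sketch of the behaviour the report expects from `difference`: keep every element of `list1` that is absent from `list2`, preserving order and treating `a` and `A` as distinct values. This illustrates the documented contract only; `expected_difference` is a hypothetical helper, not Ansible's implementation.

```python
# A standalone sketch of the behaviour the report expects from `difference`:
# keep every element of list1 that is absent from list2, preserving order
# and treating 'a' and 'A' as distinct values.
def expected_difference(a, b):
    b_set = set(b)  # fine here: plain strings are hashable
    return [x for x in a if x not in b_set]

list1 = ["a", "A", "b", "B"]
list2 = ["b"]

print(expected_difference(list1, list2))  # ['a', 'A', 'B']
print(expected_difference(list1, []))     # ['a', 'A', 'b', 'B']
```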
https://github.com/ansible/ansible/issues/74255
https://github.com/ansible/ansible/pull/74256
e6a5245d6088894d56b8e0406f8ffed9a57046c3
8698855ffdb36c8e987d80911f1569c8b033e841
2021-04-13T15:44:58Z
python
2021-04-23T17:44:43Z
changelogs/fragments/74256-set-theory-filters-behavior.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74255
difference filter unexpectedly performs case-insensitive comparison
### Summary When using the `difference` filter with elements whose values differ only in case, only one of them is kept. ### Issue Type Feature Request ### Component Name filters ### Ansible Version ```console $ ansible --version ansible 2.9.10 config file = None configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.8 (default, Mar 4 2021, 21:24:42) [GCC 10.2.0] ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment RHEL 7 ### Steps to Reproduce ```yaml - hosts: localhost vars: list1: - a - A - b - B list2: - b tasks: - debug: msg: "{{ list1 | difference(list2) }}" - debug: msg: "{{ list1 | difference([]) }}" ``` ```shell ansible-playbook playbook.yml ``` ### Expected Results **All** the elements of `list1` that don’t exist in `list2` (as documented) ``` TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "b", "B" ] } ``` ### Actual Results ```console TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "b" ] } ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74255
https://github.com/ansible/ansible/pull/74256
e6a5245d6088894d56b8e0406f8ffed9a57046c3
8698855ffdb36c8e987d80911f1569c8b033e841
2021-04-13T15:44:58Z
python
2021-04-23T17:44:43Z
docs/docsite/rst/porting_guides/porting_guide_core_2.12.rst
.. _porting_2.12_guide: ************************** Ansible 2.12 Porting Guide ************************** This section discusses the behavioral changes between Ansible 2.11 and Ansible 2.12. It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible. We suggest you read this page along with `Ansible Changelog for 2.12 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.12.rst>`_ to understand what updates you may need to make. This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`. .. contents:: Topics Playbook ======== No notable changes Command Line ============ No notable changes Deprecated ========== * Python 2.6 on the target node is deprecated in this release. ``ansible-core`` 2.13 will remove support for Python 2.6. * Bare variables in conditionals: ``when`` conditionals no longer automatically parse string booleans such as ``"true"`` and ``"false"`` into actual booleans. Any variable containing a non-empty string is considered true. This was previously configurable with the ``CONDITIONAL_BARE_VARS`` configuration option (and the ``ANSIBLE_CONDITIONAL_BARE_VARS`` environment variable). This setting no longer has any effect. Users can work around the issue by using the ``|bool`` filter: .. code-block:: yaml vars: teardown: 'false' tasks: - include_tasks: teardown.yml when: teardown | bool - include_tasks: provision.yml when: not teardown | bool Modules ======= * ``cron`` now requires ``name`` to be specified in all cases. * ``cron`` no longer allows a ``reboot`` parameter. Use ``special_time: reboot`` instead. Modules removed --------------- The following modules no longer exist: * No notable changes Deprecation notices ------------------- No notable changes Noteworthy module changes ------------------------- No notable changes Plugins ======= No notable changes Porting custom scripts ====================== No notable changes Networking ========== No notable changes
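The conditional change described in the guide above boils down to ordinary Python/Jinja truthiness: any non-empty string is true, so `teardown: 'false'` no longer behaves like a boolean unless it is coerced. A small sketch follows, with `to_bool` as a hypothetical stand-in for what the `| bool` filter gives you, not Ansible's exact implementation.

```python
# Plain Python truthiness, which the 2.12 conditional change falls back on:
# any non-empty string is true, including the string 'false'.
print(bool("false"))  # True  (non-empty string)
print(bool(""))       # False (empty string)

# Hypothetical stand-in for the `| bool` coercion recommended above.
def to_bool(value):
    return str(value).strip().lower() in ("yes", "on", "1", "true")

print(to_bool("false"))  # False, the result playbooks should rely on
```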
closed
ansible/ansible
https://github.com/ansible/ansible
74255
difference filter unexpectedly performs case-insensitive comparison
### Summary When using the `difference` filter with elements whose values differ only in case, only one of them is kept. ### Issue Type Feature Request ### Component Name filters ### Ansible Version ```console $ ansible --version ansible 2.9.10 config file = None configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.8 (default, Mar 4 2021, 21:24:42) [GCC 10.2.0] ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment RHEL 7 ### Steps to Reproduce ```yaml - hosts: localhost vars: list1: - a - A - b - B list2: - b tasks: - debug: msg: "{{ list1 | difference(list2) }}" - debug: msg: "{{ list1 | difference([]) }}" ``` ```shell ansible-playbook playbook.yml ``` ### Expected Results **All** the elements of `list1` that don’t exist in `list2` (as documented) ``` TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "b", "B" ] } ``` ### Actual Results ```console TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "b" ] } ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74255
https://github.com/ansible/ansible/pull/74256
e6a5245d6088894d56b8e0406f8ffed9a57046c3
8698855ffdb36c8e987d80911f1569c8b033e841
2021-04-13T15:44:58Z
python
2021-04-23T17:44:43Z
lib/ansible/plugins/filter/mathstuff.py
# Copyright 2014, Brian Coca <[email protected]> # Copyright 2017, Ken Celenza <[email protected]> # Copyright 2017, Jason Edelman <[email protected]> # Copyright 2017, Ansible Project # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import itertools import math from jinja2.filters import environmentfilter from ansible.errors import AnsibleFilterError, AnsibleFilterTypeError from ansible.module_utils.common.text import formatters from ansible.module_utils.six import binary_type, text_type from ansible.module_utils.six.moves import zip, zip_longest from ansible.module_utils.common._collections_compat import Hashable, Mapping, Iterable from ansible.module_utils._text import to_native, to_text from ansible.utils.display import Display try: from jinja2.filters import do_unique HAS_UNIQUE = True except ImportError: HAS_UNIQUE = False try: from jinja2.filters import do_max, do_min HAS_MIN_MAX = True except ImportError: HAS_MIN_MAX = False display = Display() @environmentfilter def unique(environment, a, case_sensitive=False, attribute=None): def _do_fail(e): if case_sensitive or attribute: raise AnsibleFilterError("Jinja2's unique filter failed and we cannot fall back to Ansible's version " "as it does not support the parameters supplied", orig_exc=e) error = e = None try: if HAS_UNIQUE: c = list(do_unique(environment, a, case_sensitive=case_sensitive, attribute=attribute)) except TypeError as e: error = e _do_fail(e) except Exception as e: error = e _do_fail(e) display.warning('Falling back to Ansible unique filter as Jinja2 one failed: %s' % to_text(e)) if not HAS_UNIQUE or error: # handle Jinja2 specific attributes when using Ansible's version if case_sensitive or attribute: raise AnsibleFilterError("Ansible's unique filter does not support case_sensitive nor attribute parameters, " "you need a newer version of Jinja2 that provides their version of the filter.") c = [] for x in a: if x not in c: c.append(x) return c @environmentfilter def intersect(environment, a, b): if isinstance(a, Hashable) and isinstance(b, Hashable): c = set(a) & set(b) else: c = unique(environment, [x for x in a if x in b]) return c @environmentfilter def difference(environment, a, b): if isinstance(a, Hashable) and isinstance(b, Hashable): c = set(a) - set(b) else: c = unique(environment, [x for x in a if x not in b]) return c @environmentfilter def symmetric_difference(environment, a, b): if isinstance(a, Hashable) and isinstance(b, Hashable): c = set(a) ^ set(b) else: isect = intersect(environment, a, b) c = [x for x in union(environment, a, b) if x not in isect] return c @environmentfilter def union(environment, a, b): if isinstance(a, Hashable) and isinstance(b, Hashable): c = set(a) | set(b) else: c = unique(environment, a + b) return c @environmentfilter def min(environment, a, **kwargs): if HAS_MIN_MAX: 
return do_min(environment, a, **kwargs) else: if kwargs: raise AnsibleFilterError("Ansible's min filter does not support any keyword arguments. " "You need Jinja2 2.10 or later that provides their version of the filter.") _min = __builtins__.get('min') return _min(a) @environmentfilter def max(environment, a, **kwargs): if HAS_MIN_MAX: return do_max(environment, a, **kwargs) else: if kwargs: raise AnsibleFilterError("Ansible's max filter does not support any keyword arguments. " "You need Jinja2 2.10 or later that provides their version of the filter.") _max = __builtins__.get('max') return _max(a) def logarithm(x, base=math.e): try: if base == 10: return math.log10(x) else: return math.log(x, base) except TypeError as e: raise AnsibleFilterTypeError('log() can only be used on numbers: %s' % to_native(e)) def power(x, y): try: return math.pow(x, y) except TypeError as e: raise AnsibleFilterTypeError('pow() can only be used on numbers: %s' % to_native(e)) def inversepower(x, base=2): try: if base == 2: return math.sqrt(x) else: return math.pow(x, 1.0 / float(base)) except (ValueError, TypeError) as e: raise AnsibleFilterTypeError('root() can only be used on numbers: %s' % to_native(e)) def human_readable(size, isbits=False, unit=None): ''' Return a human readable string ''' try: return formatters.bytes_to_human(size, isbits, unit) except TypeError as e: raise AnsibleFilterTypeError("human_readable() failed on bad input: %s" % to_native(e)) except Exception: raise AnsibleFilterError("human_readable() can't interpret following string: %s" % size) def human_to_bytes(size, default_unit=None, isbits=False): ''' Return bytes count from a human readable string ''' try: return formatters.human_to_bytes(size, default_unit, isbits) except TypeError as e: raise AnsibleFilterTypeError("human_to_bytes() failed on bad input: %s" % to_native(e)) except Exception: raise AnsibleFilterError("human_to_bytes() can't interpret following string: %s" % size) def rekey_on_member(data, key, duplicates='error'): """ Rekey a dict of dicts on another member May also create a dict from a list of dicts. duplicates can be one of ``error`` or ``overwrite`` to specify whether to error out if the key value would be duplicated or to overwrite previous entries if that's the case. 
""" if duplicates not in ('error', 'overwrite'): raise AnsibleFilterError("duplicates parameter to rekey_on_member has unknown value: {0}".format(duplicates)) new_obj = {} if isinstance(data, Mapping): iterate_over = data.values() elif isinstance(data, Iterable) and not isinstance(data, (text_type, binary_type)): iterate_over = data else: raise AnsibleFilterTypeError("Type is not a valid list, set, or dict") for item in iterate_over: if not isinstance(item, Mapping): raise AnsibleFilterTypeError("List item is not a valid dict") try: key_elem = item[key] except KeyError: raise AnsibleFilterError("Key {0} was not found".format(key)) except TypeError as e: raise AnsibleFilterTypeError(to_native(e)) except Exception as e: raise AnsibleFilterError(to_native(e)) # Note: if new_obj[key_elem] exists it will always be a non-empty dict (it will at # minimum contain {key: key_elem} if new_obj.get(key_elem, None): if duplicates == 'error': raise AnsibleFilterError("Key {0} is not unique, cannot correctly turn into dict".format(key_elem)) elif duplicates == 'overwrite': new_obj[key_elem] = item else: new_obj[key_elem] = item return new_obj class FilterModule(object): ''' Ansible math jinja2 filters ''' def filters(self): filters = { # general math 'min': min, 'max': max, # exponents and logarithms 'log': logarithm, 'pow': power, 'root': inversepower, # set theory 'unique': unique, 'intersect': intersect, 'difference': difference, 'symmetric_difference': symmetric_difference, 'union': union, # combinatorial 'product': itertools.product, 'permutations': itertools.permutations, 'combinations': itertools.combinations, # computer theory 'human_readable': human_readable, 'human_to_bytes': human_to_bytes, 'rekey_on_member': rekey_on_member, # zip 'zip': zip, 'zip_longest': zip_longest, } return filters
closed
ansible/ansible
https://github.com/ansible/ansible
74255
difference filter unexpectedly performs case-insensitive comparison
### Summary When using the `difference` filter with elements whose values differ only in case, only one of them is kept. ### Issue Type Feature Request ### Component Name filters ### Ansible Version ```console $ ansible --version ansible 2.9.10 config file = None configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.8 (default, Mar 4 2021, 21:24:42) [GCC 10.2.0] ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment RHEL 7 ### Steps to Reproduce ```yaml - hosts: localhost vars: list1: - a - A - b - B list2: - b tasks: - debug: msg: "{{ list1 | difference(list2) }}" - debug: msg: "{{ list1 | difference([]) }}" ``` ```shell ansible-playbook playbook.yml ``` ### Expected Results **All** the elements of `list1` that don’t exist in `list2` (as documented) ``` TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "b", "B" ] } ``` ### Actual Results ```console TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "b" ] } ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74255
https://github.com/ansible/ansible/pull/74256
e6a5245d6088894d56b8e0406f8ffed9a57046c3
8698855ffdb36c8e987d80911f1569c8b033e841
2021-04-13T15:44:58Z
python
2021-04-23T17:44:43Z
test/integration/targets/filter_mathstuff/runme.sh
closed
ansible/ansible
https://github.com/ansible/ansible
74255
difference filter unexpectedly performs case-insensitive comparison
### Summary When using the `difference` filter with elements whose values differ only in case, only one of them is kept. ### Issue Type Feature Request ### Component Name filters ### Ansible Version ```console $ ansible --version ansible 2.9.10 config file = None configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.8 (default, Mar 4 2021, 21:24:42) [GCC 10.2.0] ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment RHEL 7 ### Steps to Reproduce ```yaml - hosts: localhost vars: list1: - a - A - b - B list2: - b tasks: - debug: msg: "{{ list1 | difference(list2) }}" - debug: msg: "{{ list1 | difference([]) }}" ``` ```shell ansible-playbook playbook.yml ``` ### Expected Results **All** the elements of `list1` that don’t exist in `list2` (as documented) ``` TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "b", "B" ] } ``` ### Actual Results ```console TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "b" ] } ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74255
https://github.com/ansible/ansible/pull/74256
e6a5245d6088894d56b8e0406f8ffed9a57046c3
8698855ffdb36c8e987d80911f1569c8b033e841
2021-04-13T15:44:58Z
python
2021-04-23T17:44:43Z
test/integration/targets/filter_mathstuff/runme.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74255
difference filter unexpectedly performs case-insensitive comparison
### Summary When using the `difference` filter with elements whose values differ only in case, only one of them is kept. ### Issue Type Feature Request ### Component Name filters ### Ansible Version ```console $ ansible --version ansible 2.9.10 config file = None configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.8 (default, Mar 4 2021, 21:24:42) [GCC 10.2.0] ``` ### Configuration ```console $ ansible-config dump --only-changed ``` ### OS / Environment RHEL 7 ### Steps to Reproduce ```yaml - hosts: localhost vars: list1: - a - A - b - B list2: - b tasks: - debug: msg: "{{ list1 | difference(list2) }}" - debug: msg: "{{ list1 | difference([]) }}" ``` ```shell ansible-playbook playbook.yml ``` ### Expected Results **All** the elements of `list1` that don’t exist in `list2` (as documented) ``` TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "A", "b", "B" ] } ``` ### Actual Results ```console TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "B" ] } TASK [debug] *********************** ok: [localhost] => { "msg": [ "a", "b" ] } ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74255
https://github.com/ansible/ansible/pull/74256
e6a5245d6088894d56b8e0406f8ffed9a57046c3
8698855ffdb36c8e987d80911f1569c8b033e841
2021-04-13T15:44:58Z
python
2021-04-23T17:44:43Z
test/integration/targets/filter_mathstuff/tasks/main.yml
- name: Verify unique's fallback's exception throwing for case_sensitive=True set_fact: unique_fallback_exc1: '{{ [{"foo": "bar", "moo": "cow"}]|unique(case_sensitive=True) }}' ignore_errors: true tags: unique register: unique_fallback_exc1_res - name: Verify unique's fallback's exception throwing for a Hashable thing that triggers TypeError set_fact: unique_fallback_exc2: '{{ True|unique }}' ignore_errors: true tags: unique register: unique_fallback_exc2_res - name: Verify unique tags: unique assert: that: - '[1,2,3,4,4,3,2,1]|unique == [1,2,3,4]' - '["a", "b", "a", "b"]|unique == ["a", "b"]' - '[{"foo": "bar", "moo": "cow"}, {"foo": "bar", "moo": "cow"}, {"haha": "bar", "moo": "mar"}]|unique == [{"foo": "bar", "moo": "cow"}, {"haha": "bar", "moo": "mar"}]' - '[{"foo": "bar", "moo": "cow"}, {"foo": "bar", "moo": "mar"}]|unique == [{"foo": "bar", "moo": "cow"}, {"foo": "bar", "moo": "mar"}]' - '{"foo": "bar", "moo": "cow"}|unique == ["foo", "moo"]' - '"foo"|unique|sort|join == "fo"' - '[1,2,3,4,5]|unique == [1,2,3,4,5]' - unique_fallback_exc1_res is failed - unique_fallback_exc2_res is failed - "\"'bool' object is not iterable\" in unique_fallback_exc2_res.msg" # `unique` will fall back to a custom implementation if the Jinja2 version is # too old to support `jinja2.filters.do_unique`. However, the built-in fallback # is quite different by default. Namely, it ignores the case-sensitivity # setting. This means running: # ['a', 'b', 'A', 'B']|unique # ... will give a different result for someone running Jinja 2.9 vs 2.10 when # do_unique was added. So here, we do a test to see if we have `do_unique`. If # we do, then we do another test to make sure attribute and case_sensitive # work on it. - name: Test for do_unique shell: "{{ansible_python_interpreter}} -c 'from jinja2 import filters; print(\"do_unique\" in dir(filters))'" tags: unique register: do_unique_res - name: Verify unique some more tags: unique assert: that: - '["a", "b", "A", "B"]|unique(case_sensitive=True) == ["a", "b", "A", "B"]' - '[{"foo": "bar", "moo": "cow"}, {"foo": "bar", "moo": "mar"}]|unique(attribute="foo") == [{"foo": "bar", "moo": "cow"}]' - '["a", "b", "A", "B"]|unique == ["a", "b"]' # defaults to case_sensitive=False - "'cannot fall back' in unique_fallback_exc1_res.msg" when: do_unique_res.stdout == 'True' - name: Verify unique some more tags: unique assert: that: - "'does not support case_sensitive' in unique_fallback_exc1_res.msg" when: do_unique_res.stdout == 'False' - name: Verify intersect tags: intersect assert: that: - '[1,2,3]|intersect([4,5,6]) == []' - '[1,2,3]|intersect([3,4,5,6]) == [3]' - '[1,2,3]|intersect([3,2,1]) == [1,2,3]' - '(1,2,3)|intersect((4,5,6))|list == []' - '(1,2,3)|intersect((3,4,5,6))|list == [3]' - name: Verify difference tags: difference assert: that: - '[1,2,3]|difference([4,5,6]) == [1,2,3]' - '[1,2,3]|difference([3,4,5,6]) == [1,2]' - '[1,2,3]|difference([3,2,1]) == []' - '(1,2,3)|difference((4,5,6))|list == [1,2,3]' - '(1,2,3)|difference((3,4,5,6))|list == [1,2]' - name: Verify symmetric_difference tags: symmetric_difference assert: that: - '[1,2,3]|symmetric_difference([4,5,6]) == [1,2,3,4,5,6]' - '[1,2,3]|symmetric_difference([3,4,5,6]) == [1,2,4,5,6]' - '[1,2,3]|symmetric_difference([3,2,1]) == []' - '(1,2,3)|symmetric_difference((4,5,6))|list == [1,2,3,4,5,6]' - '(1,2,3)|symmetric_difference((3,4,5,6))|list == [1,2,4,5,6]' - name: Verify union tags: union assert: that: - '[1,2,3]|union([4,5,6]) == [1,2,3,4,5,6]' - '[1,2,3]|union([3,4,5,6]) == [1,2,3,4,5,6]' - 
'[1,2,3]|union([3,2,1]) == [1,2,3]' - '(1,2,3)|union((4,5,6))|list == [1,2,3,4,5,6]' - '(1,2,3)|union((3,4,5,6))|list == [1,2,3,4,5,6]' - name: Verify min tags: min assert: that: - '[1000,-99]|min == -99' - '[0,4]|min == 0' - name: Verify max tags: max assert: that: - '[1000,-99]|max == 1000' - '[0,4]|max == 4' - name: Verify logarithm on a value of invalid type set_fact: logarithm_exc1: '{{ "yo"|log }}' ignore_errors: true tags: logarithm register: logarithm_exc1_res - name: Verify logarithm (which is passed to Jinja as "log" because consistency is boring) tags: logarithm assert: that: - '1|log == 0.0' - '100|log(10) == 2.0' - '100|log(10) == 2.0' - '21|log(21) == 1.0' - '(2.3|log(42)|string).startswith("0.222841")' - '(21|log(42)|string).startswith("0.814550")' - logarithm_exc1_res is failed - '"can only be used on numbers" in logarithm_exc1_res.msg' - name: Verify power on a value of invalid type set_fact: power_exc1: '{{ "yo"|pow(4) }}' ignore_errors: true tags: power register: power_exc1_res - name: Verify power (which is passed to Jinja as "pow" because consistency is boring) tags: power assert: that: - '2|pow(4) == 16.0' - power_exc1_res is failed - '"can only be used on numbers" in power_exc1_res.msg' - name: Verify inversepower on a value of invalid type set_fact: inversepower_exc1: '{{ "yo"|root }}' ignore_errors: true tags: inversepower register: inversepower_exc1_res - name: Verify inversepower (which is passed to Jinja as "root" because consistency is boring) tags: inversepower assert: that: - '4|root == 2.0' - '4|root(2) == 2.0' - '9|root(1) == 9.0' - '(9|root(6)|string).startswith("1.4422495")' - inversepower_exc1_res is failed - '"can only be used on numbers" in inversepower_exc1_res.msg' - name: Verify human_readable on invalid input set_fact: human_readable_exc1: '{{ "monkeys"|human_readable }}' ignore_errors: true tags: human_readable register: human_readable_exc1_res - name: Verify human_readable tags: human_readable assert: that: - '"1.00 Bytes" == 1|human_readable' - '"1.00 bits" == 1|human_readable(isbits=True)' - '"10.00 KB" == 10240|human_readable' - '"97.66 MB" == 102400000|human_readable' - '"0.10 GB" == 102400000|human_readable(unit="G")' - '"0.10 Gb" == 102400000|human_readable(isbits=True, unit="G")' - human_readable_exc1_res is failed - '"failed on bad input" in human_readable_exc1_res.msg' - name: Verify human_to_bytes tags: human_to_bytes assert: that: - "{{'0'|human_to_bytes}} == 0" - "{{'0.1'|human_to_bytes}} == 0" - "{{'0.9'|human_to_bytes}} == 1" - "{{'1'|human_to_bytes}} == 1" - "{{'10.00 KB'|human_to_bytes}} == 10240" - "{{ '11 MB'|human_to_bytes}} == 11534336" - "{{ '1.1 GB'|human_to_bytes}} == 1181116006" - "{{'10.00 Kb'|human_to_bytes(isbits=True)}} == 10240" - name: Verify human_to_bytes (bad string) set_fact: bad_string: "{{ '10.00 foo' | human_to_bytes }}" ignore_errors: yes tags: human_to_bytes register: _human_bytes_test - name: Verify human_to_bytes (bad string) tags: human_to_bytes assert: that: "{{_human_bytes_test.failed}}" - name: Verify that union can be chained tags: union vars: unions: '{{ [1,2,3]|union([4,5])|union([6,7]) }}' assert: that: - "unions|type_debug == 'list'" - "unions|length == 7" - name: Test union with unhashable item tags: union vars: unions: '{{ [1,2,3]|union([{}]) }}' assert: that: - "unions|type_debug == 'list'" - "unions|length == 4" - name: Verify rekey_on_member with invalid "duplicates" kwarg set_fact: rekey_on_member_exc1: '{{ []|rekey_on_member("asdf", duplicates="boo") }}' ignore_errors: true tags: 
rekey_on_member register: rekey_on_member_exc1_res - name: Verify rekey_on_member with invalid data set_fact: rekey_on_member_exc2: '{{ "minkeys"|rekey_on_member("asdf") }}' ignore_errors: true tags: rekey_on_member register: rekey_on_member_exc2_res - name: Verify rekey_on_member with partially invalid data (list item is not dict) set_fact: rekey_on_member_exc3: '{{ [True]|rekey_on_member("asdf") }}' ignore_errors: true tags: rekey_on_member register: rekey_on_member_exc3_res - name: Verify rekey_on_member with partially invalid data (key not in all dicts) set_fact: rekey_on_member_exc4: '{{ [{"foo": "bar", "baz": "buzz"}, {"hello": 8, "different": "haha"}]|rekey_on_member("foo") }}' ignore_errors: true tags: rekey_on_member register: rekey_on_member_exc4_res - name: Verify rekey_on_member with duplicates and duplicates=error set_fact: rekey_on_member_exc5: '{{ [{"proto": "eigrp", "state": "enabled"}, {"proto": "eigrp", "state": "enabled"}]|rekey_on_member("proto", duplicates="error") }}' ignore_errors: true tags: rekey_on_member register: rekey_on_member_exc5_res - name: Verify rekey_on_member tags: rekey_on_member assert: that: - rekey_on_member_exc1_res is failed - '"duplicates parameter to rekey_on_member has unknown value" in rekey_on_member_exc1_res.msg' - '[{"proto": "eigrp", "state": "enabled"}, {"proto": "ospf", "state": "enabled"}]|rekey_on_member("proto") == {"eigrp": {"proto": "eigrp", "state": "enabled"}, "ospf": {"proto": "ospf", "state": "enabled"}}' - '{"a": {"proto": "eigrp", "state": "enabled"}, "b": {"proto": "ospf", "state": "enabled"}}|rekey_on_member("proto") == {"eigrp": {"proto": "eigrp", "state": "enabled"}, "ospf": {"proto": "ospf", "state": "enabled"}}' - '[{"proto": "eigrp", "state": "enabled"}, {"proto": "eigrp", "state": "enabled"}]|rekey_on_member("proto", duplicates="overwrite") == {"eigrp": {"proto": "eigrp", "state": "enabled"}}' - rekey_on_member_exc2_res is failed - '"Type is not a valid list, set, or dict" in rekey_on_member_exc2_res.msg' - rekey_on_member_exc3_res is failed - '"List item is not a valid dict" in rekey_on_member_exc3_res.msg' - rekey_on_member_exc4_res is failed - '"was not found" in rekey_on_member_exc4_res.msg' - rekey_on_member_exc5_res is failed - '"is not unique, cannot correctly turn into dict" in rekey_on_member_exc5_res.msg' # TODO: For some reason, the coverage tool isn't accounting for the last test # so add another "last test" to fake it... - assert: that: - true
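The `rekey_on_member` assertions above map directly onto the filter's contract in `mathstuff.py`: build a dict keyed on one member of each item, and either error or overwrite on duplicate keys. A compact standalone sketch of that contract (not the module's code):

```python
# Standalone sketch of the rekey_on_member behaviour exercised by the tests
# above; illustrative, not the filter's actual implementation.
def rekey_on_member(items, key, duplicates="error"):
    out = {}
    for item in items:
        k = item[key]
        if k in out and duplicates == "error":
            raise ValueError("Key %s is not unique" % k)
        out[k] = item  # duplicates == 'overwrite' keeps the last item seen
    return out

routes = [{"proto": "eigrp", "state": "enabled"},
          {"proto": "ospf", "state": "enabled"}]
print(rekey_on_member(routes, "proto"))
# {'eigrp': {'proto': 'eigrp', ...}, 'ospf': {'proto': 'ospf', ...}}
```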
closed
ansible/ansible
https://github.com/ansible/ansible
73985
unarchive out of memory when file size exceeds available memory
### Summary if the archive is larger than the available memory, unarchive fails citing an out of memory condition. ### Issue Type Bug Report ### Component Name unarchive ### Ansible Version ```console (paste below) $ ansible --version ansible 2.10.4 config file = None configured module search path = ['/Users/redacted/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/2.10.5/libexec/lib/python3.9/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.9.1 (default, Jan 8 2021, 17:17:43) [Clang 12.0.0 (clang-1200.0.32.28)] ``` ### Configuration ```console (paste below) $ ansible-config dump --only-changed ``` blank ### OS / Environment target OS: ``` ~$ cat /etc/system-release CentOS Linux release 7.6.1810 (Core) ``` ### Steps to Reproduce Description: attempt to unarchive a file that is larger than the available system memory. In my case, the file is 3+GB, system memory is limited to 2GB <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - name: unzip unarchive: src: "/tmp/{{ glassboxzip }}" dest: "/tmp/{{ glassboxdir }}" remote_src: true keep_newer: true ``` ### Expected Results unarchive to complete successfully. It should not load the entire archive into memory if that will exceed the available memory. instead, it should stream the archive, ie. "read it byte per byte and simultaneously write it back byte per byte" ### Actual Results ```console (paste below) TASK [unzip] ************************************************************************************************** task path: /Users/redacted/Git/r00tedvw/glass/ansible/cligate.yml:84 <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'/root\n', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018 `" && echo ansible-tmp-1616299064.022249-69807-58584451384018="` echo /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018 `" ) && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'ansible-tmp-1616299064.022249-69807-58584451384018=/root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018\n', b'') Using module file /usr/local/Cellar/ansible/2.10.5/libexec/lib/python3.9/site-packages/ansible/modules/stat.py <ncwv-cligate01.r00tedvw.local> PUT /Users/redacted/.ansible/tmp/ansible-local-69318vtfyzdif/tmpcatfrd8_ TO /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_stat.py <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 sftp -o BatchMode=no -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b '[ncwv-cligate01.r00tedvw.local]' 
<ncwv-cligate01.r00tedvw.local> (0, b'sftp> put /Users/redacted/.ansible/tmp/ansible-local-69318vtfyzdif/tmpcatfrd8_ /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_stat.py\n', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/ /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_stat.py && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b -tt ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_stat.py && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": true, "path": "/tmp/{\'msg\': \'Glassbox.Full.6.3.120B92B84\', \'failed\': False, \'changed\': False}", "get_md5": false, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "unknown", "uid": 0, "exists": true, "attr_flags": "", "woth": false, "isreg": false, "device_type": 0, "mtime": 1616295953.4148412, "block_size": 4096, "inode": 17685705, "isgid": false, "size": 6, "executable": true, "isuid": false, "readable": true, "version": "1278286959", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "unknown", "blocks": 0, "xoth": false, "islnk": false, "nlink": 2, "issock": false, "rgrp": false, "gr_name": "root", "path": "/tmp/{\'msg\': \'Glassbox.Full.6.3.120B92B84\', \'failed\': False, \'changed\': False}", "xusr": true, "atime": 1616295953.4148412, "isdir": true, "ctime": 1616295953.4188411, "isblk": false, "wgrp": false, "xgrp": false, "dev": 64768, "roth": false, "isfifo": false, "mode": "0700", "rusr": true, "attributes": []}, "changed": false}\r\n', b'Shared connection to ncwv-cligate01.r00tedvw.local closed.\r\n') Using module file /usr/local/Cellar/ansible/2.10.5/libexec/lib/python3.9/site-packages/ansible/modules/unarchive.py <ncwv-cligate01.r00tedvw.local> PUT /Users/redacted/.ansible/tmp/ansible-local-69318vtfyzdif/tmpdb_8d38l TO /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 sftp -o BatchMode=no -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b '[ncwv-cligate01.r00tedvw.local]' <ncwv-cligate01.r00tedvw.local> (0, b'sftp> put /Users/redacted/.ansible/tmp/ansible-local-69318vtfyzdif/tmpdb_8d38l /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py\n', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o 
ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/ /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b -tt ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (137, b'/bin/sh: line 1: 1977 Killed /usr/bin/python /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py\r\n', b'Shared connection to ncwv-cligate01.r00tedvw.local closed.\r\n') <ncwv-cligate01.r00tedvw.local> Failed to connect to the host via ssh: Shared connection to ncwv-cligate01.r00tedvw.local closed. <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/ > /dev/null 2>&1 && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'', b'') fatal: [ncwv-cligate01.r00tedvw.local]: FAILED! => { "changed": false, "module_stderr": "Shared connection to ncwv-cligate01.r00tedvw.local closed.\r\n", "module_stdout": "/bin/sh: line 1: 1977 Killed /usr/bin/python /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 137 } ``` From /var/log/messages ``` Mar 20 23:50:55 ncwv-cligate01 ansible-get_url: Invoked with force=False owner=None client_key=None group=None use_proxy=True unsafe_writes=False setype=None validate_c erts=True serole=None client_cert=None url_username=ansible_sa dest=/tmp selevel=None force_basic_auth=False sha256sum= http_agent=ansible-httpget url_password=NOT_LOGG ING_PARAMETER url=http://ncwv-artifactory01.r00tedvw.local:8081/artifactory/generic-local/Glassbox.Full.6.3.120B92B84.zip checksum= seuser=None headers=None mode=None t imeout=180 attributes=None backup=False tmp_dest=None Mar 20 23:57:43 ncwv-cligate01 ansible-file: Invoked with src=None force=False setype=None _original_basename=None unsafe_writes=False selevel=None seuser=None recurse= False state=directory _diff_peek=None modification_time=None serole=None follow=True access_time_format=%Y%m%d%H%M.%S modification_time_format=%Y%m%d%H%M.%S access_time =None owner=root group=None path=/tmp/{'msg': 'Glassbox.Full.6.3.120B92B84', 'failed': False, 'changed': False} attributes=None mode=0700 Mar 20 23:57:44 ncwv-cligate01 ansible-ansible.legacy.stat: Invoked with checksum_algorithm=sha1 get_checksum=True follow=True path=/tmp/{'msg': 'Glassbox.Full.6.3.120B 92B84', 'failed': False, 'changed': False} get_md5=False get_mime=True get_attributes=True Mar 20 23:57:45 ncwv-cligate01 ansible-ansible.legacy.unarchive: Invoked with src=/tmp/Glassbox.Full.6.3.120B92B84.zip seuser=None 
group=None remote_src=True dest=/tmp/ {'msg': 'Glassbox.Full.6.3.120B92B84', 'failed': False, 'changed': False} selevel=None list_files=False keep_newer=True serole=None creates=None unsafe_writes=False set ype=None mode=None exclude=[] owner=None extra_opts=[] attributes=None validate_certs=True Mar 21 00:04:18 ncwv-cligate01 systemd: Failed to start Session 47 of user root. Mar 21 00:04:40 ncwv-cligate01 kernel: gmain invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0 Mar 21 00:04:40 ncwv-cligate01 kernel: gmain cpuset=/ mems_allowed=0 Mar 21 00:04:40 ncwv-cligate01 kernel: CPU: 0 PID: 5567 Comm: gmain Kdump: loaded Not tainted 3.10.0-957.el7.x86_64 #1 Mar 21 00:04:40 ncwv-cligate01 kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018 Mar 21 00:04:40 ncwv-cligate01 kernel: Call Trace: Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef61dc1>] dump_stack+0x19/0x1b Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef5c7ea>] dump_header+0x90/0x229 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8eb0095b>] ? cred_has_capability+0x6b/0x120 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9ba274>] oom_kill_process+0x254/0x3d0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8eb00a3e>] ? selinux_capable+0x2e/0x40 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9baab6>] out_of_memory+0x4b6/0x4f0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef5d2ee>] __alloc_pages_slowpath+0x5d6/0x724 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9c0e95>] __alloc_pages_nodemask+0x405/0x420 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ea0dcf8>] alloc_pages_current+0x98/0x110 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9b60d7>] __page_cache_alloc+0x97/0xb0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9b8d38>] filemap_fault+0x298/0x490 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffffc0623d0e>] __xfs_filemap_fault+0x7e/0x1d0 [xfs] Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ea4ce02>] ? __inode_permission+0x52/0xd0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffffc0623f0c>] xfs_filemap_fault+0x2c/0x30 [xfs] Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9e41da>] __do_fault.isra.59+0x8a/0x100 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9e478c>] do_read_fault.isra.61+0x4c/0x1b0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9e9134>] handle_pte_fault+0x2f4/0xd10 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ea1bcb5>] ? kmem_cache_alloc+0x35/0x1f0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ea52b2f>] ? getname_flags+0x4f/0x1a0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9ebc6d>] handle_mm_fault+0x39d/0x9b0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef6f5e3>] __do_page_fault+0x203/0x500 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef6f915>] do_page_fault+0x35/0x90 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef6ba96>] ? 
error_swapgs+0xa7/0xbd Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef6b758>] page_fault+0x28/0x30 Mar 21 00:04:40 ncwv-cligate01 kernel: Mem-Info: Mar 21 00:04:40 ncwv-cligate01 kernel: active_anon:317864 inactive_anon:106802 isolated_anon:0#012 active_file:116 inactive_file:1547 isolated_file:0#012 unevictable:0 dirty:0 writeback:5 unstable:0#012 slab_reclaimable:5386 slab_unreclaimable:7593#012 mapped:246 shmem:203 pagetables:2982 bounce:0#012 free:13073 free_pcp:62 free_cma:0 Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 DMA free:7652kB min:380kB low:472kB high:568kB active_anon:3068kB inactive_anon:4336kB active_file:0kB inactive_file:100kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:44kB shmem:44kB slab_reclaimable:132kB slab_unreclaimable:188kB kernel_stack:0kB pagetables:48kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:22 all_unrecl aimable? yes Mar 21 00:04:40 ncwv-cligate01 kernel: lowmem_reserve[]: 0 1819 1819 1819 Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 DMA32 free:44640kB min:44672kB low:55840kB high:67008kB active_anon:1268388kB inactive_anon:422872kB active_file:464kB ina ctive_file:6088kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2080640kB managed:1866652kB mlocked:0kB dirty:0kB writeback:20kB mapped:940kB shmem:768k B slab_reclaimable:21412kB slab_unreclaimable:30184kB kernel_stack:2816kB pagetables:11880kB unstable:0kB bounce:0kB free_pcp:248kB local_pcp:248kB free_cma:0kB writeba ck_tmp:0kB pages_scanned:1024 all_unreclaimable? yes Mar 21 00:04:40 ncwv-cligate01 kernel: lowmem_reserve[]: 0 0 0 0 Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 DMA: 7*4kB (UM) 17*8kB (UM) 14*16kB (U) 11*32kB (UM) 4*64kB (U) 14*128kB (UM) 11*256kB (UM) 2*512kB (M) 1*1024kB (U) 0*204 8kB 0*4096kB = 7652kB Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 DMA32: 670*4kB (UE) 887*8kB (UEM) 539*16kB (UE) 284*32kB (UE) 112*64kB (UEM) 36*128kB (UEM) 1*256kB (M) 0*512kB 3*1024kB ( M) 1*2048kB (M) 0*4096kB = 44640kB Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Mar 21 00:04:40 ncwv-cligate01 kernel: 2632 total pagecache pages Mar 21 00:04:40 ncwv-cligate01 kernel: 752 pages in swap cache Mar 21 00:04:40 ncwv-cligate01 kernel: Swap cache stats: add 1563807, delete 1563055, find 22310/24310 Mar 21 00:04:40 ncwv-cligate01 kernel: Free swap = 0kB Mar 21 00:04:40 ncwv-cligate01 kernel: Total swap = 2097148kB Mar 21 00:04:40 ncwv-cligate01 kernel: 524158 pages RAM Mar 21 00:04:40 ncwv-cligate01 kernel: 0 pages HighMem/MovableOnly Mar 21 00:04:40 ncwv-cligate01 kernel: 53518 pages reserved Mar 21 00:04:40 ncwv-cligate01 kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name Mar 21 00:04:40 ncwv-cligate01 kernel: [ 2914] 0 2914 10052 204 24 60 0 systemd-journal Mar 21 00:04:40 ncwv-cligate01 kernel: [ 2935] 0 2935 31837 0 28 403 0 lvmetad Mar 21 00:04:40 ncwv-cligate01 kernel: [ 2942] 0 2942 11953 1 25 615 -1000 systemd-udevd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5455] 0 5455 15511 19 28 137 -1000 auditd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5481] 999 5481 153242 88 64 2455 0 polkitd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5483] 0 5483 6594 43 19 36 0 systemd-logind Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5486] 81 5486 16617 91 33 99 -900 dbus-daemon Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5489] 998 5489 29446 0 31 116 0 chronyd Mar 21 00:04:40 
ncwv-cligate01 kernel: [ 5529] 0 5529 31571 25 20 133 0 crond Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5538] 0 5538 24140 1 51 166 0 login Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5559] 0 5559 137374 145 87 898 0 NetworkManager Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5857] 0 5857 26839 0 54 506 0 dhclient Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6052] 0 6052 54102 82 40 145 0 rsyslogd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6053] 0 6053 143457 86 96 2684 0 tuned Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6054] 0 6054 28189 1 57 256 -1000 sshd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6532] 0 6532 22386 8 43 251 0 master Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6546] 89 6546 22429 0 45 256 0 qmgr Mar 21 00:04:40 ncwv-cligate01 kernel: [23455] 0 23455 28859 0 15 96 0 bash Mar 21 00:04:40 ncwv-cligate01 kernel: [23471] 0 23471 39805 0 80 479 0 sshd Mar 21 00:04:40 ncwv-cligate01 kernel: [23475] 0 23475 28860 1 14 98 0 bash Mar 21 00:04:40 ncwv-cligate01 kernel: [23653] 0 23653 90526 249 95 6319 0 firewalld Mar 21 00:04:40 ncwv-cligate01 kernel: [31874] 89 31874 22412 1 44 251 0 pickup Mar 21 00:04:40 ncwv-cligate01 kernel: [31936] 0 31936 39787 1 81 427 0 sshd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 1966] 0 1966 28294 0 13 48 0 sh Mar 21 00:04:40 ncwv-cligate01 kernel: [ 1977] 0 1977 979976 422442 1872 505084 0 python Mar 21 00:04:40 ncwv-cligate01 kernel: Out of memory: Kill process 1977 (python) score 906 or sacrifice child Mar 21 00:04:40 ncwv-cligate01 kernel: Killed process 1977 (python) total-vm:3919904kB, anon-rss:1689704kB, file-rss:64kB, shmem-rss:0kB Mar 21 00:05:41 ncwv-cligate01 systemd-logind: Removed session 46. ```
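The fix direction the reporter suggests, streaming instead of buffering, is straightforward with the standard library: `zipfile.ZipFile.open()` yields a file-like object per member, and `shutil.copyfileobj()` copies it in fixed-size chunks, so peak memory stays near the buffer size regardless of archive size. A hedged sketch of that idea follows; it is illustrative only, not the `unarchive` module's implementation, and real code would also sanitize member paths against `..` traversal.

```python
# Stream a zip archive to disk in fixed-size chunks instead of reading it
# into memory. Illustrative only; not the unarchive module's code.
import os
import shutil
import zipfile

def stream_unzip(src, dest, chunk_size=64 * 1024):
    with zipfile.ZipFile(src) as zf:
        for member in zf.infolist():
            target = os.path.join(dest, member.filename)
            if member.is_dir():
                os.makedirs(target, exist_ok=True)
                continue
            os.makedirs(os.path.dirname(target) or dest, exist_ok=True)
            # zf.open() returns a file-like object; copyfileobj() copies it
            # chunk by chunk, keeping memory use at ~chunk_size.
            with zf.open(member) as src_f, open(target, "wb") as dst_f:
                shutil.copyfileobj(src_f, dst_f, chunk_size)

# Usage: stream_unzip("/tmp/Glassbox.zip", "/tmp/out")
```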
https://github.com/ansible/ansible/issues/73985
https://github.com/ansible/ansible/pull/74094
ecc5a53288993247a808b4190e426ae3ae0b994d
68bdfd005200030ae9668d3a785ed9c10344bee6
2021-03-21T04:13:04Z
python
2021-04-26T12:06:10Z
changelogs/fragments/73985-let-unarchive-handle-huge-files.yml
closed
ansible/ansible
https://github.com/ansible/ansible
73985
unarchive out of memory when file size exceeds available memory
### Summary if the archive is larger than the available memory, unarchive fails citing an out of memory condition. ### Issue Type Bug Report ### Component Name unarchive ### Ansible Version ```console (paste below) $ ansible --version ansible 2.10.4 config file = None configured module search path = ['/Users/redacted/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/2.10.5/libexec/lib/python3.9/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.9.1 (default, Jan 8 2021, 17:17:43) [Clang 12.0.0 (clang-1200.0.32.28)] ``` ### Configuration ```console (paste below) $ ansible-config dump --only-changed ``` blank ### OS / Environment target OS: ``` ~$ cat /etc/system-release CentOS Linux release 7.6.1810 (Core) ``` ### Steps to Reproduce Description: attempt to unarchive a file that is larger than the available system memory. In my case, the file is 3+GB, system memory is limited to 2GB <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) - name: unzip unarchive: src: "/tmp/{{ glassboxzip }}" dest: "/tmp/{{ glassboxdir }}" remote_src: true keep_newer: true ``` ### Expected Results unarchive to complete successfully. It should not load the entire archive into memory if that will exceed the available memory. instead, it should stream the archive, ie. "read it byte per byte and simultaneously write it back byte per byte" ### Actual Results ```console (paste below) TASK [unzip] ************************************************************************************************** task path: /Users/redacted/Git/r00tedvw/glass/ansible/cligate.yml:84 <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'/root\n', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018 `" && echo ansible-tmp-1616299064.022249-69807-58584451384018="` echo /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018 `" ) && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'ansible-tmp-1616299064.022249-69807-58584451384018=/root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018\n', b'') Using module file /usr/local/Cellar/ansible/2.10.5/libexec/lib/python3.9/site-packages/ansible/modules/stat.py <ncwv-cligate01.r00tedvw.local> PUT /Users/redacted/.ansible/tmp/ansible-local-69318vtfyzdif/tmpcatfrd8_ TO /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_stat.py <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 sftp -o BatchMode=no -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b '[ncwv-cligate01.r00tedvw.local]' 
<ncwv-cligate01.r00tedvw.local> (0, b'sftp> put /Users/redacted/.ansible/tmp/ansible-local-69318vtfyzdif/tmpcatfrd8_ /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_stat.py\n', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/ /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_stat.py && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b -tt ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_stat.py && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": true, "path": "/tmp/{\'msg\': \'Glassbox.Full.6.3.120B92B84\', \'failed\': False, \'changed\': False}", "get_md5": false, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "unknown", "uid": 0, "exists": true, "attr_flags": "", "woth": false, "isreg": false, "device_type": 0, "mtime": 1616295953.4148412, "block_size": 4096, "inode": 17685705, "isgid": false, "size": 6, "executable": true, "isuid": false, "readable": true, "version": "1278286959", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "unknown", "blocks": 0, "xoth": false, "islnk": false, "nlink": 2, "issock": false, "rgrp": false, "gr_name": "root", "path": "/tmp/{\'msg\': \'Glassbox.Full.6.3.120B92B84\', \'failed\': False, \'changed\': False}", "xusr": true, "atime": 1616295953.4148412, "isdir": true, "ctime": 1616295953.4188411, "isblk": false, "wgrp": false, "xgrp": false, "dev": 64768, "roth": false, "isfifo": false, "mode": "0700", "rusr": true, "attributes": []}, "changed": false}\r\n', b'Shared connection to ncwv-cligate01.r00tedvw.local closed.\r\n') Using module file /usr/local/Cellar/ansible/2.10.5/libexec/lib/python3.9/site-packages/ansible/modules/unarchive.py <ncwv-cligate01.r00tedvw.local> PUT /Users/redacted/.ansible/tmp/ansible-local-69318vtfyzdif/tmpdb_8d38l TO /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 sftp -o BatchMode=no -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b '[ncwv-cligate01.r00tedvw.local]' <ncwv-cligate01.r00tedvw.local> (0, b'sftp> put /Users/redacted/.ansible/tmp/ansible-local-69318vtfyzdif/tmpdb_8d38l /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py\n', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o 
ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/ /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'', b'') <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b -tt ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (137, b'/bin/sh: line 1: 1977 Killed /usr/bin/python /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py\r\n', b'Shared connection to ncwv-cligate01.r00tedvw.local closed.\r\n') <ncwv-cligate01.r00tedvw.local> Failed to connect to the host via ssh: Shared connection to ncwv-cligate01.r00tedvw.local closed. <ncwv-cligate01.r00tedvw.local> ESTABLISH SSH CONNECTION FOR USER: root <ncwv-cligate01.r00tedvw.local> SSH: EXEC sshpass -d52 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/redacted/.ansible/cp/f06377db2b ncwv-cligate01.r00tedvw.local '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/ > /dev/null 2>&1 && sleep 0'"'"'' <ncwv-cligate01.r00tedvw.local> (0, b'', b'') fatal: [ncwv-cligate01.r00tedvw.local]: FAILED! => { "changed": false, "module_stderr": "Shared connection to ncwv-cligate01.r00tedvw.local closed.\r\n", "module_stdout": "/bin/sh: line 1: 1977 Killed /usr/bin/python /root/.ansible/tmp/ansible-tmp-1616299064.022249-69807-58584451384018/AnsiballZ_unarchive.py\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 137 } ``` From /var/log/messages ``` Mar 20 23:50:55 ncwv-cligate01 ansible-get_url: Invoked with force=False owner=None client_key=None group=None use_proxy=True unsafe_writes=False setype=None validate_c erts=True serole=None client_cert=None url_username=ansible_sa dest=/tmp selevel=None force_basic_auth=False sha256sum= http_agent=ansible-httpget url_password=NOT_LOGG ING_PARAMETER url=http://ncwv-artifactory01.r00tedvw.local:8081/artifactory/generic-local/Glassbox.Full.6.3.120B92B84.zip checksum= seuser=None headers=None mode=None t imeout=180 attributes=None backup=False tmp_dest=None Mar 20 23:57:43 ncwv-cligate01 ansible-file: Invoked with src=None force=False setype=None _original_basename=None unsafe_writes=False selevel=None seuser=None recurse= False state=directory _diff_peek=None modification_time=None serole=None follow=True access_time_format=%Y%m%d%H%M.%S modification_time_format=%Y%m%d%H%M.%S access_time =None owner=root group=None path=/tmp/{'msg': 'Glassbox.Full.6.3.120B92B84', 'failed': False, 'changed': False} attributes=None mode=0700 Mar 20 23:57:44 ncwv-cligate01 ansible-ansible.legacy.stat: Invoked with checksum_algorithm=sha1 get_checksum=True follow=True path=/tmp/{'msg': 'Glassbox.Full.6.3.120B 92B84', 'failed': False, 'changed': False} get_md5=False get_mime=True get_attributes=True Mar 20 23:57:45 ncwv-cligate01 ansible-ansible.legacy.unarchive: Invoked with src=/tmp/Glassbox.Full.6.3.120B92B84.zip seuser=None 
group=None remote_src=True dest=/tmp/ {'msg': 'Glassbox.Full.6.3.120B92B84', 'failed': False, 'changed': False} selevel=None list_files=False keep_newer=True serole=None creates=None unsafe_writes=False set ype=None mode=None exclude=[] owner=None extra_opts=[] attributes=None validate_certs=True Mar 21 00:04:18 ncwv-cligate01 systemd: Failed to start Session 47 of user root. Mar 21 00:04:40 ncwv-cligate01 kernel: gmain invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0 Mar 21 00:04:40 ncwv-cligate01 kernel: gmain cpuset=/ mems_allowed=0 Mar 21 00:04:40 ncwv-cligate01 kernel: CPU: 0 PID: 5567 Comm: gmain Kdump: loaded Not tainted 3.10.0-957.el7.x86_64 #1 Mar 21 00:04:40 ncwv-cligate01 kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018 Mar 21 00:04:40 ncwv-cligate01 kernel: Call Trace: Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef61dc1>] dump_stack+0x19/0x1b Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef5c7ea>] dump_header+0x90/0x229 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8eb0095b>] ? cred_has_capability+0x6b/0x120 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9ba274>] oom_kill_process+0x254/0x3d0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8eb00a3e>] ? selinux_capable+0x2e/0x40 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9baab6>] out_of_memory+0x4b6/0x4f0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef5d2ee>] __alloc_pages_slowpath+0x5d6/0x724 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9c0e95>] __alloc_pages_nodemask+0x405/0x420 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ea0dcf8>] alloc_pages_current+0x98/0x110 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9b60d7>] __page_cache_alloc+0x97/0xb0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9b8d38>] filemap_fault+0x298/0x490 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffffc0623d0e>] __xfs_filemap_fault+0x7e/0x1d0 [xfs] Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ea4ce02>] ? __inode_permission+0x52/0xd0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffffc0623f0c>] xfs_filemap_fault+0x2c/0x30 [xfs] Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9e41da>] __do_fault.isra.59+0x8a/0x100 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9e478c>] do_read_fault.isra.61+0x4c/0x1b0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9e9134>] handle_pte_fault+0x2f4/0xd10 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ea1bcb5>] ? kmem_cache_alloc+0x35/0x1f0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ea52b2f>] ? getname_flags+0x4f/0x1a0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8e9ebc6d>] handle_mm_fault+0x39d/0x9b0 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef6f5e3>] __do_page_fault+0x203/0x500 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef6f915>] do_page_fault+0x35/0x90 Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef6ba96>] ? 
error_swapgs+0xa7/0xbd Mar 21 00:04:40 ncwv-cligate01 kernel: [<ffffffff8ef6b758>] page_fault+0x28/0x30 Mar 21 00:04:40 ncwv-cligate01 kernel: Mem-Info: Mar 21 00:04:40 ncwv-cligate01 kernel: active_anon:317864 inactive_anon:106802 isolated_anon:0#012 active_file:116 inactive_file:1547 isolated_file:0#012 unevictable:0 dirty:0 writeback:5 unstable:0#012 slab_reclaimable:5386 slab_unreclaimable:7593#012 mapped:246 shmem:203 pagetables:2982 bounce:0#012 free:13073 free_pcp:62 free_cma:0 Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 DMA free:7652kB min:380kB low:472kB high:568kB active_anon:3068kB inactive_anon:4336kB active_file:0kB inactive_file:100kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:44kB shmem:44kB slab_reclaimable:132kB slab_unreclaimable:188kB kernel_stack:0kB pagetables:48kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:22 all_unrecl aimable? yes Mar 21 00:04:40 ncwv-cligate01 kernel: lowmem_reserve[]: 0 1819 1819 1819 Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 DMA32 free:44640kB min:44672kB low:55840kB high:67008kB active_anon:1268388kB inactive_anon:422872kB active_file:464kB ina ctive_file:6088kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2080640kB managed:1866652kB mlocked:0kB dirty:0kB writeback:20kB mapped:940kB shmem:768k B slab_reclaimable:21412kB slab_unreclaimable:30184kB kernel_stack:2816kB pagetables:11880kB unstable:0kB bounce:0kB free_pcp:248kB local_pcp:248kB free_cma:0kB writeba ck_tmp:0kB pages_scanned:1024 all_unreclaimable? yes Mar 21 00:04:40 ncwv-cligate01 kernel: lowmem_reserve[]: 0 0 0 0 Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 DMA: 7*4kB (UM) 17*8kB (UM) 14*16kB (U) 11*32kB (UM) 4*64kB (U) 14*128kB (UM) 11*256kB (UM) 2*512kB (M) 1*1024kB (U) 0*204 8kB 0*4096kB = 7652kB Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 DMA32: 670*4kB (UE) 887*8kB (UEM) 539*16kB (UE) 284*32kB (UE) 112*64kB (UEM) 36*128kB (UEM) 1*256kB (M) 0*512kB 3*1024kB ( M) 1*2048kB (M) 0*4096kB = 44640kB Mar 21 00:04:40 ncwv-cligate01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Mar 21 00:04:40 ncwv-cligate01 kernel: 2632 total pagecache pages Mar 21 00:04:40 ncwv-cligate01 kernel: 752 pages in swap cache Mar 21 00:04:40 ncwv-cligate01 kernel: Swap cache stats: add 1563807, delete 1563055, find 22310/24310 Mar 21 00:04:40 ncwv-cligate01 kernel: Free swap = 0kB Mar 21 00:04:40 ncwv-cligate01 kernel: Total swap = 2097148kB Mar 21 00:04:40 ncwv-cligate01 kernel: 524158 pages RAM Mar 21 00:04:40 ncwv-cligate01 kernel: 0 pages HighMem/MovableOnly Mar 21 00:04:40 ncwv-cligate01 kernel: 53518 pages reserved Mar 21 00:04:40 ncwv-cligate01 kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name Mar 21 00:04:40 ncwv-cligate01 kernel: [ 2914] 0 2914 10052 204 24 60 0 systemd-journal Mar 21 00:04:40 ncwv-cligate01 kernel: [ 2935] 0 2935 31837 0 28 403 0 lvmetad Mar 21 00:04:40 ncwv-cligate01 kernel: [ 2942] 0 2942 11953 1 25 615 -1000 systemd-udevd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5455] 0 5455 15511 19 28 137 -1000 auditd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5481] 999 5481 153242 88 64 2455 0 polkitd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5483] 0 5483 6594 43 19 36 0 systemd-logind Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5486] 81 5486 16617 91 33 99 -900 dbus-daemon Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5489] 998 5489 29446 0 31 116 0 chronyd Mar 21 00:04:40 
ncwv-cligate01 kernel: [ 5529] 0 5529 31571 25 20 133 0 crond Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5538] 0 5538 24140 1 51 166 0 login Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5559] 0 5559 137374 145 87 898 0 NetworkManager Mar 21 00:04:40 ncwv-cligate01 kernel: [ 5857] 0 5857 26839 0 54 506 0 dhclient Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6052] 0 6052 54102 82 40 145 0 rsyslogd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6053] 0 6053 143457 86 96 2684 0 tuned Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6054] 0 6054 28189 1 57 256 -1000 sshd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6532] 0 6532 22386 8 43 251 0 master Mar 21 00:04:40 ncwv-cligate01 kernel: [ 6546] 89 6546 22429 0 45 256 0 qmgr Mar 21 00:04:40 ncwv-cligate01 kernel: [23455] 0 23455 28859 0 15 96 0 bash Mar 21 00:04:40 ncwv-cligate01 kernel: [23471] 0 23471 39805 0 80 479 0 sshd Mar 21 00:04:40 ncwv-cligate01 kernel: [23475] 0 23475 28860 1 14 98 0 bash Mar 21 00:04:40 ncwv-cligate01 kernel: [23653] 0 23653 90526 249 95 6319 0 firewalld Mar 21 00:04:40 ncwv-cligate01 kernel: [31874] 89 31874 22412 1 44 251 0 pickup Mar 21 00:04:40 ncwv-cligate01 kernel: [31936] 0 31936 39787 1 81 427 0 sshd Mar 21 00:04:40 ncwv-cligate01 kernel: [ 1966] 0 1966 28294 0 13 48 0 sh Mar 21 00:04:40 ncwv-cligate01 kernel: [ 1977] 0 1977 979976 422442 1872 505084 0 python Mar 21 00:04:40 ncwv-cligate01 kernel: Out of memory: Kill process 1977 (python) score 906 or sacrifice child Mar 21 00:04:40 ncwv-cligate01 kernel: Killed process 1977 (python) total-vm:3919904kB, anon-rss:1689704kB, file-rss:64kB, shmem-rss:0kB Mar 21 00:05:41 ncwv-cligate01 systemd-logind: Removed session 46. ```
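To make the expected behavior concrete, here is a minimal sketch of streaming extraction, assuming a zip archive and Python 3. This is illustrative only: the unarchive module itself shells out to `unzip`/`gtar` rather than extracting in Python, and `stream_extract`, its parameters, and the 64 KiB chunk size are all hypothetical choices, not anything the module defines.

```python
import os
import shutil
import zipfile


def stream_extract(src, dest, chunk_size=64 * 1024):
    """Unpack a zip archive without holding a whole member in memory."""
    with zipfile.ZipFile(src) as archive:
        for member in archive.infolist():
            if member.is_dir():
                continue
            # NOTE: real code must also sanitize member.filename to
            # prevent path traversal ('../') outside of dest.
            target = os.path.join(dest, member.filename)
            os.makedirs(os.path.dirname(target), exist_ok=True)
            # copyfileobj() moves chunk_size bytes at a time, so peak
            # memory use is independent of archive and member size.
            with archive.open(member) as src_f, open(target, 'wb') as dst_f:
                shutil.copyfileobj(src_f, dst_f, chunk_size)
```

Extraction itself is usually already streamed by `unzip`/`gtar`; as the module source below shows, the Python-side post-checks are a more likely place for an entire file to be read into memory.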
https://github.com/ansible/ansible/issues/73985
https://github.com/ansible/ansible/pull/74094
ecc5a53288993247a808b4190e426ae3ae0b994d
68bdfd005200030ae9668d3a785ed9c10344bee6
2021-03-21T04:13:04Z
python
2021-04-26T12:06:10Z
lib/ansible/modules/unarchive.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2012, Michael DeHaan <[email protected]> # Copyright: (c) 2013, Dylan Martin <[email protected]> # Copyright: (c) 2015, Toshio Kuratomi <[email protected]> # Copyright: (c) 2016, Dag Wieers <[email protected]> # Copyright: (c) 2017, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = r''' --- module: unarchive version_added: '1.4' short_description: Unpacks an archive after (optionally) copying it from the local machine description: - The C(unarchive) module unpacks an archive. It will not unpack a compressed file that does not contain an archive. - By default, it will copy the source file from the local system to the target before unpacking. - Set C(remote_src=yes) to unpack an archive which already exists on the target. - If checksum validation is desired, use M(ansible.builtin.get_url) or M(ansible.builtin.uri) instead to fetch the file and set C(remote_src=yes). - For Windows targets, use the M(community.windows.win_unzip) module instead. options: src: description: - If C(remote_src=no) (default), local path to archive file to copy to the target server; can be absolute or relative. If C(remote_src=yes), path on the target server to existing archive file to unpack. - If C(remote_src=yes) and C(src) contains C(://), the remote machine will download the file from the URL first. (version_added 2.0). This is only for simple cases, for full download support use the M(ansible.builtin.get_url) module. type: path required: true dest: description: - Remote absolute path where the archive should be unpacked. type: path required: true copy: description: - If true, the file is copied from local controller to the managed (remote) node, otherwise, the plugin will look for src archive on the managed machine. - This option has been deprecated in favor of C(remote_src). - This option is mutually exclusive with C(remote_src). type: bool default: yes creates: description: - If the specified absolute path (file or directory) already exists, this step will B(not) be run. type: path version_added: "1.6" list_files: description: - If set to True, return the list of files that are contained in the tarball. type: bool default: no version_added: "2.0" exclude: description: - List the directory and file entries that you would like to exclude from the unarchive action. - Mutually exclusive with C(include). type: list default: [] elements: str version_added: "2.1" include: description: - List of directory and file entries that you would like to extract from the archive. Only files listed here will be extracted. - Mutually exclusive with C(exclude). type: list default: [] elements: str version_added: "2.11" keep_newer: description: - Do not replace existing files that are newer than files from the archive. type: bool default: no version_added: "2.1" extra_opts: description: - Specify additional options by passing in an array. - Each space-separated command-line option should be a new element of the array. See examples. - Command-line options with multiple elements must use multiple lines in the array, one for each element. type: list elements: str default: "" version_added: "2.1" remote_src: description: - Set to C(yes) to indicate the archived file is already on the remote system and not local to the Ansible controller. - This option is mutually exclusive with C(copy). 
type: bool default: no version_added: "2.2" validate_certs: description: - This only applies if using a https URL as the source of the file. - This should only set to C(no) used on personally controlled sites using self-signed certificate. - Prior to 2.2 the code worked as if this was set to C(yes). type: bool default: yes version_added: "2.2" extends_documentation_fragment: - decrypt - files todo: - Re-implement tar support using native tarfile module. - Re-implement zip support using native zipfile module. notes: - Requires C(zipinfo) and C(gtar)/C(unzip) command on target host. - Requires C(zstd) command on target host to expand I(.tar.zst) files. - Can handle I(.zip) files using C(unzip) as well as I(.tar), I(.tar.gz), I(.tar.bz2), I(.tar.xz), and I(.tar.zst) files using C(gtar). - Does not handle I(.gz) files, I(.bz2) files, I(.xz), or I(.zst) files that do not contain a I(.tar) archive. - Uses gtar's C(--diff) arg to calculate if changed or not. If this C(arg) is not supported, it will always unpack the archive. - Existing files/directories in the destination which are not in the archive are not touched. This is the same behavior as a normal archive extraction. - Existing files/directories in the destination which are not in the archive are ignored for purposes of deciding if the archive should be unpacked or not. - Supports C(check_mode). seealso: - module: community.general.archive - module: community.general.iso_extract - module: community.windows.win_unzip author: Michael DeHaan ''' EXAMPLES = r''' - name: Extract foo.tgz into /var/lib/foo ansible.builtin.unarchive: src: foo.tgz dest: /var/lib/foo - name: Unarchive a file that is already on the remote machine ansible.builtin.unarchive: src: /tmp/foo.zip dest: /usr/local/bin remote_src: yes - name: Unarchive a file that needs to be downloaded (added in 2.0) ansible.builtin.unarchive: src: https://example.com/example.zip dest: /usr/local/bin remote_src: yes - name: Unarchive a file with extra options ansible.builtin.unarchive: src: /tmp/foo.zip dest: /usr/local/bin extra_opts: - --transform - s/^xxx/yyy/ ''' RETURN = r''' dest: description: Path to the destination directory. returned: always type: str sample: /opt/software files: description: List of all the files in the archive. returned: When I(list_files) is True type: list sample: '["file1", "file2"]' gid: description: Numerical ID of the group that owns the destination directory. returned: always type: int sample: 1000 group: description: Name of the group that owns the destination directory. returned: always type: str sample: "librarians" handler: description: Archive software handler used to extract and decompress the archive. returned: always type: str sample: "TgzArchive" mode: description: String that represents the octal permissions of the destination directory. returned: always type: str sample: "0755" owner: description: Name of the user that owns the destination directory. returned: always type: str sample: "paul" size: description: The size of destination directory in bytes. Does not include the size of files or subdirectories contained within. returned: always type: int sample: 36 src: description: - The source archive's path. - If I(src) was a remote web URL, or from the local ansible controller, this shows the temporary location where the download was stored. returned: always type: str sample: "/home/paul/test.tar.gz" state: description: State of the destination. Effectively always "directory". 
returned: always type: str sample: "directory" uid: description: Numerical ID of the user that owns the destination directory. returned: always type: int sample: 1000 ''' import binascii import codecs import datetime import fnmatch import grp import os import platform import pwd import re import stat import time import traceback from zipfile import ZipFile, BadZipfile from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.urls import fetch_file from ansible.module_utils._text import to_bytes, to_native, to_text try: # python 3.3+ from shlex import quote except ImportError: # older python from pipes import quote # String from tar that shows the tar contents are different from the # filesystem OWNER_DIFF_RE = re.compile(r': Uid differs$') GROUP_DIFF_RE = re.compile(r': Gid differs$') MODE_DIFF_RE = re.compile(r': Mode differs$') MOD_TIME_DIFF_RE = re.compile(r': Mod time differs$') # NEWER_DIFF_RE = re.compile(r' is newer or same age.$') EMPTY_FILE_RE = re.compile(r': : Warning: Cannot stat: No such file or directory$') MISSING_FILE_RE = re.compile(r': Warning: Cannot stat: No such file or directory$') ZIP_FILE_MODE_RE = re.compile(r'([r-][w-][SsTtx-]){3}') INVALID_OWNER_RE = re.compile(r': Invalid owner') INVALID_GROUP_RE = re.compile(r': Invalid group') def crc32(path): ''' Return a CRC32 checksum of a file ''' with open(path, 'rb') as f: file_content = f.read() return binascii.crc32(file_content) & 0xffffffff def shell_escape(string): ''' Quote meta-characters in the args for the unix shell ''' return re.sub(r'([^A-Za-z0-9_])', r'\\\1', string) class UnarchiveError(Exception): pass class ZipArchive(object): def __init__(self, src, b_dest, file_args, module): self.src = src self.b_dest = b_dest self.file_args = file_args self.opts = module.params['extra_opts'] self.module = module self.excludes = module.params['exclude'] self.includes = [] self.include_files = self.module.params['include'] self.cmd_path = self.module.get_bin_path('unzip') self.zipinfocmd_path = self.module.get_bin_path('zipinfo') self._files_in_archive = [] self._infodict = dict() def _permstr_to_octal(self, modestr, umask): ''' Convert a Unix permission string (rw-r--r--) into a mode (0644) ''' revstr = modestr[::-1] mode = 0 for j in range(0, 3): for i in range(0, 3): if revstr[i + 3 * j] in ['r', 'w', 'x', 's', 't']: mode += 2 ** (i + 3 * j) # The unzip utility does not support setting the stST bits # if revstr[i + 3 * j] in ['s', 't', 'S', 'T' ]: # mode += 2 ** (9 + j) return (mode & ~umask) def _legacy_file_list(self): unzip_bin = self.module.get_bin_path('unzip') if not unzip_bin: raise UnarchiveError('Python Zipfile cannot read %s and unzip not found' % self.src) rc, out, err = self.module.run_command([unzip_bin, '-v', self.src]) if rc: raise UnarchiveError('Neither python zipfile nor unzip can read %s' % self.src) for line in out.splitlines()[3:-2]: fields = line.split(None, 7) self._files_in_archive.append(fields[7]) self._infodict[fields[7]] = int(fields[6]) def _crc32(self, path): if self._infodict: return self._infodict[path] try: archive = ZipFile(self.src) except BadZipfile as e: if e.args[0].lower().startswith('bad magic number'): # Python2.4 can't handle zipfiles with > 64K files. 
Try using # /usr/bin/unzip instead self._legacy_file_list() else: raise else: try: for item in archive.infolist(): self._infodict[item.filename] = int(item.CRC) except Exception: archive.close() raise UnarchiveError('Unable to list files in the archive') return self._infodict[path] @property def files_in_archive(self): if self._files_in_archive: return self._files_in_archive self._files_in_archive = [] try: archive = ZipFile(self.src) except BadZipfile as e: if e.args[0].lower().startswith('bad magic number'): # Python2.4 can't handle zipfiles with > 64K files. Try using # /usr/bin/unzip instead self._legacy_file_list() else: raise else: try: for member in archive.namelist(): if self.include_files: for include in self.include_files: if fnmatch.fnmatch(member, include): self._files_in_archive.append(to_native(member)) else: exclude_flag = False if self.excludes: for exclude in self.excludes: if not fnmatch.fnmatch(member, exclude): exclude_flag = True break if not exclude_flag: self._files_in_archive.append(to_native(member)) except Exception: archive.close() raise UnarchiveError('Unable to list files in the archive') archive.close() return self._files_in_archive def is_unarchived(self): # BSD unzip doesn't support zipinfo listings with timestamp. cmd = [self.zipinfocmd_path, '-T', '-s', self.src] if self.excludes: cmd.extend(['-x', ] + self.excludes) if self.include_files: cmd.extend(self.include_files) rc, out, err = self.module.run_command(cmd) old_out = out diff = '' out = '' if rc == 0: unarchived = True else: unarchived = False # Get some information related to user/group ownership umask = os.umask(0) os.umask(umask) systemtype = platform.system() # Get current user and group information groups = os.getgroups() run_uid = os.getuid() run_gid = os.getgid() try: run_owner = pwd.getpwuid(run_uid).pw_name except (TypeError, KeyError): run_owner = run_uid try: run_group = grp.getgrgid(run_gid).gr_name except (KeyError, ValueError, OverflowError): run_group = run_gid # Get future user ownership fut_owner = fut_uid = None if self.file_args['owner']: try: tpw = pwd.getpwnam(self.file_args['owner']) except KeyError: try: tpw = pwd.getpwuid(int(self.file_args['owner'])) except (TypeError, KeyError, ValueError): tpw = pwd.getpwuid(run_uid) fut_owner = tpw.pw_name fut_uid = tpw.pw_uid else: try: fut_owner = run_owner except Exception: pass fut_uid = run_uid # Get future group ownership fut_group = fut_gid = None if self.file_args['group']: try: tgr = grp.getgrnam(self.file_args['group']) except (ValueError, KeyError): try: # no need to check isdigit() explicitly here, if we fail to # parse, the ValueError will be caught. tgr = grp.getgrgid(int(self.file_args['group'])) except (KeyError, ValueError, OverflowError): tgr = grp.getgrgid(run_gid) fut_group = tgr.gr_name fut_gid = tgr.gr_gid else: try: fut_group = run_group except Exception: pass fut_gid = run_gid for line in old_out.splitlines(): change = False pcs = line.split(None, 7) if len(pcs) != 8: # Too few fields... probably a piece of the header or footer continue # Check first and seventh field in order to skip header/footer if len(pcs[0]) != 7 and len(pcs[0]) != 10: continue if len(pcs[6]) != 15: continue # Possible entries: # -rw-rws--- 1.9 unx 2802 t- defX 11-Aug-91 13:48 perms.2660 # -rw-a-- 1.0 hpf 5358 Tl i4:3 4-Dec-91 11:33 longfilename.hpfs # -r--ahs 1.1 fat 4096 b- i4:2 14-Jul-91 12:58 EA DATA. SF # --w------- 1.0 mac 17357 bx i8:2 4-May-92 04:02 unzip.macr if pcs[0][0] not in 'dl-?' 
or not frozenset(pcs[0][1:]).issubset('rwxstah-'): continue ztype = pcs[0][0] permstr = pcs[0][1:] version = pcs[1] ostype = pcs[2] size = int(pcs[3]) path = to_text(pcs[7], errors='surrogate_or_strict') # Skip excluded files if path in self.excludes: out += 'Path %s is excluded on request\n' % path continue # Itemized change requires L for symlink if path[-1] == '/': if ztype != 'd': err += 'Path %s incorrectly tagged as "%s", but is a directory.\n' % (path, ztype) ftype = 'd' elif ztype == 'l': ftype = 'L' elif ztype == '-': ftype = 'f' elif ztype == '?': ftype = 'f' # Some files may be storing FAT permissions, not Unix permissions # For FAT permissions, we will use a base permissions set of 777 if the item is a directory or has the execute bit set. Otherwise, 666. # This permission will then be modified by the system UMask. # BSD always applies the Umask, even to Unix permissions. # For Unix style permissions on Linux or Mac, we want to use them directly. # So we set the UMask for this file to zero. That permission set will then be unchanged when calling _permstr_to_octal if len(permstr) == 6: if path[-1] == '/': permstr = 'rwxrwxrwx' elif permstr == 'rwx---': permstr = 'rwxrwxrwx' else: permstr = 'rw-rw-rw-' file_umask = umask elif 'bsd' in systemtype.lower(): file_umask = umask else: file_umask = 0 # Test string conformity if len(permstr) != 9 or not ZIP_FILE_MODE_RE.match(permstr): raise UnarchiveError('ZIP info perm format incorrect, %s' % permstr) # DEBUG # err += "%s%s %10d %s\n" % (ztype, permstr, size, path) b_dest = os.path.join(self.b_dest, to_bytes(path, errors='surrogate_or_strict')) try: st = os.lstat(b_dest) except Exception: change = True self.includes.append(path) err += 'Path %s is missing\n' % path diff += '>%s++++++.?? %s\n' % (ftype, path) continue # Compare file types if ftype == 'd' and not stat.S_ISDIR(st.st_mode): change = True self.includes.append(path) err += 'File %s already exists, but not as a directory\n' % path diff += 'c%s++++++.?? %s\n' % (ftype, path) continue if ftype == 'f' and not stat.S_ISREG(st.st_mode): change = True unarchived = False self.includes.append(path) err += 'Directory %s already exists, but not as a regular file\n' % path diff += 'c%s++++++.?? %s\n' % (ftype, path) continue if ftype == 'L' and not stat.S_ISLNK(st.st_mode): change = True self.includes.append(path) err += 'Directory %s already exists, but not as a symlink\n' % path diff += 'c%s++++++.?? %s\n' % (ftype, path) continue itemized = list('.%s.......??' % ftype) # Note: this timestamp calculation has a rounding error # somewhere... 
unzip and this timestamp can be one second off # When that happens, we report a change and re-unzip the file dt_object = datetime.datetime(*(time.strptime(pcs[6], '%Y%m%d.%H%M%S')[0:6])) timestamp = time.mktime(dt_object.timetuple()) # Compare file timestamps if stat.S_ISREG(st.st_mode): if self.module.params['keep_newer']: if timestamp > st.st_mtime: change = True self.includes.append(path) err += 'File %s is older, replacing file\n' % path itemized[4] = 't' elif stat.S_ISREG(st.st_mode) and timestamp < st.st_mtime: # Add to excluded files, ignore other changes out += 'File %s is newer, excluding file\n' % path self.excludes.append(path) continue else: if timestamp != st.st_mtime: change = True self.includes.append(path) err += 'File %s differs in mtime (%f vs %f)\n' % (path, timestamp, st.st_mtime) itemized[4] = 't' # Compare file sizes if stat.S_ISREG(st.st_mode) and size != st.st_size: change = True err += 'File %s differs in size (%d vs %d)\n' % (path, size, st.st_size) itemized[3] = 's' # Compare file checksums if stat.S_ISREG(st.st_mode): crc = crc32(b_dest) if crc != self._crc32(path): change = True err += 'File %s differs in CRC32 checksum (0x%08x vs 0x%08x)\n' % (path, self._crc32(path), crc) itemized[2] = 'c' # Compare file permissions # Do not handle permissions of symlinks if ftype != 'L': # Use the new mode provided with the action, if there is one if self.file_args['mode']: if isinstance(self.file_args['mode'], int): mode = self.file_args['mode'] else: try: mode = int(self.file_args['mode'], 8) except Exception as e: try: mode = AnsibleModule._symbolic_mode_to_octal(st, self.file_args['mode']) except ValueError as e: self.module.fail_json(path=path, msg="%s" % to_native(e), exception=traceback.format_exc()) # Only special files require no umask-handling elif ztype == '?': mode = self._permstr_to_octal(permstr, 0) else: mode = self._permstr_to_octal(permstr, file_umask) if mode != stat.S_IMODE(st.st_mode): change = True itemized[5] = 'p' err += 'Path %s differs in permissions (%o vs %o)\n' % (path, mode, stat.S_IMODE(st.st_mode)) # Compare file user ownership owner = uid = None try: owner = pwd.getpwuid(st.st_uid).pw_name except (TypeError, KeyError): uid = st.st_uid # If we are not root and requested owner is not our user, fail if run_uid != 0 and (fut_owner != run_owner or fut_uid != run_uid): raise UnarchiveError('Cannot change ownership of %s to %s, as user %s' % (path, fut_owner, run_owner)) if owner and owner != fut_owner: change = True err += 'Path %s is owned by user %s, not by user %s as expected\n' % (path, owner, fut_owner) itemized[6] = 'o' elif uid and uid != fut_uid: change = True err += 'Path %s is owned by uid %s, not by uid %s as expected\n' % (path, uid, fut_uid) itemized[6] = 'o' # Compare file group ownership group = gid = None try: group = grp.getgrgid(st.st_gid).gr_name except (KeyError, ValueError, OverflowError): gid = st.st_gid if run_uid != 0 and (fut_group != run_group or fut_gid != run_gid) and fut_gid not in groups: raise UnarchiveError('Cannot change group ownership of %s to %s, as user %s' % (path, fut_group, run_owner)) if group and group != fut_group: change = True err += 'Path %s is owned by group %s, not by group %s as expected\n' % (path, group, fut_group) itemized[6] = 'g' elif gid and gid != fut_gid: change = True err += 'Path %s is owned by gid %s, not by gid %s as expected\n' % (path, gid, fut_gid) itemized[6] = 'g' # Register changed files and finalize diff output if change: if path not in self.includes: self.includes.append(path) diff 
+= '%s %s\n' % (''.join(itemized), path) if self.includes: unarchived = False # DEBUG # out = old_out + out return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd, diff=diff) def unarchive(self): cmd = [self.cmd_path, '-o'] if self.opts: cmd.extend(self.opts) cmd.append(self.src) # NOTE: Including (changed) files as arguments is problematic (limits on command line/arguments) # if self.includes: # NOTE: Command unzip has this strange behaviour where it expects quoted filenames to also be escaped # cmd.extend(map(shell_escape, self.includes)) if self.excludes: cmd.extend(['-x'] + self.excludes) if self.include_files: cmd.extend(self.include_files) cmd.extend(['-d', self.b_dest]) rc, out, err = self.module.run_command(cmd) return dict(cmd=cmd, rc=rc, out=out, err=err) def can_handle_archive(self): if not self.cmd_path: return False, 'Command "unzip" not found.' cmd = [self.cmd_path, '-l', self.src] rc, out, err = self.module.run_command(cmd) if rc == 0: return True, None return False, 'Command "%s" could not handle archive.' % self.cmd_path class TgzArchive(object): def __init__(self, src, b_dest, file_args, module): self.src = src self.b_dest = b_dest self.file_args = file_args self.opts = module.params['extra_opts'] self.module = module if self.module.check_mode: self.module.exit_json(skipped=True, msg="remote module (%s) does not support check mode when using gtar" % self.module._name) self.excludes = [path.rstrip('/') for path in self.module.params['exclude']] self.include_files = self.module.params['include'] # Prefer gtar (GNU tar) as it supports the compression options -z, -j and -J self.cmd_path = self.module.get_bin_path('gtar', None) if not self.cmd_path: # Fallback to tar self.cmd_path = self.module.get_bin_path('tar') self.zipflag = '-z' self._files_in_archive = [] if self.cmd_path: self.tar_type = self._get_tar_type() else: self.tar_type = None def _get_tar_type(self): cmd = [self.cmd_path, '--version'] (rc, out, err) = self.module.run_command(cmd) tar_type = None if out.startswith('bsdtar'): tar_type = 'bsd' elif out.startswith('tar') and 'GNU' in out: tar_type = 'gnu' return tar_type @property def files_in_archive(self): if self._files_in_archive: return self._files_in_archive cmd = [self.cmd_path, '--list', '-C', self.b_dest] if self.zipflag: cmd.append(self.zipflag) if self.opts: cmd.extend(['--show-transformed-names'] + self.opts) if self.excludes: cmd.extend(['--exclude=' + f for f in self.excludes]) cmd.extend(['-f', self.src]) if self.include_files: cmd.extend(self.include_files) rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')) if rc != 0: raise UnarchiveError('Unable to list files in the archive') for filename in out.splitlines(): # Compensate for locale-related problems in gtar output (octal unicode representation) #11348 # filename = filename.decode('string_escape') filename = to_native(codecs.escape_decode(filename)[0]) # We don't allow absolute filenames. If the user wants to unarchive rooted in "/" # they need to use "dest: '/'". This follows the defaults for gtar, pax, etc. 
# Allowing absolute filenames here also causes bugs: https://github.com/ansible/ansible/issues/21397 if filename.startswith('/'): filename = filename[1:] exclude_flag = False if self.excludes: for exclude in self.excludes: if fnmatch.fnmatch(filename, exclude): exclude_flag = True break if not exclude_flag: self._files_in_archive.append(to_native(filename)) return self._files_in_archive def is_unarchived(self): cmd = [self.cmd_path, '--diff', '-C', self.b_dest] if self.zipflag: cmd.append(self.zipflag) if self.opts: cmd.extend(['--show-transformed-names'] + self.opts) if self.file_args['owner']: cmd.append('--owner=' + quote(self.file_args['owner'])) if self.file_args['group']: cmd.append('--group=' + quote(self.file_args['group'])) if self.module.params['keep_newer']: cmd.append('--keep-newer-files') if self.excludes: cmd.extend(['--exclude=' + f for f in self.excludes]) cmd.extend(['-f', self.src]) if self.include_files: cmd.extend(self.include_files) rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')) # Check whether the differences are in something that we're # setting anyway # What is different unarchived = True old_out = out out = '' run_uid = os.getuid() # When unarchiving as a user, or when owner/group/mode is supplied --diff is insufficient # Only way to be sure is to check request with what is on disk (as we do for zip) # Leave this up to set_fs_attributes_if_different() instead of inducing a (false) change for line in old_out.splitlines() + err.splitlines(): # FIXME: Remove the bogus lines from error-output as well ! # Ignore bogus errors on empty filenames (when using --split-component) if EMPTY_FILE_RE.search(line): continue if run_uid == 0 and not self.file_args['owner'] and OWNER_DIFF_RE.search(line): out += line + '\n' if run_uid == 0 and not self.file_args['group'] and GROUP_DIFF_RE.search(line): out += line + '\n' if not self.file_args['mode'] and MODE_DIFF_RE.search(line): out += line + '\n' if MOD_TIME_DIFF_RE.search(line): out += line + '\n' if MISSING_FILE_RE.search(line): out += line + '\n' if INVALID_OWNER_RE.search(line): out += line + '\n' if INVALID_GROUP_RE.search(line): out += line + '\n' if out: unarchived = False return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd) def unarchive(self): cmd = [self.cmd_path, '--extract', '-C', self.b_dest] if self.zipflag: cmd.append(self.zipflag) if self.opts: cmd.extend(['--show-transformed-names'] + self.opts) if self.file_args['owner']: cmd.append('--owner=' + quote(self.file_args['owner'])) if self.file_args['group']: cmd.append('--group=' + quote(self.file_args['group'])) if self.module.params['keep_newer']: cmd.append('--keep-newer-files') if self.excludes: cmd.extend(['--exclude=' + f for f in self.excludes]) cmd.extend(['-f', self.src]) if self.include_files: cmd.extend(self.include_files) rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')) return dict(cmd=cmd, rc=rc, out=out, err=err) def can_handle_archive(self): if not self.cmd_path: return False, 'Commands "gtar" and "tar" not found.' if self.tar_type != 'gnu': return False, 'Command "%s" detected as tar type %s. GNU tar required.' % (self.cmd_path, self.tar_type) try: if self.files_in_archive: return True, None except UnarchiveError: return False, 'Command "%s" could not handle archive.' 
% self.cmd_path # Errors and no files in archive assume that we weren't able to # properly unarchive it return False, 'Command "%s" found no files in archive. Empty archive files are not supported.' % self.cmd_path # Class to handle tar files that aren't compressed class TarArchive(TgzArchive): def __init__(self, src, b_dest, file_args, module): super(TarArchive, self).__init__(src, b_dest, file_args, module) # argument to tar self.zipflag = '' # Class to handle bzip2 compressed tar files class TarBzipArchive(TgzArchive): def __init__(self, src, b_dest, file_args, module): super(TarBzipArchive, self).__init__(src, b_dest, file_args, module) self.zipflag = '-j' # Class to handle xz compressed tar files class TarXzArchive(TgzArchive): def __init__(self, src, b_dest, file_args, module): super(TarXzArchive, self).__init__(src, b_dest, file_args, module) self.zipflag = '-J' # Class to handle zstd compressed tar files class TarZstdArchive(TgzArchive): def __init__(self, src, b_dest, file_args, module): super(TarZstdArchive, self).__init__(src, b_dest, file_args, module) # GNU Tar supports the --use-compress-program option to # specify which executable to use for # compression/decompression. # # Note: some flavors of BSD tar support --zstd (e.g., FreeBSD # 12.2), but the TgzArchive class only supports GNU Tar. self.zipflag = '--use-compress-program=zstd' # try handlers in order and return the one that works or bail if none work def pick_handler(src, dest, file_args, module): handlers = [ZipArchive, TgzArchive, TarArchive, TarBzipArchive, TarXzArchive, TarZstdArchive] reasons = set() for handler in handlers: obj = handler(src, dest, file_args, module) (can_handle, reason) = obj.can_handle_archive() if can_handle: return obj reasons.add(reason) reason_msg = ' '.join(reasons) module.fail_json(msg='Failed to find handler for "%s". Make sure the required command to extract the file is installed. %s' % (src, reason_msg)) def main(): module = AnsibleModule( # not checking because of daisy chain to file module argument_spec=dict( src=dict(type='path', required=True), dest=dict(type='path', required=True), remote_src=dict(type='bool', default=False), creates=dict(type='path'), list_files=dict(type='bool', default=False), keep_newer=dict(type='bool', default=False), exclude=dict(type='list', elements='str', default=[]), include=dict(type='list', elements='str', default=[]), extra_opts=dict(type='list', elements='str', default=[]), validate_certs=dict(type='bool', default=True), ), add_file_common_args=True, # check-mode only works for zip files, we cover that later supports_check_mode=True, mutually_exclusive=[('include', 'exclude')], ) src = module.params['src'] dest = module.params['dest'] b_dest = to_bytes(dest, errors='surrogate_or_strict') remote_src = module.params['remote_src'] file_args = module.load_file_common_arguments(module.params) # did tar file arrive? if not os.path.exists(src): if not remote_src: module.fail_json(msg="Source '%s' failed to transfer" % src) # If remote_src=true, and src= contains ://, try and download the file to a temp directory. 
elif '://' in src: src = fetch_file(module, src) else: module.fail_json(msg="Source '%s' does not exist" % src) if not os.access(src, os.R_OK): module.fail_json(msg="Source '%s' not readable" % src) # skip working with 0 size archives try: if os.path.getsize(src) == 0: module.fail_json(msg="Invalid archive '%s', the file is 0 bytes" % src) except Exception as e: module.fail_json(msg="Source '%s' not readable, %s" % (src, to_native(e))) # is dest OK to receive tar file? if not os.path.isdir(b_dest): module.fail_json(msg="Destination '%s' is not a directory" % dest) handler = pick_handler(src, b_dest, file_args, module) res_args = dict(handler=handler.__class__.__name__, dest=dest, src=src) # do we need to do unpack? check_results = handler.is_unarchived() # DEBUG # res_args['check_results'] = check_results if module.check_mode: res_args['changed'] = not check_results['unarchived'] elif check_results['unarchived']: res_args['changed'] = False else: # do the unpack try: res_args['extract_results'] = handler.unarchive() if res_args['extract_results']['rc'] != 0: module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args) except IOError: module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args) else: res_args['changed'] = True # Get diff if required if check_results.get('diff', False): res_args['diff'] = {'prepared': check_results['diff']} # Run only if we found differences (idempotence) or diff was missing if res_args.get('diff', True) and not module.check_mode: # do we need to change perms? for filename in handler.files_in_archive: file_args['path'] = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')) try: res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False) except (IOError, OSError) as e: module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args) if module.params['list_files']: res_args['files'] = handler.files_in_archive module.exit_json(**res_args) if __name__ == '__main__': main()
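One detail worth flagging in the module source above: `crc32()` reads the entire file with `f.read()` before checksumming, so the `is_unarchived()` comparison alone can pull a multi-gigabyte extracted file into memory. A chunked variant keeps memory flat regardless of file size; this is a minimal sketch, not necessarily the fix that landed, and the 64 KiB buffer size is an arbitrary assumption:

```python
import binascii


def crc32(path, buffer_size=64 * 1024):
    ''' Return a CRC32 checksum of a file, read in fixed-size blocks '''
    crc = binascii.crc32(b'')
    with open(path, 'rb') as f:
        # iter() with a b'' sentinel yields blocks until EOF, so at most
        # buffer_size bytes of the file are held in memory at any time.
        for b_block in iter(lambda: f.read(buffer_size), b''):
            crc = binascii.crc32(b_block, crc)
    return crc & 0xffffffff
```

`binascii.crc32()` accepts a running checksum as its second argument, which is what makes the incremental update possible.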
closed
ansible/ansible
https://github.com/ansible/ansible
74,420
ansible-test - running multiple sets of tests simultaneously that use the pypi image fails all after the first, port in use
##### SUMMARY <!--- Explain the problem briefly below --> When launching multiple sets of tests with docker simultaneously, for example ``` ansible-test units --docker ``` and ``` ansible-test sanity --docker ``` Any run launched after the first has started, but before the first has finished, cannot start, because `ansible-test` tries to launch a separate container sharing port 3141, which is already in use by the first one. The later run will try to launch the container twice before giving up. The output is like so: ``` Run command: docker run --detach -p 3141:3141 quay.io/ansible/pypi-test-container:1.0.0 ERROR: Command "docker run --detach -p 3141:3141 quay.io/ansible/pypi-test-container:1.0.0" returned exit status 125. >>> Standard Error docker: Error response from daemon: driver failed programming external connectivity on endpoint keen_darwin (826d23fac0f9f09ba80290edf61c3c37e7d5268b340b18b679a7f27856aa4ac9): Bind for 0.0.0.0:3141 failed: port is already allocated. >>> Standard Output 54a46cf4a883d7765ae2a4818923c23201396e56f5d1675880409aaa599b4a76 WARNING: Failed to run docker image "quay.io/ansible/pypi-test-container:1.0.0". Waiting a few seconds before trying again. Run command: docker run --detach -p 3141:3141 quay.io/ansible/pypi-test-container:1.0.0 ERROR: Command "docker run --detach -p 3141:3141 quay.io/ansible/pypi-test-container:1.0.0" returned exit status 125. >>> Standard Error docker: Error response from daemon: driver failed programming external connectivity on endpoint bold_sammet (cd2dd8378432a96a3c918c1200985d58fcb270bf47bf629a5aaefb4f1fad2eec): Bind for 0.0.0.0:3141 failed: port is already allocated. >>> Standard Output 960d4344e6a73b7c41c28fa8c59432011be9588e716f2816c61c83eb040a2375 WARNING: Failed to run docker image "quay.io/ansible/pypi-test-container:1.0.0". Waiting a few seconds before trying again. ERROR: Failed to run docker image "quay.io/ansible/pypi-test-container:1.0.0". ``` --- Ideally, `ansible-test` could somehow detect that such a container is already available, or adapt port numbers dynamically. Unsure if that's possible though; one possible approach is sketched after this report. 
--- Images if helpful: ![image](https://user-images.githubusercontent.com/1260690/116124379-37690e80-a692-11eb-908f-a60d79355329.png) ![image](https://user-images.githubusercontent.com/1260690/116124389-3cc65900-a692-11eb-803c-38b7102fda33.png) ![image](https://user-images.githubusercontent.com/1260690/116124399-40f27680-a692-11eb-8d00-3b2530cbc1a9.png) --- cc @mattclay ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> ansible-test ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> (using `hacking/env-setup`) ```paste below ansible [core 2.12.0.dev0] (devel 68bdfd0052) last updated 2021/04/26 12:05:24 (GMT -400) config file = None configured module search path = [u'/home/briantist/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/briantist/code/ansible/ansible.core/lib/ansible ansible collection location = /home/briantist/.ansible/collections:/usr/share/ansible/collections executable location = /home/briantist/code/ansible/ansible.core/bin/ansible python version = 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0] jinja version = 2.11.2 libyaml = False ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
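One way to realize the "adapt port numbers dynamically" idea from the report above: publish only the container port and let Docker pick a free host port, then read the mapping back with `docker port`. This is a sketch of that suggestion, not a description of the fix that eventually landed, and `run_pypi_container` is a hypothetical helper name.

```python
import subprocess


def run_pypi_container(image='quay.io/ansible/pypi-test-container:1.0.0'):
    """Start the PyPI test container on an ephemeral host port."""
    # '-p 3141' with no host part asks Docker to choose any free host
    # port, so concurrent ansible-test runs no longer race for 3141.
    container_id = subprocess.check_output(
        ['docker', 'run', '--detach', '-p', '3141', image],
        text=True).strip()
    # 'docker port <id> 3141' prints e.g. '0.0.0.0:49153'; newer Docker
    # may print IPv4 and IPv6 mappings, so take the first line.
    mapping = subprocess.check_output(
        ['docker', 'port', container_id, '3141'],
        text=True).strip().splitlines()[0]
    return container_id, int(mapping.rpartition(':')[2])
```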
https://github.com/ansible/ansible/issues/74420
https://github.com/ansible/ansible/pull/74430
c1879a5011cbc8dacc5be44d55940cf0a05deecd
cb7f4f19717e91930f695fe0be5adc6cacf5162f
2021-04-26T17:28:45Z
python
2021-04-26T21:41:02Z
changelogs/fragments/ansible-test-pypi-container-no-publish.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74,420
ansible-test - running multiple sets of tests simultaneously that use the pypi image fails all after the first, port in use
##### SUMMARY <!--- Explain the problem briefly below --> When launching multiple sets of tests with docker simultaneously, for example ``` ansible-test units --docker ``` and ``` ansible-test sanity --docker ``` Any run launched after the first has started, but before the first has finished, cannot start, because `ansible-test` tries to launch a separate container sharing port 3141, which is already in use by the first one. The later run will try to launch the container twice before giving up. The output is like so: ``` Run command: docker run --detach -p 3141:3141 quay.io/ansible/pypi-test-container:1.0.0 ERROR: Command "docker run --detach -p 3141:3141 quay.io/ansible/pypi-test-container:1.0.0" returned exit status 125. >>> Standard Error docker: Error response from daemon: driver failed programming external connectivity on endpoint keen_darwin (826d23fac0f9f09ba80290edf61c3c37e7d5268b340b18b679a7f27856aa4ac9): Bind for 0.0.0.0:3141 failed: port is already allocated. >>> Standard Output 54a46cf4a883d7765ae2a4818923c23201396e56f5d1675880409aaa599b4a76 WARNING: Failed to run docker image "quay.io/ansible/pypi-test-container:1.0.0". Waiting a few seconds before trying again. Run command: docker run --detach -p 3141:3141 quay.io/ansible/pypi-test-container:1.0.0 ERROR: Command "docker run --detach -p 3141:3141 quay.io/ansible/pypi-test-container:1.0.0" returned exit status 125. >>> Standard Error docker: Error response from daemon: driver failed programming external connectivity on endpoint bold_sammet (cd2dd8378432a96a3c918c1200985d58fcb270bf47bf629a5aaefb4f1fad2eec): Bind for 0.0.0.0:3141 failed: port is already allocated. >>> Standard Output 960d4344e6a73b7c41c28fa8c59432011be9588e716f2816c61c83eb040a2375 WARNING: Failed to run docker image "quay.io/ansible/pypi-test-container:1.0.0". Waiting a few seconds before trying again. ERROR: Failed to run docker image "quay.io/ansible/pypi-test-container:1.0.0". ``` --- Ideally, `ansible-test` could somehow detect that such a container is already available, or adapt port numbers dynamically. Unsure if that's possible though; a sketch of the detection idea follows this report. 
--- Images if helpful: ![image](https://user-images.githubusercontent.com/1260690/116124379-37690e80-a692-11eb-908f-a60d79355329.png) ![image](https://user-images.githubusercontent.com/1260690/116124389-3cc65900-a692-11eb-803c-38b7102fda33.png) ![image](https://user-images.githubusercontent.com/1260690/116124399-40f27680-a692-11eb-8d00-3b2530cbc1a9.png) --- cc @mattclay ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> ansible-test ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> (using `hacking/env-setup`) ```paste below ansible [core 2.12.0.dev0] (devel 68bdfd0052) last updated 2021/04/26 12:05:24 (GMT -400) config file = None configured module search path = [u'/home/briantist/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/briantist/code/ansible/ansible.core/lib/ansible ansible collection location = /home/briantist/.ansible/collections:/usr/share/ansible/collections executable location = /home/briantist/code/ansible/ansible.core/bin/ansible python version = 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0] jinja version = 2.11.2 libyaml = False ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
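The report's other suggestion — detecting that a suitable container is already available — could look like the sketch below, assuming `docker ps` filters are acceptable; `find_existing_pypi_container` is a hypothetical helper, and reusing a container across runs would still need coordination around teardown.

```python
import subprocess


def find_existing_pypi_container(image='quay.io/ansible/pypi-test-container:1.0.0'):
    """Return the ID of a running container for the image, or None."""
    # 'docker ps -q --filter ancestor=IMAGE' lists running containers
    # created from the given image; empty output means there are none.
    out = subprocess.check_output(
        ['docker', 'ps', '-q', '--filter', 'ancestor=%s' % image],
        text=True).strip()
    return out.splitlines()[0] if out else None
```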
https://github.com/ansible/ansible/issues/74420
https://github.com/ansible/ansible/pull/74430
c1879a5011cbc8dacc5be44d55940cf0a05deecd
cb7f4f19717e91930f695fe0be5adc6cacf5162f
2021-04-26T17:28:45Z
python
2021-04-26T21:41:02Z
test/lib/ansible_test/_internal/executor.py
"""Execute Ansible tests.""" from __future__ import (absolute_import, division, print_function) __metaclass__ = type import atexit import json import os import datetime import re import time import textwrap import functools import difflib import filecmp import random import string import shutil from . import types as t from .thread import ( WrappedThread, ) from .core_ci import ( AnsibleCoreCI, SshKey, ) from .manage_ci import ( ManageWindowsCI, ManageNetworkCI, get_network_settings, ) from .cloud import ( cloud_filter, cloud_init, get_cloud_environment, get_cloud_platforms, CloudEnvironmentConfig, ) from .io import ( make_dirs, open_text_file, read_text_file, write_text_file, ) from .util import ( ApplicationWarning, ApplicationError, SubprocessError, display, remove_tree, find_executable, raw_command, generate_pip_command, find_python, cmd_quote, ANSIBLE_TEST_DATA_ROOT, ANSIBLE_TEST_CONFIG_ROOT, tempdir, open_zipfile, SUPPORTED_PYTHON_VERSIONS, str_to_version, version_to_str, get_hash, ) from .util_common import ( get_docker_completion, get_remote_completion, get_python_path, intercept_command, named_temporary_file, run_command, write_json_test_results, ResultType, handle_layout_messages, CommonConfig, ) from .docker_util import ( docker_pull, docker_run, docker_inspect, ) from .containers import ( SshConnectionDetail, create_container_hooks, ) from .ansible_util import ( ansible_environment, check_pyyaml, run_playbook, ) from .target import ( IntegrationTarget, walk_internal_targets, walk_posix_integration_targets, walk_network_integration_targets, walk_windows_integration_targets, TIntegrationTarget, ) from .ci import ( get_ci_provider, ) from .classification import ( categorize_changes, ) from .config import ( TestConfig, EnvironmentConfig, IntegrationConfig, NetworkIntegrationConfig, PosixIntegrationConfig, ShellConfig, WindowsIntegrationConfig, TIntegrationConfig, UnitsConfig, SanityConfig, ) from .metadata import ( ChangeDescription, ) from .integration import ( integration_test_environment, integration_test_config_file, setup_common_temp_dir, get_inventory_relative_path, check_inventory, delegate_inventory, ) from .data import ( data_context, ) from .http import ( urlparse, ) def check_startup(): """Checks to perform at startup before running commands.""" check_legacy_modules() def check_legacy_modules(): """Detect conflicts with legacy core/extras module directories to avoid problems later.""" for directory in 'core', 'extras': path = 'lib/ansible/modules/%s' % directory for root, _dir_names, file_names in os.walk(path): if file_names: # the directory shouldn't exist, but if it does, it must contain no files raise ApplicationError('Files prohibited in "%s". ' 'These are most likely legacy modules from version 2.2 or earlier.' % root) def create_shell_command(command): """ :type command: list[str] :rtype: list[str] """ optional_vars = ( 'TERM', ) cmd = ['/usr/bin/env'] cmd += ['%s=%s' % (var, os.environ[var]) for var in optional_vars if var in os.environ] cmd += command return cmd def get_openssl_version(args, python, python_version): # type: (EnvironmentConfig, str, str) -> t.Optional[t.Tuple[int, ...]] """Return the openssl version.""" if not python_version.startswith('2.'): # OpenSSL version checking only works on Python 3.x. # This should be the most accurate, since it is the Python we will be using. 
version = json.loads(run_command(args, [python, os.path.join(ANSIBLE_TEST_DATA_ROOT, 'sslcheck.py')], capture=True, always=True)[0])['version'] if version: display.info('Detected OpenSSL version %s under Python %s.' % (version_to_str(version), python_version), verbosity=1) return tuple(version) # Fall back to detecting the OpenSSL version from the CLI. # This should provide an adequate solution on Python 2.x. openssl_path = find_executable('openssl', required=False) if openssl_path: try: result = raw_command([openssl_path, 'version'], capture=True)[0] except SubprocessError: result = '' match = re.search(r'^OpenSSL (?P<version>[0-9]+\.[0-9]+\.[0-9]+)', result) if match: version = str_to_version(match.group('version')) display.info('Detected OpenSSL version %s using the openssl CLI.' % version_to_str(version), verbosity=1) return version display.info('Unable to detect OpenSSL version.', verbosity=1) return None def get_setuptools_version(args, python): # type: (EnvironmentConfig, str) -> t.Tuple[int] """Return the setuptools version for the given python.""" try: return str_to_version(raw_command([python, '-c', 'import setuptools; print(setuptools.__version__)'], capture=True)[0]) except SubprocessError: if args.explain: return tuple() # ignore errors in explain mode in case setuptools is not already installed raise def install_cryptography(args, python, python_version, pip): # type: (EnvironmentConfig, str, str, t.List[str]) -> None """ Install cryptography for the specified environment. """ # make sure ansible-test's basic requirements are met before continuing # this is primarily to ensure that pip is new enough to facilitate further requirements installation install_ansible_test_requirements(args, pip) # make sure setuptools is available before trying to install cryptography # the installed version of setuptools affects the version of cryptography to install run_command(args, generate_pip_install(pip, '', packages=['setuptools'])) # install the latest cryptography version that the current requirements can support # use a custom constraints file to avoid the normal constraints file overriding the chosen version of cryptography # if not installed here later install commands may try to install an unsupported version due to the presence of older setuptools # this is done instead of upgrading setuptools to allow tests to function with older distribution provided versions of setuptools run_command(args, generate_pip_install(pip, '', packages=[get_cryptography_requirement(args, python, python_version)], constraints=os.path.join(ANSIBLE_TEST_DATA_ROOT, 'cryptography-constraints.txt'))) def get_cryptography_requirement(args, python, python_version): # type: (EnvironmentConfig, str, str) -> str """ Return the correct cryptography requirement for the given python version. The version of cryptography installed depends on the python version, setuptools version and openssl version.
""" setuptools_version = get_setuptools_version(args, python) openssl_version = get_openssl_version(args, python, python_version) if setuptools_version >= (18, 5): if python_version == '2.6': # cryptography 2.2+ requires python 2.7+ # see https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst#22---2018-03-19 cryptography = 'cryptography < 2.2' elif openssl_version and openssl_version < (1, 1, 0): # cryptography 3.2 requires openssl 1.1.x or later # see https://cryptography.io/en/latest/changelog.html#v3-2 cryptography = 'cryptography < 3.2' else: # cryptography 3.4+ fails to install on many systems # this is a temporary work-around until a more permanent solution is available cryptography = 'cryptography < 3.4' else: # cryptography 2.1+ requires setuptools 18.5+ # see https://github.com/pyca/cryptography/blob/62287ae18383447585606b9d0765c0f1b8a9777c/setup.py#L26 cryptography = 'cryptography < 2.1' return cryptography def install_command_requirements(args, python_version=None, context=None, enable_pyyaml_check=False): """ :type args: EnvironmentConfig :type python_version: str | None :type context: str | None :type enable_pyyaml_check: bool """ if not args.explain: make_dirs(ResultType.COVERAGE.path) make_dirs(ResultType.DATA.path) if isinstance(args, ShellConfig): if args.raw: return if not args.requirements: return if isinstance(args, ShellConfig): return packages = [] if isinstance(args, TestConfig): if args.coverage: packages.append('coverage') if args.junit: packages.append('junit-xml') if not python_version: python_version = args.python_version python = find_python(python_version) pip = generate_pip_command(python) # skip packages which have aleady been installed for python_version try: package_cache = install_command_requirements.package_cache except AttributeError: package_cache = install_command_requirements.package_cache = {} installed_packages = package_cache.setdefault(python_version, set()) skip_packages = [package for package in packages if package in installed_packages] for package in skip_packages: packages.remove(package) installed_packages.update(packages) if args.command != 'sanity': install_cryptography(args, python, python_version, pip) commands = [generate_pip_install(pip, args.command, packages=packages, context=context)] if isinstance(args, IntegrationConfig): for cloud_platform in get_cloud_platforms(args): commands.append(generate_pip_install(pip, '%s.cloud.%s' % (args.command, cloud_platform))) commands = [cmd for cmd in commands if cmd] if not commands: return # no need to detect changes or run pip check since we are not making any changes # only look for changes when more than one requirements file is needed detect_pip_changes = len(commands) > 1 # first pass to install requirements, changes expected unless environment is already set up install_ansible_test_requirements(args, pip) changes = run_pip_commands(args, pip, commands, detect_pip_changes) if changes: # second pass to check for conflicts in requirements, changes are not expected here changes = run_pip_commands(args, pip, commands, detect_pip_changes) if changes: raise ApplicationError('Conflicts detected in requirements. 
The following commands reported changes during verification:\n%s' % '\n'.join((' '.join(cmd_quote(c) for c in cmd) for cmd in changes))) if args.pip_check: # ask pip to check for conflicts between installed packages try: run_command(args, pip + ['check', '--disable-pip-version-check'], capture=True) except SubprocessError as ex: if ex.stderr.strip() == 'ERROR: unknown command "check"': display.warning('Cannot check pip requirements for conflicts because "pip check" is not supported.') else: raise if enable_pyyaml_check: # pyyaml may have been one of the requirements that was installed, so perform an optional check for it check_pyyaml(args, python_version, required=False) def install_ansible_test_requirements(args, pip): # type: (EnvironmentConfig, t.List[str]) -> None """Install requirements for ansible-test for the given pip if not already installed.""" try: installed = install_command_requirements.installed except AttributeError: installed = install_command_requirements.installed = set() if tuple(pip) in installed: return # make sure basic ansible-test requirements are met, including making sure that pip is recent enough to support constraints # virtualenvs created by older distributions may include very old pip versions, such as those created in the centos6 test container (pip 6.0.8) run_command(args, generate_pip_install(pip, 'ansible-test', use_constraints=False)) installed.add(tuple(pip)) def run_pip_commands(args, pip, commands, detect_pip_changes=False): """ :type args: EnvironmentConfig :type pip: list[str] :type commands: list[list[str]] :type detect_pip_changes: bool :rtype: list[list[str]] """ changes = [] after_list = pip_list(args, pip) if detect_pip_changes else None for cmd in commands: if not cmd: continue before_list = after_list run_command(args, cmd) after_list = pip_list(args, pip) if detect_pip_changes else None if before_list != after_list: changes.append(cmd) return changes def pip_list(args, pip): """ :type args: EnvironmentConfig :type pip: list[str] :rtype: str """ stdout = run_command(args, pip + ['list'], capture=True)[0] return stdout def generate_pip_install(pip, command, packages=None, constraints=None, use_constraints=True, context=None): """ :type pip: list[str] :type command: str :type packages: list[str] | None :type constraints: str | None :type use_constraints: bool :type context: str | None :rtype: list[str] | None """ constraints = constraints or os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', 'constraints.txt') requirements = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', '%s.txt' % ('%s.%s' % (command, context) if context else command)) content_constraints = None options = [] if os.path.exists(requirements) and os.path.getsize(requirements): options += ['-r', requirements] if command == 'sanity' and data_context().content.is_ansible: requirements = os.path.join(data_context().content.sanity_path, 'code-smell', '%s.requirements.txt' % context) if os.path.exists(requirements) and os.path.getsize(requirements): options += ['-r', requirements] if command == 'units': requirements = os.path.join(data_context().content.unit_path, 'requirements.txt') if os.path.exists(requirements) and os.path.getsize(requirements): options += ['-r', requirements] content_constraints = os.path.join(data_context().content.unit_path, 'constraints.txt') if command in ('integration', 'windows-integration', 'network-integration'): requirements = os.path.join(data_context().content.integration_path, 'requirements.txt') if os.path.exists(requirements) and 
os.path.getsize(requirements): options += ['-r', requirements] requirements = os.path.join(data_context().content.integration_path, '%s.requirements.txt' % command) if os.path.exists(requirements) and os.path.getsize(requirements): options += ['-r', requirements] content_constraints = os.path.join(data_context().content.integration_path, 'constraints.txt') if command.startswith('integration.cloud.'): content_constraints = os.path.join(data_context().content.integration_path, 'constraints.txt') if packages: options += packages if not options: return None if use_constraints: if content_constraints and os.path.exists(content_constraints) and os.path.getsize(content_constraints): # listing content constraints first gives them priority over constraints provided by ansible-test options.extend(['-c', content_constraints]) options.extend(['-c', constraints]) return pip + ['install', '--disable-pip-version-check'] + options def command_shell(args): """ :type args: ShellConfig """ if args.delegate: raise Delegate() install_command_requirements(args) cmd = create_shell_command(['bash', '-i']) run_command(args, cmd) def command_posix_integration(args): """ :type args: PosixIntegrationConfig """ handle_layout_messages(data_context().content.integration_messages) inventory_relative_path = get_inventory_relative_path(args) inventory_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, os.path.basename(inventory_relative_path)) all_targets = tuple(walk_posix_integration_targets(include_hidden=True)) internal_targets = command_integration_filter(args, all_targets) managed_connections = None # type: t.Optional[t.List[SshConnectionDetail]] pre_target, post_target = create_container_hooks(args, managed_connections) command_integration_filtered(args, internal_targets, all_targets, inventory_path, pre_target=pre_target, post_target=post_target) def command_network_integration(args): """ :type args: NetworkIntegrationConfig """ handle_layout_messages(data_context().content.integration_messages) inventory_relative_path = get_inventory_relative_path(args) template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, os.path.basename(inventory_relative_path)) + '.template' if args.inventory: inventory_path = os.path.join(data_context().content.root, data_context().content.integration_path, args.inventory) else: inventory_path = os.path.join(data_context().content.root, inventory_relative_path) if args.no_temp_workdir: # temporary solution to keep DCI tests working inventory_exists = os.path.exists(inventory_path) else: inventory_exists = os.path.isfile(inventory_path) if not args.explain and not args.platform and not inventory_exists: raise ApplicationError( 'Inventory not found: %s\n' 'Use --inventory to specify the inventory path.\n' 'Use --platform to provision resources and generate an inventory file.\n' 'See also inventory template: %s' % (inventory_path, template_path) ) check_inventory(args, inventory_path) delegate_inventory(args, inventory_path) all_targets = tuple(walk_network_integration_targets(include_hidden=True)) internal_targets = command_integration_filter(args, all_targets, init_callback=network_init) instances = [] # type: t.List[WrappedThread] if args.platform: get_python_path(args, args.python_executable) # initialize before starting threads configs = dict((config['platform_version'], config) for config in args.metadata.instance_config) for platform_version in args.platform: platform, version = platform_version.split('/', 1) config = configs.get(platform_version) if not config: continue instance = 
WrappedThread(functools.partial(network_run, args, platform, version, config)) instance.daemon = True instance.start() instances.append(instance) while any(instance.is_alive() for instance in instances): time.sleep(1) remotes = [instance.wait_for_result() for instance in instances] inventory = network_inventory(args, remotes) display.info('>>> Inventory: %s\n%s' % (inventory_path, inventory.strip()), verbosity=3) if not args.explain: write_text_file(inventory_path, inventory) success = False try: command_integration_filtered(args, internal_targets, all_targets, inventory_path) success = True finally: if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success): for instance in instances: instance.result.stop() def network_init(args, internal_targets): # type: (NetworkIntegrationConfig, t.Tuple[IntegrationTarget, ...]) -> None """Initialize platforms for network integration tests.""" if not args.platform: return if args.metadata.instance_config is not None: return platform_targets = set(a for target in internal_targets for a in target.aliases if a.startswith('network/')) instances = [] # type: t.List[WrappedThread] # generate an ssh key (if needed) up front once, instead of for each instance SshKey(args) for platform_version in args.platform: platform, version = platform_version.split('/', 1) platform_target = 'network/%s/' % platform if platform_target not in platform_targets: display.warning('Skipping "%s" because selected tests do not target the "%s" platform.' % ( platform_version, platform)) continue instance = WrappedThread(functools.partial(network_start, args, platform, version)) instance.daemon = True instance.start() instances.append(instance) while any(instance.is_alive() for instance in instances): time.sleep(1) args.metadata.instance_config = [instance.wait_for_result() for instance in instances] def network_start(args, platform, version): """ :type args: NetworkIntegrationConfig :type platform: str :type version: str :rtype: AnsibleCoreCI """ core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider) core_ci.start() return core_ci.save() def network_run(args, platform, version, config): """ :type args: NetworkIntegrationConfig :type platform: str :type version: str :type config: dict[str, str] :rtype: AnsibleCoreCI """ core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider, load=False) core_ci.load(config) core_ci.wait() manage = ManageNetworkCI(args, core_ci) manage.wait() return core_ci def network_inventory(args, remotes): """ :type args: NetworkIntegrationConfig :type remotes: list[AnsibleCoreCI] :rtype: str """ groups = dict([(remote.platform, []) for remote in remotes]) net = [] for remote in remotes: options = dict( ansible_host=remote.connection.hostname, ansible_user=remote.connection.username, ansible_ssh_private_key_file=os.path.abspath(remote.ssh_key.key), ) settings = get_network_settings(args, remote.platform, remote.version) options.update(settings.inventory_vars) groups[remote.platform].append( '%s %s' % ( remote.name.replace('.', '-'), ' '.join('%s="%s"' % (k, options[k]) for k in sorted(options)), ) ) net.append(remote.platform) groups['net:children'] = net template = '' for group in groups: hosts = '\n'.join(groups[group]) template += textwrap.dedent(""" [%s] %s """) % (group, hosts) inventory = template return inventory def command_windows_integration(args): """ :type args: WindowsIntegrationConfig """ 
handle_layout_messages(data_context().content.integration_messages) inventory_relative_path = get_inventory_relative_path(args) template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, os.path.basename(inventory_relative_path)) + '.template' if args.inventory: inventory_path = os.path.join(data_context().content.root, data_context().content.integration_path, args.inventory) else: inventory_path = os.path.join(data_context().content.root, inventory_relative_path) if not args.explain and not args.windows and not os.path.isfile(inventory_path): raise ApplicationError( 'Inventory not found: %s\n' 'Use --inventory to specify the inventory path.\n' 'Use --windows to provision resources and generate an inventory file.\n' 'See also inventory template: %s' % (inventory_path, template_path) ) check_inventory(args, inventory_path) delegate_inventory(args, inventory_path) all_targets = tuple(walk_windows_integration_targets(include_hidden=True)) internal_targets = command_integration_filter(args, all_targets, init_callback=windows_init) instances = [] # type: t.List[WrappedThread] managed_connections = [] # type: t.List[SshConnectionDetail] if args.windows: get_python_path(args, args.python_executable) # initialize before starting threads configs = dict((config['platform_version'], config) for config in args.metadata.instance_config) for version in args.windows: config = configs['windows/%s' % version] instance = WrappedThread(functools.partial(windows_run, args, version, config)) instance.daemon = True instance.start() instances.append(instance) while any(instance.is_alive() for instance in instances): time.sleep(1) remotes = [instance.wait_for_result() for instance in instances] inventory = windows_inventory(remotes) display.info('>>> Inventory: %s\n%s' % (inventory_path, inventory.strip()), verbosity=3) if not args.explain: write_text_file(inventory_path, inventory) for core_ci in remotes: ssh_con = core_ci.connection ssh = SshConnectionDetail(core_ci.name, ssh_con.hostname, 22, ssh_con.username, core_ci.ssh_key.key, shell_type='powershell') managed_connections.append(ssh) elif args.explain: identity_file = SshKey(args).key # mock connection details to prevent tracebacks in explain mode managed_connections = [SshConnectionDetail( name='windows', host='windows', port=22, user='administrator', identity_file=identity_file, shell_type='powershell', )] else: inventory = parse_inventory(args, inventory_path) hosts = get_hosts(inventory, 'windows') identity_file = SshKey(args).key managed_connections = [SshConnectionDetail( name=name, host=config['ansible_host'], port=22, user=config['ansible_user'], identity_file=identity_file, shell_type='powershell', ) for name, config in hosts.items()] if managed_connections: display.info('Generated SSH connection details from inventory:\n%s' % ( '\n'.join('%s %s@%s:%d' % (ssh.name, ssh.user, ssh.host, ssh.port) for ssh in managed_connections)), verbosity=1) pre_target, post_target = create_container_hooks(args, managed_connections) remote_temp_path = None if args.coverage and not args.coverage_check: # Create the remote directory that is writable by everyone. Use Ansible to talk to the remote host. 
remote_temp_path = 'C:\\ansible_test_coverage_%s' % time.time() playbook_vars = {'remote_temp_path': remote_temp_path} run_playbook(args, inventory_path, 'windows_coverage_setup.yml', playbook_vars) success = False try: command_integration_filtered(args, internal_targets, all_targets, inventory_path, pre_target=pre_target, post_target=post_target, remote_temp_path=remote_temp_path) success = True finally: if remote_temp_path: # Zip up the coverage files that were generated and fetch it back to localhost. with tempdir() as local_temp_path: playbook_vars = {'remote_temp_path': remote_temp_path, 'local_temp_path': local_temp_path} run_playbook(args, inventory_path, 'windows_coverage_teardown.yml', playbook_vars) for filename in os.listdir(local_temp_path): with open_zipfile(os.path.join(local_temp_path, filename)) as coverage_zip: coverage_zip.extractall(ResultType.COVERAGE.path) if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success): for instance in instances: instance.result.stop() # noinspection PyUnusedLocal def windows_init(args, internal_targets): # pylint: disable=locally-disabled, unused-argument """ :type args: WindowsIntegrationConfig :type internal_targets: tuple[IntegrationTarget] """ # generate an ssh key (if needed) up front once, instead of for each instance SshKey(args) if not args.windows: return if args.metadata.instance_config is not None: return instances = [] # type: t.List[WrappedThread] for version in args.windows: instance = WrappedThread(functools.partial(windows_start, args, version)) instance.daemon = True instance.start() instances.append(instance) while any(instance.is_alive() for instance in instances): time.sleep(1) args.metadata.instance_config = [instance.wait_for_result() for instance in instances] def windows_start(args, version): """ :type args: WindowsIntegrationConfig :type version: str :rtype: AnsibleCoreCI """ core_ci = AnsibleCoreCI(args, 'windows', version, stage=args.remote_stage, provider=args.remote_provider) core_ci.start() return core_ci.save() def windows_run(args, version, config): """ :type args: WindowsIntegrationConfig :type version: str :type config: dict[str, str] :rtype: AnsibleCoreCI """ core_ci = AnsibleCoreCI(args, 'windows', version, stage=args.remote_stage, provider=args.remote_provider, load=False) core_ci.load(config) core_ci.wait() manage = ManageWindowsCI(core_ci) manage.wait() return core_ci def windows_inventory(remotes): """ :type remotes: list[AnsibleCoreCI] :rtype: str """ hosts = [] for remote in remotes: options = dict( ansible_host=remote.connection.hostname, ansible_user=remote.connection.username, ansible_password=remote.connection.password, ansible_port=remote.connection.port, ) # used for the connection_windows_ssh test target if remote.ssh_key: options["ansible_ssh_private_key_file"] = os.path.abspath(remote.ssh_key.key) if remote.name == 'windows-2016': options.update( # force 2016 to use NTLM + HTTP message encryption ansible_connection='winrm', ansible_winrm_server_cert_validation='ignore', ansible_winrm_transport='ntlm', ansible_winrm_scheme='http', ansible_port='5985', ) else: options.update( ansible_connection='winrm', ansible_winrm_server_cert_validation='ignore', ) hosts.append( '%s %s' % ( remote.name.replace('/', '_'), ' '.join('%s="%s"' % (k, options[k]) for k in sorted(options)), ) ) template = """ [windows] %s # support winrm binary module tests (temporary solution) [testhost:children] windows """ template = textwrap.dedent(template) inventory = template % 
('\n'.join(hosts)) return inventory def command_integration_filter(args, # type: TIntegrationConfig targets, # type: t.Iterable[TIntegrationTarget] init_callback=None, # type: t.Callable[[TIntegrationConfig, t.Tuple[TIntegrationTarget, ...]], None] ): # type: (...) -> t.Tuple[TIntegrationTarget, ...] """Filter the given integration test targets.""" targets = tuple(target for target in targets if 'hidden/' not in target.aliases) changes = get_changes_filter(args) # special behavior when the --changed-all-target target is selected based on changes if args.changed_all_target in changes: # act as though the --changed-all-target target was in the include list if args.changed_all_mode == 'include' and args.changed_all_target not in args.include: args.include.append(args.changed_all_target) args.delegate_args += ['--include', args.changed_all_target] # act as though the --changed-all-target target was in the exclude list elif args.changed_all_mode == 'exclude' and args.changed_all_target not in args.exclude: args.exclude.append(args.changed_all_target) require = args.require + changes exclude = args.exclude internal_targets = walk_internal_targets(targets, args.include, exclude, require) environment_exclude = get_integration_filter(args, internal_targets) environment_exclude += cloud_filter(args, internal_targets) if environment_exclude: exclude += environment_exclude internal_targets = walk_internal_targets(targets, args.include, exclude, require) if not internal_targets: raise AllTargetsSkipped() if args.start_at and not any(target.name == args.start_at for target in internal_targets): raise ApplicationError('Start at target matches nothing: %s' % args.start_at) if init_callback: init_callback(args, internal_targets) cloud_init(args, internal_targets) vars_file_src = os.path.join(data_context().content.root, data_context().content.integration_vars_path) if os.path.exists(vars_file_src): def integration_config_callback(files): # type: (t.List[t.Tuple[str, str]]) -> None """ Add the integration config vars file to the payload file list. This will preserve the file during delegation even if the file is ignored by source control. 
""" files.append((vars_file_src, data_context().content.integration_vars_path)) data_context().register_payload_callback(integration_config_callback) if args.delegate: raise Delegate(require=require, exclude=exclude) install_command_requirements(args) return internal_targets def command_integration_filtered( args, # type: IntegrationConfig targets, # type: t.Tuple[IntegrationTarget] all_targets, # type: t.Tuple[IntegrationTarget] inventory_path, # type: str pre_target=None, # type: t.Optional[t.Callable[IntegrationTarget]] post_target=None, # type: t.Optional[t.Callable[IntegrationTarget]] remote_temp_path=None, # type: t.Optional[str] ): """Run integration tests for the specified targets.""" found = False passed = [] failed = [] targets_iter = iter(targets) all_targets_dict = dict((target.name, target) for target in all_targets) setup_errors = [] setup_targets_executed = set() for target in all_targets: for setup_target in target.setup_once + target.setup_always: if setup_target not in all_targets_dict: setup_errors.append('Target "%s" contains invalid setup target: %s' % (target.name, setup_target)) if setup_errors: raise ApplicationError('Found %d invalid setup aliases:\n%s' % (len(setup_errors), '\n'.join(setup_errors))) check_pyyaml(args, args.python_version) test_dir = os.path.join(ResultType.TMP.path, 'output_dir') if not args.explain and any('needs/ssh/' in target.aliases for target in targets): max_tries = 20 display.info('SSH service required for tests. Checking to make sure we can connect.') for i in range(1, max_tries + 1): try: run_command(args, ['ssh', '-o', 'BatchMode=yes', 'localhost', 'id'], capture=True) display.info('SSH service responded.') break except SubprocessError: if i == max_tries: raise seconds = 3 display.warning('SSH service not responding. Waiting %d second(s) before checking again.' 
% seconds) time.sleep(seconds) start_at_task = args.start_at_task results = {} current_environment = None # type: t.Optional[EnvironmentDescription] # common temporary directory path that will be valid on both the controller and the remote # it must be common because it will be referenced in environment variables that are shared across multiple hosts common_temp_path = '/tmp/ansible-test-%s' % ''.join(random.choice(string.ascii_letters + string.digits) for _idx in range(8)) setup_common_temp_dir(args, common_temp_path) try: for target in targets_iter: if args.start_at and not found: found = target.name == args.start_at if not found: continue if args.list_targets: print(target.name) continue tries = 2 if args.retry_on_error else 1 verbosity = args.verbosity cloud_environment = get_cloud_environment(args, target) original_environment = current_environment if current_environment else EnvironmentDescription(args) current_environment = None display.info('>>> Environment Description\n%s' % original_environment, verbosity=3) try: while tries: tries -= 1 try: if cloud_environment: cloud_environment.setup_once() run_setup_targets(args, test_dir, target.setup_once, all_targets_dict, setup_targets_executed, inventory_path, common_temp_path, False) start_time = time.time() if pre_target: pre_target(target) run_setup_targets(args, test_dir, target.setup_always, all_targets_dict, setup_targets_executed, inventory_path, common_temp_path, True) if not args.explain: # create a fresh test directory for each test target remove_tree(test_dir) make_dirs(test_dir) try: if target.script_path: command_integration_script(args, target, test_dir, inventory_path, common_temp_path, remote_temp_path=remote_temp_path) else: command_integration_role(args, target, start_at_task, test_dir, inventory_path, common_temp_path, remote_temp_path=remote_temp_path) start_at_task = None finally: if post_target: post_target(target) end_time = time.time() results[target.name] = dict( name=target.name, type=target.type, aliases=target.aliases, modules=target.modules, run_time_seconds=int(end_time - start_time), setup_once=target.setup_once, setup_always=target.setup_always, coverage=args.coverage, coverage_label=args.coverage_label, python_version=args.python_version, ) break except SubprocessError: if cloud_environment: cloud_environment.on_failure(target, tries) if not original_environment.validate(target.name, throw=False): raise if not tries: raise display.warning('Retrying test target "%s" with maximum verbosity.' 
% target.name) display.verbosity = args.verbosity = 6 start_time = time.time() current_environment = EnvironmentDescription(args) end_time = time.time() EnvironmentDescription.check(original_environment, current_environment, target.name, throw=True) results[target.name]['validation_seconds'] = int(end_time - start_time) passed.append(target) except Exception as ex: failed.append(target) if args.continue_on_error: display.error(ex) continue display.notice('To resume at this test target, use the option: --start-at %s' % target.name) next_target = next(targets_iter, None) if next_target: display.notice('To resume after this test target, use the option: --start-at %s' % next_target.name) raise finally: display.verbosity = args.verbosity = verbosity finally: if not args.explain: if args.coverage: coverage_temp_path = os.path.join(common_temp_path, ResultType.COVERAGE.name) coverage_save_path = ResultType.COVERAGE.path for filename in os.listdir(coverage_temp_path): shutil.copy(os.path.join(coverage_temp_path, filename), os.path.join(coverage_save_path, filename)) remove_tree(common_temp_path) result_name = '%s-%s.json' % ( args.command, re.sub(r'[^0-9]', '-', str(datetime.datetime.utcnow().replace(microsecond=0)))) data = dict( targets=results, ) write_json_test_results(ResultType.DATA, result_name, data) if failed: raise ApplicationError('The %d integration test(s) listed below (out of %d) failed. See error output above for details:\n%s' % ( len(failed), len(passed) + len(failed), '\n'.join(target.name for target in failed))) def parse_inventory(args, inventory_path): # type: (IntegrationConfig, str) -> t.Dict[str, t.Any] """Return a dict parsed from the given inventory file.""" cmd = ['ansible-inventory', '-i', inventory_path, '--list'] env = ansible_environment(args) inventory = json.loads(intercept_command(args, cmd, '', env, capture=True, disable_coverage=True)[0]) return inventory def get_hosts(inventory, group_name): # type: (t.Dict[str, t.Any], str) -> t.Dict[str, t.Dict[str, t.Any]] """Return a dict of hosts from the specified group in the given inventory.""" hostvars = inventory.get('_meta', {}).get('hostvars', {}) group = inventory.get(group_name, {}) host_names = group.get('hosts', []) hosts = dict((name, hostvars[name]) for name in host_names) return hosts def run_pypi_proxy(args): # type: (EnvironmentConfig) -> t.Tuple[t.Optional[str], t.Optional[str]] """Run a PyPI proxy container, returning the container ID and proxy endpoint.""" use_proxy = False if args.docker_raw == 'centos6': use_proxy = True # python 2.6 is the only version available if args.docker_raw == 'default': if args.python == '2.6': use_proxy = True # python 2.6 requested elif not args.python and isinstance(args, (SanityConfig, UnitsConfig, ShellConfig)): use_proxy = True # multiple versions (including python 2.6) can be used if args.docker_raw and args.pypi_proxy: use_proxy = True # manual override to force proxy usage if not use_proxy: return None, None proxy_image = 'quay.io/ansible/pypi-test-container:1.0.0' port = 3141 options = [ '--detach', '-p', '%d:%d' % (port, port), ] docker_pull(args, proxy_image) container_id = docker_run(args, proxy_image, options=options) container = docker_inspect(args, container_id) container_ip = container.get_ip_address() if not container_ip: raise Exception('PyPI container IP not available.') endpoint = 'http://%s:%d/root/pypi/+simple/' % (container_ip, port) return container_id, endpoint def configure_pypi_proxy(args): # type: (CommonConfig) -> None """Configure the environment 
to use a PyPI proxy, if present.""" if not isinstance(args, EnvironmentConfig): return if args.pypi_endpoint: configure_pypi_block_access() configure_pypi_proxy_pip(args) configure_pypi_proxy_easy_install(args) def configure_pypi_block_access(): # type: () -> None """Block direct access to PyPI to ensure proxy configurations are always used.""" if os.getuid() != 0: display.warning('Skipping custom hosts block for PyPI for non-root user.') return hosts_path = '/etc/hosts' hosts_block = ''' 127.0.0.1 pypi.org pypi.python.org files.pythonhosted.org ''' def hosts_cleanup(): display.info('Removing custom PyPI hosts entries: %s' % hosts_path, verbosity=1) with open(hosts_path) as hosts_file_read: content = hosts_file_read.read() content = content.replace(hosts_block, '') with open(hosts_path, 'w') as hosts_file_write: hosts_file_write.write(content) display.info('Injecting custom PyPI hosts entries: %s' % hosts_path, verbosity=1) display.info('Config: %s\n%s' % (hosts_path, hosts_block), verbosity=3) with open(hosts_path, 'a') as hosts_file: hosts_file.write(hosts_block) atexit.register(hosts_cleanup) def configure_pypi_proxy_pip(args): # type: (EnvironmentConfig) -> None """Configure a custom index for pip based installs.""" pypi_hostname = urlparse(args.pypi_endpoint)[1].split(':')[0] pip_conf_path = os.path.expanduser('~/.pip/pip.conf') pip_conf = ''' [global] index-url = {0} trusted-host = {1} '''.format(args.pypi_endpoint, pypi_hostname).strip() def pip_conf_cleanup(): display.info('Removing custom PyPI config: %s' % pip_conf_path, verbosity=1) os.remove(pip_conf_path) if os.path.exists(pip_conf_path): raise ApplicationError('Refusing to overwrite existing file: %s' % pip_conf_path) display.info('Injecting custom PyPI config: %s' % pip_conf_path, verbosity=1) display.info('Config: %s\n%s' % (pip_conf_path, pip_conf), verbosity=3) write_text_file(pip_conf_path, pip_conf, True) atexit.register(pip_conf_cleanup) def configure_pypi_proxy_easy_install(args): # type: (EnvironmentConfig) -> None """Configure a custom index for easy_install based installs.""" pydistutils_cfg_path = os.path.expanduser('~/.pydistutils.cfg') pydistutils_cfg = ''' [easy_install] index_url = {0} '''.format(args.pypi_endpoint).strip() if os.path.exists(pydistutils_cfg_path): raise ApplicationError('Refusing to overwrite existing file: %s' % pydistutils_cfg_path) def pydistutils_cfg_cleanup(): display.info('Removing custom PyPI config: %s' % pydistutils_cfg_path, verbosity=1) os.remove(pydistutils_cfg_path) display.info('Injecting custom PyPI config: %s' % pydistutils_cfg_path, verbosity=1) display.info('Config: %s\n%s' % (pydistutils_cfg_path, pydistutils_cfg), verbosity=3) write_text_file(pydistutils_cfg_path, pydistutils_cfg, True) atexit.register(pydistutils_cfg_cleanup) def run_setup_targets(args, test_dir, target_names, targets_dict, targets_executed, inventory_path, temp_path, always): """ :type args: IntegrationConfig :type test_dir: str :type target_names: list[str] :type targets_dict: dict[str, IntegrationTarget] :type targets_executed: set[str] :type inventory_path: str :type temp_path: str :type always: bool """ for target_name in target_names: if not always and target_name in targets_executed: continue target = targets_dict[target_name] if not args.explain: # create a fresh test directory for each test target remove_tree(test_dir) make_dirs(test_dir) if target.script_path: command_integration_script(args, target, test_dir, inventory_path, temp_path) else: command_integration_role(args, target, None, test_dir, 
inventory_path, temp_path) targets_executed.add(target_name) def integration_environment(args, target, test_dir, inventory_path, ansible_config, env_config): """ :type args: IntegrationConfig :type target: IntegrationTarget :type test_dir: str :type inventory_path: str :type ansible_config: str | None :type env_config: CloudEnvironmentConfig | None :rtype: dict[str, str] """ env = ansible_environment(args, ansible_config=ansible_config) callback_plugins = ['junit'] + (env_config.callback_plugins or [] if env_config else []) integration = dict( JUNIT_OUTPUT_DIR=ResultType.JUNIT.path, ANSIBLE_CALLBACKS_ENABLED=','.join(sorted(set(callback_plugins))), ANSIBLE_TEST_CI=args.metadata.ci_provider or get_ci_provider().code, ANSIBLE_TEST_COVERAGE='check' if args.coverage_check else ('yes' if args.coverage else ''), OUTPUT_DIR=test_dir, INVENTORY_PATH=os.path.abspath(inventory_path), ) if args.debug_strategy: env.update(dict(ANSIBLE_STRATEGY='debug')) if 'non_local/' in target.aliases: if args.coverage: display.warning('Skipping coverage reporting on Ansible modules for non-local test: %s' % target.name) env.update(dict(ANSIBLE_TEST_REMOTE_INTERPRETER='')) env.update(integration) return env def command_integration_script(args, target, test_dir, inventory_path, temp_path, remote_temp_path=None): """ :type args: IntegrationConfig :type target: IntegrationTarget :type test_dir: str :type inventory_path: str :type temp_path: str :type remote_temp_path: str | None """ display.info('Running %s integration test script' % target.name) env_config = None if isinstance(args, PosixIntegrationConfig): cloud_environment = get_cloud_environment(args, target) if cloud_environment: env_config = cloud_environment.get_environment_config() if env_config: display.info('>>> Environment Config\n%s' % json.dumps(dict( env_vars=env_config.env_vars, ansible_vars=env_config.ansible_vars, callback_plugins=env_config.callback_plugins, module_defaults=env_config.module_defaults, ), indent=4, sort_keys=True), verbosity=3) with integration_test_environment(args, target, inventory_path) as test_env: cmd = ['./%s' % os.path.basename(target.script_path)] if args.verbosity: cmd.append('-' + ('v' * args.verbosity)) env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config) cwd = os.path.join(test_env.targets_dir, target.relative_path) env.update(dict( # support use of adhoc ansible commands in collections without specifying the fully qualified collection name ANSIBLE_PLAYBOOK_DIR=cwd, )) if env_config and env_config.env_vars: env.update(env_config.env_vars) with integration_test_config_file(args, env_config, test_env.integration_dir) as config_path: if config_path: cmd += ['-e', '@%s' % config_path] module_coverage = 'non_local/' not in target.aliases intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd, temp_path=temp_path, remote_temp_path=remote_temp_path, module_coverage=module_coverage) def command_integration_role(args, target, start_at_task, test_dir, inventory_path, temp_path, remote_temp_path=None): """ :type args: IntegrationConfig :type target: IntegrationTarget :type start_at_task: str | None :type test_dir: str :type inventory_path: str :type temp_path: str :type remote_temp_path: str | None """ display.info('Running %s integration test role' % target.name) env_config = None vars_files = [] variables = dict( output_dir=test_dir, ) if isinstance(args, WindowsIntegrationConfig): hosts = 'windows' gather_facts = False variables.update(dict( 
win_output_dir=r'C:\ansible_testing', )) elif isinstance(args, NetworkIntegrationConfig): hosts = target.network_platform gather_facts = False else: hosts = 'testhost' gather_facts = True if not isinstance(args, NetworkIntegrationConfig): cloud_environment = get_cloud_environment(args, target) if cloud_environment: env_config = cloud_environment.get_environment_config() if env_config: display.info('>>> Environment Config\n%s' % json.dumps(dict( env_vars=env_config.env_vars, ansible_vars=env_config.ansible_vars, callback_plugins=env_config.callback_plugins, module_defaults=env_config.module_defaults, ), indent=4, sort_keys=True), verbosity=3) with integration_test_environment(args, target, inventory_path) as test_env: if os.path.exists(test_env.vars_file): vars_files.append(os.path.relpath(test_env.vars_file, test_env.integration_dir)) play = dict( hosts=hosts, gather_facts=gather_facts, vars_files=vars_files, vars=variables, roles=[ target.name, ], ) if env_config: if env_config.ansible_vars: variables.update(env_config.ansible_vars) play.update(dict( environment=env_config.env_vars, module_defaults=env_config.module_defaults, )) playbook = json.dumps([play], indent=4, sort_keys=True) with named_temporary_file(args=args, directory=test_env.integration_dir, prefix='%s-' % target.name, suffix='.yml', content=playbook) as playbook_path: filename = os.path.basename(playbook_path) display.info('>>> Playbook: %s\n%s' % (filename, playbook.strip()), verbosity=3) cmd = ['ansible-playbook', filename, '-i', os.path.relpath(test_env.inventory_path, test_env.integration_dir)] if start_at_task: cmd += ['--start-at-task', start_at_task] if args.tags: cmd += ['--tags', args.tags] if args.skip_tags: cmd += ['--skip-tags', args.skip_tags] if args.diff: cmd += ['--diff'] if isinstance(args, NetworkIntegrationConfig): if args.testcase: cmd += ['-e', 'testcase=%s' % args.testcase] if args.verbosity: cmd.append('-' + ('v' * args.verbosity)) env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config) cwd = test_env.integration_dir env.update(dict( # support use of adhoc ansible commands in collections without specifying the fully qualified collection name ANSIBLE_PLAYBOOK_DIR=cwd, )) if env_config and env_config.env_vars: env.update(env_config.env_vars) env['ANSIBLE_ROLES_PATH'] = test_env.targets_dir module_coverage = 'non_local/' not in target.aliases intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd, temp_path=temp_path, remote_temp_path=remote_temp_path, module_coverage=module_coverage) def get_changes_filter(args): """ :type args: TestConfig :rtype: list[str] """ paths = detect_changes(args) if not args.metadata.change_description: if paths: changes = categorize_changes(args, paths, args.command) else: changes = ChangeDescription() args.metadata.change_description = changes if paths is None: return [] # change detection not enabled, do not filter targets if not paths: raise NoChangesDetected() if args.metadata.change_description.targets is None: raise NoTestsForChanges() return args.metadata.change_description.targets def detect_changes(args): """ :type args: TestConfig :rtype: list[str] | None """ if args.changed: paths = get_ci_provider().detect_changes(args) elif args.changed_from or args.changed_path: paths = args.changed_path or [] if args.changed_from: paths += read_text_file(args.changed_from).splitlines() else: return None # change detection not enabled if paths is None: return None # act as though change detection not 
enabled, do not filter targets display.info('Detected changes in %d file(s).' % len(paths)) for path in paths: display.info(path, verbosity=1) return paths def get_integration_filter(args, targets): """ :type args: IntegrationConfig :type targets: tuple[IntegrationTarget] :rtype: list[str] """ if args.docker: return get_integration_docker_filter(args, targets) if args.remote: return get_integration_remote_filter(args, targets) return get_integration_local_filter(args, targets) def common_integration_filter(args, targets, exclude): """ :type args: IntegrationConfig :type targets: tuple[IntegrationTarget] :type exclude: list[str] """ override_disabled = set(target for target in args.include if target.startswith('disabled/')) if not args.allow_disabled: skip = 'disabled/' override = [target.name for target in targets if override_disabled & set(target.aliases)] skipped = [target.name for target in targets if skip in target.aliases and target.name not in override] if skipped: exclude.extend(skipped) display.warning('Excluding tests marked "%s" which require --allow-disabled or prefixing with "disabled/": %s' % (skip.rstrip('/'), ', '.join(skipped))) override_unsupported = set(target for target in args.include if target.startswith('unsupported/')) if not args.allow_unsupported: skip = 'unsupported/' override = [target.name for target in targets if override_unsupported & set(target.aliases)] skipped = [target.name for target in targets if skip in target.aliases and target.name not in override] if skipped: exclude.extend(skipped) display.warning('Excluding tests marked "%s" which require --allow-unsupported or prefixing with "unsupported/": %s' % (skip.rstrip('/'), ', '.join(skipped))) override_unstable = set(target for target in args.include if target.startswith('unstable/')) if args.allow_unstable_changed: override_unstable |= set(args.metadata.change_description.focused_targets or []) if not args.allow_unstable: skip = 'unstable/' override = [target.name for target in targets if override_unstable & set(target.aliases)] skipped = [target.name for target in targets if skip in target.aliases and target.name not in override] if skipped: exclude.extend(skipped) display.warning('Excluding tests marked "%s" which require --allow-unstable or prefixing with "unstable/": %s' % (skip.rstrip('/'), ', '.join(skipped))) # only skip a Windows test if using --windows and all the --windows versions are defined in the aliases as skip/windows/%s if isinstance(args, WindowsIntegrationConfig) and args.windows: all_skipped = [] not_skipped = [] for target in targets: if "skip/windows/" not in target.aliases: continue skip_valid = [] skip_missing = [] for version in args.windows: if "skip/windows/%s/" % version in target.aliases: skip_valid.append(version) else: skip_missing.append(version) if skip_missing and skip_valid: not_skipped.append((target.name, skip_valid, skip_missing)) elif skip_valid: all_skipped.append(target.name) if all_skipped: exclude.extend(all_skipped) skip_aliases = ["skip/windows/%s/" % w for w in args.windows] display.warning('Excluding tests marked "%s" which are set to skip with --windows %s: %s' % ('", "'.join(skip_aliases), ', '.join(args.windows), ', '.join(all_skipped))) if not_skipped: for target, skip_valid, skip_missing in not_skipped: # warn when failing to skip due to lack of support for skipping only some versions display.warning('Including test "%s" which was marked to skip for --windows %s but not %s.' 
% (target, ', '.join(skip_valid), ', '.join(skip_missing))) def get_integration_local_filter(args, targets): """ :type args: IntegrationConfig :type targets: tuple[IntegrationTarget] :rtype: list[str] """ exclude = [] common_integration_filter(args, targets, exclude) if not args.allow_root and os.getuid() != 0: skip = 'needs/root/' skipped = [target.name for target in targets if skip in target.aliases] if skipped: exclude.append(skip) display.warning('Excluding tests marked "%s" which require --allow-root or running as root: %s' % (skip.rstrip('/'), ', '.join(skipped))) override_destructive = set(target for target in args.include if target.startswith('destructive/')) if not args.allow_destructive: skip = 'destructive/' override = [target.name for target in targets if override_destructive & set(target.aliases)] skipped = [target.name for target in targets if skip in target.aliases and target.name not in override] if skipped: exclude.extend(skipped) display.warning('Excluding tests marked "%s" which require --allow-destructive or prefixing with "destructive/" to run locally: %s' % (skip.rstrip('/'), ', '.join(skipped))) exclude_targets_by_python_version(targets, args.python_version, exclude) return exclude def get_integration_docker_filter(args, targets): """ :type args: IntegrationConfig :type targets: tuple[IntegrationTarget] :rtype: list[str] """ exclude = [] common_integration_filter(args, targets, exclude) skip = 'skip/docker/' skipped = [target.name for target in targets if skip in target.aliases] if skipped: exclude.append(skip) display.warning('Excluding tests marked "%s" which cannot run under docker: %s' % (skip.rstrip('/'), ', '.join(skipped))) if not args.docker_privileged: skip = 'needs/privileged/' skipped = [target.name for target in targets if skip in target.aliases] if skipped: exclude.append(skip) display.warning('Excluding tests marked "%s" which require --docker-privileged to run under docker: %s' % (skip.rstrip('/'), ', '.join(skipped))) python_version = get_python_version(args, get_docker_completion(), args.docker_raw) exclude_targets_by_python_version(targets, python_version, exclude) return exclude def get_integration_remote_filter(args, targets): """ :type args: IntegrationConfig :type targets: tuple[IntegrationTarget] :rtype: list[str] """ remote = args.parsed_remote exclude = [] common_integration_filter(args, targets, exclude) skips = { 'skip/%s' % remote.platform: remote.platform, 'skip/%s/%s' % (remote.platform, remote.version): '%s %s' % (remote.platform, remote.version), 'skip/%s%s' % (remote.platform, remote.version): '%s %s' % (remote.platform, remote.version), # legacy syntax, use above format } if remote.arch: skips.update({ 'skip/%s/%s' % (remote.arch, remote.platform): '%s on %s' % (remote.platform, remote.arch), 'skip/%s/%s/%s' % (remote.arch, remote.platform, remote.version): '%s %s on %s' % (remote.platform, remote.version, remote.arch), }) for skip, description in skips.items(): skipped = [target.name for target in targets if skip in target.skips] if skipped: exclude.append(skip + '/') display.warning('Excluding tests marked "%s" which are not supported on %s: %s' % (skip, description, ', '.join(skipped))) python_version = get_python_version(args, get_remote_completion(), args.remote) exclude_targets_by_python_version(targets, python_version, exclude) return exclude def exclude_targets_by_python_version(targets, python_version, exclude): """ :type targets: tuple[IntegrationTarget] :type python_version: str :type exclude: list[str] """ if not 
python_version: display.warning('Python version unknown. Unable to skip tests based on Python version.') return python_major_version = python_version.split('.')[0] skip = 'skip/python%s/' % python_version skipped = [target.name for target in targets if skip in target.aliases] if skipped: exclude.append(skip) display.warning('Excluding tests marked "%s" which are not supported on python %s: %s' % (skip.rstrip('/'), python_version, ', '.join(skipped))) skip = 'skip/python%s/' % python_major_version skipped = [target.name for target in targets if skip in target.aliases] if skipped: exclude.append(skip) display.warning('Excluding tests marked "%s" which are not supported on python %s: %s' % (skip.rstrip('/'), python_version, ', '.join(skipped))) def get_python_version(args, configs, name): """ :type args: EnvironmentConfig :type configs: dict[str, dict[str, str]] :type name: str """ config = configs.get(name, {}) config_python = config.get('python') if not config or not config_python: if args.python: return args.python display.warning('No Python version specified. ' 'Use completion config or the --python option to specify one.', unique=True) return '' # failure to provide a version may result in failures or reduced functionality later supported_python_versions = config_python.split(',') default_python_version = supported_python_versions[0] if args.python and args.python not in supported_python_versions: raise ApplicationError('Python %s is not supported by %s. Supported Python version(s) are: %s' % ( args.python, name, ', '.join(sorted(supported_python_versions)))) python_version = args.python or default_python_version return python_version def get_python_interpreter(args, configs, name): """ :type args: EnvironmentConfig :type configs: dict[str, dict[str, str]] :type name: str """ if args.python_interpreter: return args.python_interpreter config = configs.get(name, {}) if not config: if args.python: guess = 'python%s' % args.python else: guess = 'python' display.warning('Using "%s" as the Python interpreter. ' 'Use completion config or the --python-interpreter option to specify the path.' % guess, unique=True) return guess python_version = get_python_version(args, configs, name) python_dir = config.get('python_dir', '/usr/bin') python_interpreter = os.path.join(python_dir, 'python%s' % python_version) python_interpreter = config.get('python%s' % python_version, python_interpreter) return python_interpreter class EnvironmentDescription: """Description of current running environment.""" def __init__(self, args): """Initialize snapshot of environment configuration. 
:type args: IntegrationConfig """ self.args = args if self.args.explain: self.data = {} return warnings = [] versions = [''] versions += SUPPORTED_PYTHON_VERSIONS versions += list(set(v.split('.')[0] for v in SUPPORTED_PYTHON_VERSIONS)) version_check = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'versions.py') python_paths = dict((v, find_executable('python%s' % v, required=False)) for v in sorted(versions)) pip_paths = dict((v, find_executable('pip%s' % v, required=False)) for v in sorted(versions)) program_versions = dict((v, self.get_version([python_paths[v], version_check], warnings)) for v in sorted(python_paths) if python_paths[v]) pip_interpreters = dict((v, self.get_shebang(pip_paths[v])) for v in sorted(pip_paths) if pip_paths[v]) known_hosts_hash = get_hash(os.path.expanduser('~/.ssh/known_hosts')) for version in sorted(versions): self.check_python_pip_association(version, python_paths, pip_paths, pip_interpreters, warnings) for warning in warnings: display.warning(warning, unique=True) self.data = dict( python_paths=python_paths, pip_paths=pip_paths, program_versions=program_versions, pip_interpreters=pip_interpreters, known_hosts_hash=known_hosts_hash, warnings=warnings, ) @staticmethod def check_python_pip_association(version, python_paths, pip_paths, pip_interpreters, warnings): """ :type version: str :param python_paths: dict[str, str] :param pip_paths: dict[str, str] :param pip_interpreters: dict[str, str] :param warnings: list[str] """ python_label = 'Python%s' % (' %s' % version if version else '') pip_path = pip_paths.get(version) python_path = python_paths.get(version) if not python_path or not pip_path: # skip checks when either python or pip are missing for this version return pip_shebang = pip_interpreters.get(version) match = re.search(r'#!\s*(?P<command>[^\s]+)', pip_shebang) if not match: warnings.append('A %s pip was found at "%s", but it does not have a valid shebang: %s' % (python_label, pip_path, pip_shebang)) return pip_interpreter = os.path.realpath(match.group('command')) python_interpreter = os.path.realpath(python_path) if pip_interpreter == python_interpreter: return try: identical = filecmp.cmp(pip_interpreter, python_interpreter) except OSError: identical = False if identical: return warnings.append('A %s pip was found at "%s", but it uses interpreter "%s" instead of "%s".' 
% ( python_label, pip_path, pip_interpreter, python_interpreter)) def __str__(self): """ :rtype: str """ return json.dumps(self.data, sort_keys=True, indent=4) def validate(self, target_name, throw): """ :type target_name: str :type throw: bool :rtype: bool """ current = EnvironmentDescription(self.args) return self.check(self, current, target_name, throw) @staticmethod def check(original, current, target_name, throw): """ :type original: EnvironmentDescription :type current: EnvironmentDescription :type target_name: str :type throw: bool :rtype: bool """ original_json = str(original) current_json = str(current) if original_json == current_json: return True unified_diff = '\n'.join(difflib.unified_diff( a=original_json.splitlines(), b=current_json.splitlines(), fromfile='original.json', tofile='current.json', lineterm='', )) message = ('Test target "%s" has changed the test environment!\n' 'If these changes are necessary, they must be reverted before the test finishes.\n' '>>> Original Environment\n' '%s\n' '>>> Current Environment\n' '%s\n' '>>> Environment Diff\n' '%s' % (target_name, original_json, current_json, unified_diff)) if throw: raise ApplicationError(message) display.error(message) return False @staticmethod def get_version(command, warnings): """ :type command: list[str] :type warnings: list[text] :rtype: list[str] """ try: stdout, stderr = raw_command(command, capture=True, cmd_verbosity=2) except SubprocessError as ex: warnings.append(u'%s' % ex) return None # all failures are equal, we don't care why it failed, only that it did return [line.strip() for line in ((stdout or '').strip() + (stderr or '').strip()).splitlines()] @staticmethod def get_shebang(path): """ :type path: str :rtype: str """ with open_text_file(path) as script_fd: return script_fd.readline().strip() class NoChangesDetected(ApplicationWarning): """Exception when change detection was performed, but no changes were found.""" def __init__(self): super(NoChangesDetected, self).__init__('No changes detected.') class NoTestsForChanges(ApplicationWarning): """Exception when changes detected, but no tests trigger as a result.""" def __init__(self): super(NoTestsForChanges, self).__init__('No tests found for detected changes.') class Delegate(Exception): """Trigger command delegation.""" def __init__(self, exclude=None, require=None): """ :type exclude: list[str] | None :type require: list[str] | None """ super(Delegate, self).__init__() self.exclude = exclude or [] self.require = require or [] class AllTargetsSkipped(ApplicationWarning): """All targets skipped.""" def __init__(self): super(AllTargetsSkipped, self).__init__('All targets skipped.')
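The version gating in get_cryptography_requirement above reduces to a small decision table. A condensed, standalone sketch of that selection logic, with the detected versions passed in directly and the function name invented for illustration:

```python
def pick_cryptography_pin(python_version, setuptools_version, openssl_version):
    """Mirror the gating rules from get_cryptography_requirement above."""
    if setuptools_version < (18, 5):
        return 'cryptography < 2.1'  # cryptography 2.1+ requires setuptools 18.5+
    if python_version == '2.6':
        return 'cryptography < 2.2'  # cryptography 2.2+ requires python 2.7+
    if openssl_version and openssl_version < (1, 1, 0):
        return 'cryptography < 3.2'  # cryptography 3.2 requires openssl 1.1.x or later
    return 'cryptography < 3.4'  # 3.4+ fails to install on many systems (temporary cap)


# Example: an environment with Python 2.7, setuptools 44 and OpenSSL 1.0.2.
print(pick_cryptography_pin('2.7', (44, 0, 0), (1, 0, 2)))  # -> cryptography < 3.2
```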
closed
ansible/ansible
https://github.com/ansible/ansible
61185
htpasswd module does not properly handle creating files in check mode
##### SUMMARY When in check mode, the check_file_attrs function is still called. This calls `module.set_fs_attributes_if_different,` which eventually raises an exception in `module.set_mode_if_different` if the file doesn't exist. This situation can occur when running in check mode when the destination file does not yet exist, even if `create` is set to `yes`. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME htpasswd ##### ANSIBLE VERSION ```paste below ansible 2.8.0 config file = /Users/matthieu/dev/ansible/ansible.cfg configured module search path = ['/Users/matthieu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/matthieu/.pyenv/versions/3.6.3/envs/ansible/lib/python3.6/site-packages/ansible executable location = /Users/matthieu/.pyenv/versions/ansible/bin/ansible python version = 3.6.3 (default, Oct 9 2017, 18:08:57) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)] ``` ##### CONFIGURATION ```paste below ANSIBLE_PIPELINING(/Users/matthieu/dev/ansible/ansible.cfg) = True DEFAULT_LOG_PATH(/Users/matthieu/dev/ansible/ansible.cfg) = /Users/matthieu/dev/ansible/ansible.log DEFAULT_REMOTE_USER(/Users/matthieu/dev/ansible/ansible.cfg) = root ``` ##### OS / ENVIRONMENT From Mac to Linux 16.04 ##### STEPS TO REPRODUCE Run a playbook in check mode that contains this task: ```yaml - name: Set up basic auth credentials file htpasswd: create: yes crypt_scheme: apr_md5_crypt name: "testuser" password: "testpass" path: "/tmp/thisfiledoesnotexist.htpasswd" state: present ``` Please note a very similar bug was already fixed in v2.4 https://github.com/ansible/ansible/issues/32676 It may be worth to write a test case for this scenario... ##### EXPECTED RESULTS The check run should complete successfully with a note that the file will be created. ##### ACTUAL RESULTS I get the following error message: `[Errno 2] No such file or directory: '/tmp/thisfiledoesnotexist.htpasswd'` This is what I get when I run ansible-playbook with -vvv: ``` The full traceback is: WARNING: The below traceback may *not* be related to the actual failure. File "/tmp/ansible_htpasswd_payload_b44M9M/__main__.py", line 268, in main check_file_attrs(module, changed, msg) File "/tmp/ansible_htpasswd_payload_b44M9M/__main__.py", line 192, in check_file_attrs if module.set_fs_attributes_if_different(file_args, False): File "/tmp/ansible_htpasswd_payload_b44M9M/ansible_htpasswd_payload.zip/ansible/module_utils/basic.py", line 1339, in set_fs_attributes_if_different file_args['path'], file_args['mode'], changed, diff, expand File "/tmp/ansible_htpasswd_payload_b44M9M/ansible_htpasswd_payload.zip/ansible/module_utils/basic.py", line 1063, in set_mode_if_different path_stat = os.lstat(b_path) ```
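Based on the traceback, the crash comes from `set_mode_if_different` calling `os.lstat()` before its check-mode guard runs, so a file that only a real run would create breaks the dry run. Below is a minimal sketch of the reordering the linked PR presumably applies; it is a simplified, hypothetical standalone stand-in for the real method, which additionally handles symlinks, diff output, and symbolic modes:

```python
import os
import stat


def set_mode_if_different(module, path, mode, changed):
    """Simplified, hypothetical stand-in for AnsibleModule.set_mode_if_different."""
    if mode is None:
        return changed

    # The check-mode guard must run before any stat call: if the file does
    # not exist yet, a dry run should report "would be changed" instead of
    # raising "[Errno 2] No such file or directory".
    if module.check_mode and not os.path.exists(path):
        return True

    path_stat = os.lstat(path)  # safe now: the path exists, or this is a real run
    prev_mode = stat.S_IMODE(path_stat.st_mode)
    if prev_mode != mode:
        if module.check_mode:
            return True
        os.chmod(path, mode)
        return True
    return changed
```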
https://github.com/ansible/ansible/issues/61185
https://github.com/ansible/ansible/pull/64279
b043afa025063fb452c8e01736c919cd2e7ef410
7099657dd7279ef2989d601251f46e7407a86fa6
2019-08-22T16:40:49Z
python
2021-04-28T08:17:03Z
changelogs/fragments/61185-basic.py-fix-check_mode.yaml
lib/ansible/module_utils/basic.py
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013 # Copyright (c), Toshio Kuratomi <[email protected]> 2016 # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause) from __future__ import absolute_import, division, print_function __metaclass__ = type FILE_ATTRIBUTES = { 'A': 'noatime', 'a': 'append', 'c': 'compressed', 'C': 'nocow', 'd': 'nodump', 'D': 'dirsync', 'e': 'extents', 'E': 'encrypted', 'h': 'blocksize', 'i': 'immutable', 'I': 'indexed', 'j': 'journalled', 'N': 'inline', 's': 'zero', 'S': 'synchronous', 't': 'notail', 'T': 'blockroot', 'u': 'undelete', 'X': 'compressedraw', 'Z': 'compresseddirty', } # Ansible modules can be written in any language. # The functions available here can be used to do many common tasks, # to simplify development of Python modules. import __main__ import atexit import errno import datetime import grp import fcntl import locale import os import pwd import platform import re import select import shlex import shutil import signal import stat import subprocess import sys import tempfile import time import traceback import types from itertools import chain, repeat try: import syslog HAS_SYSLOG = True except ImportError: HAS_SYSLOG = False try: from systemd import journal # Makes sure that systemd.journal has method sendv() # Double check that journal has method sendv (some packages don't) has_journal = hasattr(journal, 'sendv') except ImportError: has_journal = False HAVE_SELINUX = False try: import ansible.module_utils.compat.selinux as selinux HAVE_SELINUX = True except ImportError: pass # Python2 & 3 way to get NoneType NoneType = type(None) from ansible.module_utils.compat import selectors from ._text import to_native, to_bytes, to_text from ansible.module_utils.common.text.converters import ( jsonify, container_to_bytes as json_dict_unicode_to_bytes, container_to_text as json_dict_bytes_to_unicode, ) from ansible.module_utils.common.arg_spec import ModuleArgumentSpecValidator from ansible.module_utils.common.text.formatters import ( lenient_lowercase, bytes_to_human, human_to_bytes, SIZE_RANGES, ) try: from ansible.module_utils.common._json_compat import json except ImportError as e: print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e))) sys.exit(1) AVAILABLE_HASH_ALGORITHMS = dict() try: import hashlib # python 2.7.9+ and 2.7.0+ for attribute in ('available_algorithms', 'algorithms'): algorithms = getattr(hashlib, attribute, None) if algorithms: break if algorithms is None: # python 2.5+ algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') for algorithm in algorithms: AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm) # we may have been able to import md5 but it could still not be available try: hashlib.md5() except ValueError: AVAILABLE_HASH_ALGORITHMS.pop('md5', None) except Exception: import sha AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha} try: import md5 AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5 except Exception: pass from ansible.module_utils.common._collections_compat import ( KeysView, Mapping, MutableMapping, Sequence, MutableSequence, Set, MutableSet, ) from ansible.module_utils.common.process import get_bin_path from ansible.module_utils.common.file import ( _PERM_BITS as PERM_BITS, _EXEC_PERM_BITS as EXEC_PERM_BITS, _DEFAULT_PERM as DEFAULT_PERM, is_executable, format_attributes, get_flags_from_attributes, ) from ansible.module_utils.common.sys_info import ( get_distribution, 
get_distribution_version, get_platform_subclass, ) from ansible.module_utils.pycompat24 import get_exception, literal_eval from ansible.module_utils.common.parameters import ( env_fallback, remove_values, sanitize_keys, DEFAULT_TYPE_VALIDATORS, PASS_VARS, PASS_BOOLS, ) from ansible.module_utils.errors import AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, UnsupportedError from ansible.module_utils.six import ( PY2, PY3, b, binary_type, integer_types, iteritems, string_types, text_type, ) from ansible.module_utils.six.moves import map, reduce, shlex_quote from ansible.module_utils.common.validation import ( check_missing_parameters, safe_eval, ) from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean from ansible.module_utils.common.warnings import ( deprecate, get_deprecation_messages, get_warning_messages, warn, ) # Note: When getting Sequence from collections, it matches with strings. If # this matters, make sure to check for strings before checking for sequencetype SEQUENCETYPE = frozenset, KeysView, Sequence PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I) imap = map try: # Python 2 unicode except NameError: # Python 3 unicode = text_type try: # Python 2 basestring except NameError: # Python 3 basestring = string_types _literal_eval = literal_eval # End of deprecated names # Internal global holding passed in params. This is consulted in case # multiple AnsibleModules are created. Otherwise each AnsibleModule would # attempt to read from stdin. Other code should not use this directly as it # is an internal implementation detail _ANSIBLE_ARGS = None FILE_COMMON_ARGUMENTS = dict( # These are things we want. About setting metadata (mode, ownership, permissions in general) on # created files (these are used by set_fs_attributes_if_different and included in # load_file_common_arguments) mode=dict(type='raw'), owner=dict(type='str'), group=dict(type='str'), seuser=dict(type='str'), serole=dict(type='str'), selevel=dict(type='str'), setype=dict(type='str'), attributes=dict(type='str', aliases=['attr']), unsafe_writes=dict(type='bool', default=False, fallback=(env_fallback, ['ANSIBLE_UNSAFE_WRITES'])), # should be available to any module using atomic_move ) PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?') # Used for parsing symbolic file perms MODE_OPERATOR_RE = re.compile(r'[+=-]') USERS_RE = re.compile(r'[^ugo]') PERMS_RE = re.compile(r'[^rwxXstugo]') # Used for determining if the system is running a new enough python version # and should only restrict on our documented minimum versions _PY3_MIN = sys.version_info[:2] >= (3, 5) _PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,) _PY26 = (2, 6) == sys.version_info[:2] _PY_MIN = _PY3_MIN or _PY2_MIN if not _PY_MIN: print( '\n{"failed": true, ' '"msg": "ansible-core requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines()) ) sys.exit(1) if _PY26: deprecate( 'ansible-core 2.13 will require Python 2.7 or newer on the target. ' 'Current version: %s' % ''.join(sys.version.splitlines()), version='2.13', ) # # Deprecated functions # def get_platform(): ''' **Deprecated** Use :py:func:`platform.system` directly. :returns: Name of the platform the module is running on in a native string Returns a native string that labels the platform ("Linux", "Solaris", etc). 
Currently, this is the result of calling :py:func:`platform.system`. ''' return platform.system() # End deprecated functions # # Compat shims # def load_platform_subclass(cls, *args, **kwargs): """**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead""" platform_cls = get_platform_subclass(cls) return super(cls, platform_cls).__new__(platform_cls) def get_all_subclasses(cls): """**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead""" return list(_get_all_subclasses(cls)) # End compat shims def heuristic_log_sanitize(data, no_log_values=None): ''' Remove strings that look like passwords from log messages ''' # Currently filters: # user:pass@foo/whatever and http://username:pass@wherever/foo # This code has false positives and consumes parts of logs that are # not passwds # begin: start of a passwd containing string # end: end of a passwd containing string # sep: char between user and passwd # prev_begin: where in the overall string to start a search for # a passwd # sep_search_end: where in the string to end a search for the sep data = to_native(data) output = [] begin = len(data) prev_begin = begin sep = 1 while sep: # Find the potential end of a passwd try: end = data.rindex('@', 0, begin) except ValueError: # No passwd in the rest of the data output.insert(0, data[0:begin]) break # Search for the beginning of a passwd sep = None sep_search_end = end while not sep: # URL-style username+password try: begin = data.rindex('://', 0, sep_search_end) except ValueError: # No url style in the data, check for ssh style in the # rest of the string begin = 0 # Search for separator try: sep = data.index(':', begin + 3, end) except ValueError: # No separator; choices: if begin == 0: # Searched the whole string so there's no password # here. Return the remaining data output.insert(0, data[0:begin]) break # Search for a different beginning of the password field. sep_search_end = begin continue if sep: # Password was found; remove it. output.insert(0, data[end:prev_begin]) output.insert(0, '********') output.insert(0, data[begin:sep + 1]) prev_begin = begin output = ''.join(output) if no_log_values: output = remove_values(output, no_log_values) return output def _load_params(): ''' read the modules parameters and store them globally. This function may be needed for certain very dynamic custom modules which want to process the parameters that are being handed the module. Since this is so closely tied to the implementation of modules we cannot guarantee API stability for it (it may change between versions) however we will try not to break it gratuitously. It is certainly more future-proof to call this function and consume its outputs than to implement the logic inside it as a copy in your own code. ''' global _ANSIBLE_ARGS if _ANSIBLE_ARGS is not None: buffer = _ANSIBLE_ARGS else: # debug overrides to read args from file or cmdline # Avoid tracebacks when locale is non-utf8 # We control the args and we pass them as utf8 if len(sys.argv) > 1: if os.path.isfile(sys.argv[1]): fd = open(sys.argv[1], 'rb') buffer = fd.read() fd.close() else: buffer = sys.argv[1] if PY3: buffer = buffer.encode('utf-8', errors='surrogateescape') # default case, read from stdin else: if PY2: buffer = sys.stdin.read() else: buffer = sys.stdin.buffer.read() _ANSIBLE_ARGS = buffer try: params = json.loads(buffer.decode('utf-8')) except ValueError: # This helper used too early for fail_json to work. print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. 
Unable to figure out what parameters were passed", "failed": true}') sys.exit(1) if PY2: params = json_dict_unicode_to_bytes(params) try: return params['ANSIBLE_MODULE_ARGS'] except KeyError: # This helper does not have access to fail_json so we have to print # json output on our own. print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", ' '"failed": true}') sys.exit(1) def missing_required_lib(library, reason=None, url=None): hostname = platform.node() msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable) if reason: msg += " This is required %s." % reason if url: msg += " See %s for more info." % url msg += (" Please read the module documentation and install it in the appropriate location." " If the required library is installed, but Ansible is using the wrong Python interpreter," " please consult the documentation on ansible_python_interpreter") return msg class AnsibleModule(object): def __init__(self, argument_spec, bypass_checks=False, no_log=False, mutually_exclusive=None, required_together=None, required_one_of=None, add_file_common_args=False, supports_check_mode=False, required_if=None, required_by=None): ''' Common code for quickly building an ansible module in Python (although you can write modules with anything that can return JSON). See :ref:`developing_modules_general` for a general introduction and :ref:`developing_program_flow_modules` for more detailed explanation. ''' self._name = os.path.basename(__file__) # initialize name until we can parse from options self.argument_spec = argument_spec self.supports_check_mode = supports_check_mode self.check_mode = False self.bypass_checks = bypass_checks self.no_log = no_log self.mutually_exclusive = mutually_exclusive self.required_together = required_together self.required_one_of = required_one_of self.required_if = required_if self.required_by = required_by self.cleanup_files = [] self._debug = False self._diff = False self._socket_path = None self._shell = None self._syslog_facility = 'LOG_USER' self._verbosity = 0 # May be used to set modifications to the environment for any # run_command invocation self.run_command_environ_update = {} self._clean = {} self._string_conversion_action = '' self.aliases = {} self._legal_inputs = [] self._options_context = list() self._tmpdir = None if add_file_common_args: for k, v in FILE_COMMON_ARGUMENTS.items(): if k not in self.argument_spec: self.argument_spec[k] = v # Save parameter values that should never be logged self.no_log_values = set() # check the locale as set by the current environment, and reset to # a known valid (LANG=C) if it's an invalid/unavailable locale self._check_locale() self._load_params() self._set_internal_properties() self.validator = ModuleArgumentSpecValidator(self.argument_spec, self.mutually_exclusive, self.required_together, self.required_one_of, self.required_if, self.required_by, ) self.validation_result = self.validator.validate(self.params) self.params.update(self.validation_result.validated_parameters) self.no_log_values.update(self.validation_result._no_log_values) try: error = self.validation_result.errors[0] except IndexError: error = None # Fail for validation errors, even in check mode if error: msg = self.validation_result.errors.msg if isinstance(error, UnsupportedError): msg = "Unsupported parameters for ({name}) {kind}: {msg}".format(name=self._name, kind='module', msg=msg) self.fail_json(msg=msg) 
if self.check_mode and not self.supports_check_mode: self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name) # This is for backwards compatibility only. self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS if not self.no_log: self._log_invocation() # selinux state caching self._selinux_enabled = None self._selinux_mls_enabled = None self._selinux_initial_context = None # finally, make sure we're in a sane working dir self._set_cwd() @property def tmpdir(self): # if _ansible_tmpdir was not set and we have a remote_tmp, # the module needs to create it and clean it up once finished. # otherwise we create our own module tmp dir from the system defaults if self._tmpdir is None: basedir = None if self._remote_tmp is not None: basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp)) if basedir is not None and not os.path.exists(basedir): try: os.makedirs(basedir, mode=0o700) except (OSError, IOError) as e: self.warn("Unable to use %s as temporary directory, " "failing back to system: %s" % (basedir, to_native(e))) basedir = None else: self.warn("Module remote_tmp %s did not exist and was " "created with a mode of 0700, this may cause" " issues when running as another user. To " "avoid this, create the remote_tmp dir with " "the correct permissions manually" % basedir) basefile = "ansible-moduletmp-%s-" % time.time() try: tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir) except (OSError, IOError) as e: self.fail_json( msg="Failed to create remote module tmp path at dir %s " "with prefix %s: %s" % (basedir, basefile, to_native(e)) ) if not self._keep_remote_files: atexit.register(shutil.rmtree, tmpdir) self._tmpdir = tmpdir return self._tmpdir def warn(self, warning): warn(warning) self.log('[WARNING] %s' % warning) def deprecate(self, msg, version=None, date=None, collection_name=None): if version is not None and date is not None: raise AssertionError("implementation error -- version and date must not both be set") deprecate(msg, version=version, date=date, collection_name=collection_name) # For compatibility, we accept that neither version nor date is set, # and treat that the same as if version would haven been set if date is not None: self.log('[DEPRECATION WARNING] %s %s' % (msg, date)) else: self.log('[DEPRECATION WARNING] %s %s' % (msg, version)) def load_file_common_arguments(self, params, path=None): ''' many modules deal with files, this encapsulates common options that the file module accepts such that it is directly available to all modules and they can share code. Allows to overwrite the path/dest module argument by providing path. 
''' if path is None: path = params.get('path', params.get('dest', None)) if path is None: return {} else: path = os.path.expanduser(os.path.expandvars(path)) b_path = to_bytes(path, errors='surrogate_or_strict') # if the path is a symlink, and we're following links, get # the target of the link instead for testing if params.get('follow', False) and os.path.islink(b_path): b_path = os.path.realpath(b_path) path = to_native(b_path) mode = params.get('mode', None) owner = params.get('owner', None) group = params.get('group', None) # selinux related options seuser = params.get('seuser', None) serole = params.get('serole', None) setype = params.get('setype', None) selevel = params.get('selevel', None) secontext = [seuser, serole, setype] if self.selinux_mls_enabled(): secontext.append(selevel) default_secontext = self.selinux_default_context(path) for i in range(len(default_secontext)): if i is not None and secontext[i] == '_default': secontext[i] = default_secontext[i] attributes = params.get('attributes', None) return dict( path=path, mode=mode, owner=owner, group=group, seuser=seuser, serole=serole, setype=setype, selevel=selevel, secontext=secontext, attributes=attributes, ) # Detect whether using selinux that is MLS-aware. # While this means you can set the level/range with # selinux.lsetfilecon(), it may or may not mean that you # will get the selevel as part of the context returned # by selinux.lgetfilecon(). def selinux_mls_enabled(self): if self._selinux_mls_enabled is None: self._selinux_mls_enabled = HAVE_SELINUX and selinux.is_selinux_mls_enabled() == 1 return self._selinux_mls_enabled def selinux_enabled(self): if self._selinux_enabled is None: self._selinux_enabled = HAVE_SELINUX and selinux.is_selinux_enabled() == 1 return self._selinux_enabled # Determine whether we need a placeholder for selevel/mls def selinux_initial_context(self): if self._selinux_initial_context is None: self._selinux_initial_context = [None, None, None] if self.selinux_mls_enabled(): self._selinux_initial_context.append(None) return self._selinux_initial_context # If selinux fails to find a default, return an array of None def selinux_default_context(self, path, mode=0): context = self.selinux_initial_context() if not self.selinux_enabled(): return context try: ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode) except OSError: return context if ret[0] == -1: return context # Limit split to 4 because the selevel, the last in the list, # may contain ':' characters context = ret[1].split(':', 3) return context def selinux_context(self, path): context = self.selinux_initial_context() if not self.selinux_enabled(): return context try: ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict')) except OSError as e: if e.errno == errno.ENOENT: self.fail_json(path=path, msg='path %s does not exist' % path) else: self.fail_json(path=path, msg='failed to retrieve selinux context') if ret[0] == -1: return context # Limit split to 4 because the selevel, the last in the list, # may contain ':' characters context = ret[1].split(':', 3) return context def user_and_group(self, path, expand=True): b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) st = os.lstat(b_path) uid = st.st_uid gid = st.st_gid return (uid, gid) def find_mount_point(self, path): ''' Takes a path and returns it's mount point :param path: a string type with a filesystem path :returns: the path to the mount point as a text type ''' b_path 
= os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict')) while not os.path.ismount(b_path): b_path = os.path.dirname(b_path) return to_text(b_path, errors='surrogate_or_strict') def is_special_selinux_path(self, path): """ Returns a tuple containing (True, selinux_context) if the given path is on a NFS or other 'special' fs mount point, otherwise the return will be (False, None). """ try: f = open('/proc/mounts', 'r') mount_data = f.readlines() f.close() except Exception: return (False, None) path_mount_point = self.find_mount_point(path) for line in mount_data: (device, mount_point, fstype, options, rest) = line.split(' ', 4) if to_bytes(path_mount_point) == to_bytes(mount_point): for fs in self._selinux_special_fs: if fs in fstype: special_context = self.selinux_context(path_mount_point) return (True, special_context) return (False, None) def set_default_selinux_context(self, path, changed): if not self.selinux_enabled(): return changed context = self.selinux_default_context(path) return self.set_context_if_different(path, context, False) def set_context_if_different(self, path, context, changed, diff=None): if not self.selinux_enabled(): return changed if self.check_file_absent_if_check_mode(path): return True cur_context = self.selinux_context(path) new_context = list(cur_context) # Iterate over the current context instead of the # argument context, which may have selevel. (is_special_se, sp_context) = self.is_special_selinux_path(path) if is_special_se: new_context = sp_context else: for i in range(len(cur_context)): if len(context) > i: if context[i] is not None and context[i] != cur_context[i]: new_context[i] = context[i] elif context[i] is None: new_context[i] = cur_context[i] if cur_context != new_context: if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['secontext'] = cur_context if 'after' not in diff: diff['after'] = {} diff['after']['secontext'] = new_context try: if self.check_mode: return True rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context)) except OSError as e: self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e), new_context=new_context, cur_context=cur_context, input_was=context) if rc != 0: self.fail_json(path=path, msg='set selinux context failed') changed = True return changed def set_owner_if_different(self, path, owner, changed, diff=None, expand=True): if owner is None: return changed b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) if self.check_file_absent_if_check_mode(b_path): return True orig_uid, orig_gid = self.user_and_group(b_path, expand) try: uid = int(owner) except ValueError: try: uid = pwd.getpwnam(owner).pw_uid except KeyError: path = to_text(b_path) self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner) if orig_uid != uid: if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['owner'] = orig_uid if 'after' not in diff: diff['after'] = {} diff['after']['owner'] = uid if self.check_mode: return True try: os.lchown(b_path, uid, -1) except (IOError, OSError) as e: path = to_text(b_path) self.fail_json(path=path, msg='chown failed: %s' % (to_text(e))) changed = True return changed def set_group_if_different(self, path, group, changed, diff=None, expand=True): if group is None: return changed b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) if 
self.check_file_absent_if_check_mode(b_path): return True orig_uid, orig_gid = self.user_and_group(b_path, expand) try: gid = int(group) except ValueError: try: gid = grp.getgrnam(group).gr_gid except KeyError: path = to_text(b_path) self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group) if orig_gid != gid: if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['group'] = orig_gid if 'after' not in diff: diff['after'] = {} diff['after']['group'] = gid if self.check_mode: return True try: os.lchown(b_path, -1, gid) except OSError: path = to_text(b_path) self.fail_json(path=path, msg='chgrp failed') changed = True return changed def set_mode_if_different(self, path, mode, changed, diff=None, expand=True): if mode is None: return changed b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) path_stat = os.lstat(b_path) if self.check_file_absent_if_check_mode(b_path): return True if not isinstance(mode, int): try: mode = int(mode, 8) except Exception: try: mode = self._symbolic_mode_to_octal(path_stat, mode) except Exception as e: path = to_text(b_path) self.fail_json(path=path, msg="mode must be in octal or symbolic form", details=to_native(e)) if mode != stat.S_IMODE(mode): # prevent mode from having extra info orbeing invalid long number path = to_text(b_path) self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode) prev_mode = stat.S_IMODE(path_stat.st_mode) if prev_mode != mode: if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['mode'] = '0%03o' % prev_mode if 'after' not in diff: diff['after'] = {} diff['after']['mode'] = '0%03o' % mode if self.check_mode: return True # FIXME: comparison against string above will cause this to be executed # every time try: if hasattr(os, 'lchmod'): os.lchmod(b_path, mode) else: if not os.path.islink(b_path): os.chmod(b_path, mode) else: # Attempt to set the perms of the symlink but be # careful not to change the perms of the underlying # file while trying underlying_stat = os.stat(b_path) os.chmod(b_path, mode) new_underlying_stat = os.stat(b_path) if underlying_stat.st_mode != new_underlying_stat.st_mode: os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode)) except OSError as e: if os.path.islink(b_path) and e.errno in ( errno.EACCES, # can't access symlink in sticky directory (stat) errno.EPERM, # can't set mode on symbolic links (chmod) errno.EROFS, # can't set mode on read-only filesystem ): pass elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links pass else: raise except Exception as e: path = to_text(b_path) self.fail_json(path=path, msg='chmod failed', details=to_native(e), exception=traceback.format_exc()) path_stat = os.lstat(b_path) new_mode = stat.S_IMODE(path_stat.st_mode) if new_mode != prev_mode: changed = True return changed def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True): if attributes is None: return changed b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) if self.check_file_absent_if_check_mode(b_path): return True existing = self.get_file_attributes(b_path, include_version=False) attr_mod = '=' if attributes.startswith(('-', '+')): attr_mod = attributes[0] attributes = attributes[1:] if existing.get('attr_flags', '') != attributes or attr_mod == '-': attrcmd = self.get_bin_path('chattr') if 
attrcmd: attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path] changed = True if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['attributes'] = existing.get('attr_flags') if 'after' not in diff: diff['after'] = {} diff['after']['attributes'] = '%s%s' % (attr_mod, attributes) if not self.check_mode: try: rc, out, err = self.run_command(attrcmd) if rc != 0 or err: raise Exception("Error while setting attributes: %s" % (out + err)) except Exception as e: self.fail_json(path=to_text(b_path), msg='chattr failed', details=to_native(e), exception=traceback.format_exc()) return changed def get_file_attributes(self, path, include_version=True): output = {} attrcmd = self.get_bin_path('lsattr', False) if attrcmd: flags = '-vd' if include_version else '-d' attrcmd = [attrcmd, flags, path] try: rc, out, err = self.run_command(attrcmd) if rc == 0: res = out.split() attr_flags_idx = 0 if include_version: attr_flags_idx = 1 output['version'] = res[0].strip() output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip() output['attributes'] = format_attributes(output['attr_flags']) except Exception: pass return output @classmethod def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode): """ This enables symbolic chmod string parsing as stated in the chmod man-page This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X" """ new_mode = stat.S_IMODE(path_stat.st_mode) # Now parse all symbolic modes for mode in symbolic_mode.split(','): # Per single mode. This always contains a '+', '-' or '=' # Split it on that permlist = MODE_OPERATOR_RE.split(mode) # And find all the operators opers = MODE_OPERATOR_RE.findall(mode) # The user(s) where it's all about is the first element in the # 'permlist' list. Take that and remove it from the list. # An empty user or 'a' means 'all'. users = permlist.pop(0) use_umask = (users == '') if users == 'a' or users == '': users = 'ugo' # Check if there are illegal characters in the user list # They can end up in 'users' because they are not split if USERS_RE.match(users): raise ValueError("bad symbolic permission for mode: %s" % mode) # Now we have two list of equal length, one contains the requested # permissions and one with the corresponding operators. 
for idx, perms in enumerate(permlist): # Check if there are illegal characters in the permissions if PERMS_RE.match(perms): raise ValueError("bad symbolic permission for mode: %s" % mode) for user in users: mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask) new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode) return new_mode @staticmethod def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode): if operator == '=': if user == 'u': mask = stat.S_IRWXU | stat.S_ISUID elif user == 'g': mask = stat.S_IRWXG | stat.S_ISGID elif user == 'o': mask = stat.S_IRWXO | stat.S_ISVTX # mask out u, g, or o permissions from current_mode and apply new permissions inverse_mask = mask ^ PERM_BITS new_mode = (current_mode & inverse_mask) | mode_to_apply elif operator == '+': new_mode = current_mode | mode_to_apply elif operator == '-': new_mode = current_mode - (current_mode & mode_to_apply) return new_mode @staticmethod def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask): prev_mode = stat.S_IMODE(path_stat.st_mode) is_directory = stat.S_ISDIR(path_stat.st_mode) has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0 apply_X_permission = is_directory or has_x_permissions # Get the umask, if the 'user' part is empty, the effect is as if (a) were # given, but bits that are set in the umask are not affected. # We also need the "reversed umask" for masking umask = os.umask(0) os.umask(umask) rev_umask = umask ^ PERM_BITS # Permission bits constants documented at: # http://docs.python.org/2/library/stat.html#stat.S_ISUID if apply_X_permission: X_perms = { 'u': {'X': stat.S_IXUSR}, 'g': {'X': stat.S_IXGRP}, 'o': {'X': stat.S_IXOTH}, } else: X_perms = { 'u': {'X': 0}, 'g': {'X': 0}, 'o': {'X': 0}, } user_perms_to_modes = { 'u': { 'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR, 'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR, 'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR, 's': stat.S_ISUID, 't': 0, 'u': prev_mode & stat.S_IRWXU, 'g': (prev_mode & stat.S_IRWXG) << 3, 'o': (prev_mode & stat.S_IRWXO) << 6}, 'g': { 'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP, 'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP, 'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP, 's': stat.S_ISGID, 't': 0, 'u': (prev_mode & stat.S_IRWXU) >> 3, 'g': prev_mode & stat.S_IRWXG, 'o': (prev_mode & stat.S_IRWXO) << 3}, 'o': { 'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH, 'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH, 'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH, 's': 0, 't': stat.S_ISVTX, 'u': (prev_mode & stat.S_IRWXU) >> 6, 'g': (prev_mode & stat.S_IRWXG) >> 3, 'o': prev_mode & stat.S_IRWXO}, } # Insert X_perms into user_perms_to_modes for key, value in X_perms.items(): user_perms_to_modes[key].update(value) def or_reduce(mode, perm): return mode | user_perms_to_modes[user][perm] return reduce(or_reduce, perms, 0) def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True): # set modes owners and context as needed changed = self.set_context_if_different( file_args['path'], file_args['secontext'], changed, diff ) changed = self.set_owner_if_different( file_args['path'], file_args['owner'], changed, diff, expand ) changed = self.set_group_if_different( file_args['path'], file_args['group'], changed, diff, expand ) changed = self.set_mode_if_different( file_args['path'], file_args['mode'], 
changed, diff, expand ) changed = self.set_attributes_if_different( file_args['path'], file_args['attributes'], changed, diff, expand ) return changed def check_file_absent_if_check_mode(self, file_path): return self.check_mode and not os.path.exists(file_path) def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True): return self.set_fs_attributes_if_different(file_args, changed, diff, expand) def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True): return self.set_fs_attributes_if_different(file_args, changed, diff, expand) def add_path_info(self, kwargs): ''' for results that are files, supplement the info about the file in the return path with stats about the file path. ''' path = kwargs.get('path', kwargs.get('dest', None)) if path is None: return kwargs b_path = to_bytes(path, errors='surrogate_or_strict') if os.path.exists(b_path): (uid, gid) = self.user_and_group(path) kwargs['uid'] = uid kwargs['gid'] = gid try: user = pwd.getpwuid(uid)[0] except KeyError: user = str(uid) try: group = grp.getgrgid(gid)[0] except KeyError: group = str(gid) kwargs['owner'] = user kwargs['group'] = group st = os.lstat(b_path) kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE]) # secontext not yet supported if os.path.islink(b_path): kwargs['state'] = 'link' elif os.path.isdir(b_path): kwargs['state'] = 'directory' elif os.stat(b_path).st_nlink > 1: kwargs['state'] = 'hard' else: kwargs['state'] = 'file' if self.selinux_enabled(): kwargs['secontext'] = ':'.join(self.selinux_context(path)) kwargs['size'] = st[stat.ST_SIZE] return kwargs def _check_locale(self): ''' Uses the locale module to test the currently set locale (per the LANG and LC_CTYPE environment settings) ''' try: # setting the locale to '' uses the default locale # as it would be returned by locale.getdefaultlocale() locale.setlocale(locale.LC_ALL, '') except locale.Error: # fallback to the 'C' locale, which may cause unicode # issues but is preferable to simply failing because # of an unknown locale locale.setlocale(locale.LC_ALL, 'C') os.environ['LANG'] = 'C' os.environ['LC_ALL'] = 'C' os.environ['LC_MESSAGES'] = 'C' except Exception as e: self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" % to_native(e), exception=traceback.format_exc()) def _set_internal_properties(self, argument_spec=None, module_parameters=None): if argument_spec is None: argument_spec = self.argument_spec if module_parameters is None: module_parameters = self.params for k in PASS_VARS: # handle setting internal properties from internal ansible vars param_key = '_ansible_%s' % k if param_key in module_parameters: if k in PASS_BOOLS: setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key])) else: setattr(self, PASS_VARS[k][0], module_parameters[param_key]) # clean up internal top level params: if param_key in self.params: del self.params[param_key] else: # use defaults if not already set if not hasattr(self, PASS_VARS[k][0]): setattr(self, PASS_VARS[k][0], PASS_VARS[k][1]) def safe_eval(self, value, locals=None, include_exceptions=False): return safe_eval(value, locals, include_exceptions) def _load_params(self): ''' read the input and set the params attribute. This method is for backwards compatibility. The guts of the function were moved out in 2.1 so that custom modules could read the parameters. 
''' # debug overrides to read args from file or cmdline self.params = _load_params() def _log_to_syslog(self, msg): if HAS_SYSLOG: try: module = 'ansible-%s' % self._name facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER) syslog.openlog(str(module), 0, facility) syslog.syslog(syslog.LOG_INFO, msg) except TypeError as e: self.fail_json( msg='Failed to log to syslog (%s). To proceed anyway, ' 'disable syslog logging by setting no_target_syslog ' 'to True in your Ansible config.' % to_native(e), exception=traceback.format_exc(), msg_to_log=msg, ) def debug(self, msg): if self._debug: self.log('[debug] %s' % msg) def log(self, msg, log_args=None): if not self.no_log: if log_args is None: log_args = dict() module = 'ansible-%s' % self._name if isinstance(module, binary_type): module = module.decode('utf-8', 'replace') # 6655 - allow for accented characters if not isinstance(msg, (binary_type, text_type)): raise TypeError("msg should be a string (got %s)" % type(msg)) # We want journal to always take text type # syslog takes bytes on py2, text type on py3 if isinstance(msg, binary_type): journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values) else: # TODO: surrogateescape is a danger here on Py3 journal_msg = remove_values(msg, self.no_log_values) if PY3: syslog_msg = journal_msg else: syslog_msg = journal_msg.encode('utf-8', 'replace') if has_journal: journal_args = [("MODULE", os.path.basename(__file__))] for arg in log_args: journal_args.append((arg.upper(), str(log_args[arg]))) try: if HAS_SYSLOG: # If syslog_facility specified, it needs to convert # from the facility name to the facility code, and # set it as SYSLOG_FACILITY argument of journal.send() facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER) >> 3 journal.send(MESSAGE=u"%s %s" % (module, journal_msg), SYSLOG_FACILITY=facility, **dict(journal_args)) else: journal.send(MESSAGE=u"%s %s" % (module, journal_msg), **dict(journal_args)) except IOError: # fall back to syslog since logging to journal failed self._log_to_syslog(syslog_msg) else: self._log_to_syslog(syslog_msg) def _log_invocation(self): ''' log that ansible ran the module ''' # TODO: generalize a separate log function and make log_invocation use it # Sanitize possible password argument when logging. log_args = dict() for param in self.params: canon = self.aliases.get(param, param) arg_opts = self.argument_spec.get(canon, {}) no_log = arg_opts.get('no_log', None) # try to proactively capture password/passphrase fields if no_log is None and PASSWORD_MATCH.search(param): log_args[param] = 'NOT_LOGGING_PASSWORD' self.warn('Module did not set no_log for %s' % param) elif self.boolean(no_log): log_args[param] = 'NOT_LOGGING_PARAMETER' else: param_val = self.params[param] if not isinstance(param_val, (text_type, binary_type)): param_val = str(param_val) elif isinstance(param_val, text_type): param_val = param_val.encode('utf-8') log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values) msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()] if msg: msg = 'Invoked with %s' % ' '.join(msg) else: msg = 'Invoked' self.log(msg, log_args=log_args) def _set_cwd(self): try: cwd = os.getcwd() if not os.access(cwd, os.F_OK | os.R_OK): raise Exception() return cwd except Exception: # we don't have access to the cwd, probably because of sudo. 
# Try and move to a neutral location to prevent errors for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]: try: if os.access(cwd, os.F_OK | os.R_OK): os.chdir(cwd) return cwd except Exception: pass # we won't error here, as it may *not* be a problem, # and we don't want to break modules unnecessarily return None def get_bin_path(self, arg, required=False, opt_dirs=None): ''' Find system executable in PATH. :param arg: The executable to find. :param required: if executable is not found and required is ``True``, fail_json :param opt_dirs: optional list of directories to search in addition to ``PATH`` :returns: if found return full path; otherwise return None ''' bin_path = None try: bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs) except ValueError as e: if required: self.fail_json(msg=to_text(e)) else: return bin_path return bin_path def boolean(self, arg): '''Convert the argument to a boolean''' if arg is None: return arg try: return boolean(arg) except TypeError as e: self.fail_json(msg=to_native(e)) def jsonify(self, data): try: return jsonify(data) except UnicodeError as e: self.fail_json(msg=to_text(e)) def from_json(self, data): return json.loads(data) def add_cleanup_file(self, path): if path not in self.cleanup_files: self.cleanup_files.append(path) def do_cleanup_files(self): for path in self.cleanup_files: self.cleanup(path) def _return_formatted(self, kwargs): self.add_path_info(kwargs) if 'invocation' not in kwargs: kwargs['invocation'] = {'module_args': self.params} if 'warnings' in kwargs: if isinstance(kwargs['warnings'], list): for w in kwargs['warnings']: self.warn(w) else: self.warn(kwargs['warnings']) warnings = get_warning_messages() if warnings: kwargs['warnings'] = warnings if 'deprecations' in kwargs: if isinstance(kwargs['deprecations'], list): for d in kwargs['deprecations']: if isinstance(d, SEQUENCETYPE) and len(d) == 2: self.deprecate(d[0], version=d[1]) elif isinstance(d, Mapping): self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'), collection_name=d.get('collection_name')) else: self.deprecate(d) # pylint: disable=ansible-deprecated-no-version else: self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version deprecations = get_deprecation_messages() if deprecations: kwargs['deprecations'] = deprecations kwargs = remove_values(kwargs, self.no_log_values) print('\n%s' % self.jsonify(kwargs)) def exit_json(self, **kwargs): ''' return from the module, without error ''' self.do_cleanup_files() self._return_formatted(kwargs) sys.exit(0) def fail_json(self, msg, **kwargs): ''' return from the module, with an error message ''' kwargs['failed'] = True kwargs['msg'] = msg # Add traceback if debug or high verbosity and it is missing # NOTE: Badly named as exception, it really always has been a traceback if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3): if PY2: # On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\ ''.join(traceback.format_tb(sys.exc_info()[2])) else: kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2])) self.do_cleanup_files() self._return_formatted(kwargs) sys.exit(1) def fail_on_missing_params(self, required_params=None): if not required_params: return try: check_missing_parameters(self.params, required_params) except TypeError as e: self.fail_json(msg=to_native(e)) def 
digest_from_file(self, filename, algorithm): ''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. ''' b_filename = to_bytes(filename, errors='surrogate_or_strict') if not os.path.exists(b_filename): return None if os.path.isdir(b_filename): self.fail_json(msg="attempted to take checksum of directory: %s" % filename) # preserve old behaviour where the third parameter was a hash algorithm object if hasattr(algorithm, 'hexdigest'): digest_method = algorithm else: try: digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]() except KeyError: self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" % (filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS))) blocksize = 64 * 1024 infile = open(os.path.realpath(b_filename), 'rb') block = infile.read(blocksize) while block: digest_method.update(block) block = infile.read(blocksize) infile.close() return digest_method.hexdigest() def md5(self, filename): ''' Return MD5 hex digest of local file using digest_from_file(). Do not use this function unless you have no other choice for: 1) Optional backwards compatibility 2) Compatibility with a third party protocol This function will not work on systems complying with FIPS-140-2. Most uses of this function can use the module.sha1 function instead. ''' if 'md5' not in AVAILABLE_HASH_ALGORITHMS: raise ValueError('MD5 not available. Possibly running in FIPS mode') return self.digest_from_file(filename, 'md5') def sha1(self, filename): ''' Return SHA1 hex digest of local file using digest_from_file(). ''' return self.digest_from_file(filename, 'sha1') def sha256(self, filename): ''' Return SHA-256 hex digest of local file using digest_from_file(). ''' return self.digest_from_file(filename, 'sha256') def backup_local(self, fn): '''make a date-marked backup of the specified file, return True or False on success or failure''' backupdest = '' if os.path.exists(fn): # backups named basename.PID.YYYY-MM-DD@HH:MM:SS~ ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time())) backupdest = '%s.%s.%s' % (fn, os.getpid(), ext) try: self.preserved_copy(fn, backupdest) except (shutil.Error, IOError) as e: self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e))) return backupdest def cleanup(self, tmpfile): if os.path.exists(tmpfile): try: os.unlink(tmpfile) except OSError as e: sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e))) def preserved_copy(self, src, dest): """Copy a file with preserved ownership, permissions and context""" # shutil.copy2(src, dst) # Similar to shutil.copy(), but metadata is copied as well - in fact, # this is just shutil.copy() followed by copystat(). This is similar # to the Unix command cp -p. # # shutil.copystat(src, dst) # Copy the permission bits, last access time, last modification time, # and flags from src to dst. The file contents, owner, and group are # unaffected. src and dst are path names given as strings. 
shutil.copy2(src, dest) # Set the context if self.selinux_enabled(): context = self.selinux_context(src) self.set_context_if_different(dest, context, False) # chown it try: dest_stat = os.stat(src) tmp_stat = os.stat(dest) if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid): os.chown(dest, dest_stat.st_uid, dest_stat.st_gid) except OSError as e: if e.errno != errno.EPERM: raise # Set the attributes current_attribs = self.get_file_attributes(src, include_version=False) current_attribs = current_attribs.get('attr_flags', '') self.set_attributes_if_different(dest, current_attribs, True) def atomic_move(self, src, dest, unsafe_writes=False): '''atomically move src to dest, copying attributes from dest, returns true on success it uses os.rename to ensure this as it is an atomic operation, rest of the function is to work around limitations, corner cases and ensure selinux context is saved if possible''' context = None dest_stat = None b_src = to_bytes(src, errors='surrogate_or_strict') b_dest = to_bytes(dest, errors='surrogate_or_strict') if os.path.exists(b_dest): try: dest_stat = os.stat(b_dest) # copy mode and ownership os.chmod(b_src, dest_stat.st_mode & PERM_BITS) os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid) # try to copy flags if possible if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'): try: os.chflags(b_src, dest_stat.st_flags) except OSError as e: for err in 'EOPNOTSUPP', 'ENOTSUP': if hasattr(errno, err) and e.errno == getattr(errno, err): break else: raise except OSError as e: if e.errno != errno.EPERM: raise if self.selinux_enabled(): context = self.selinux_context(dest) else: if self.selinux_enabled(): context = self.selinux_default_context(dest) creating = not os.path.exists(b_dest) try: # Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic. os.rename(b_src, b_dest) except (IOError, OSError) as e: if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]: # only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied) # and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc()) else: # Use bytes here. In the shippable CI, this fails with # a UnicodeError with surrogateescape'd strings for an unknown # reason (doesn't happen in a local Ubuntu16.04 VM) b_dest_dir = os.path.dirname(b_dest) b_suffix = os.path.basename(b_dest) error_msg = None tmp_dest_name = None try: tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp', dir=b_dest_dir, suffix=b_suffix) except (OSError, IOError) as e: error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e)) except TypeError: # We expect that this is happening because python3.4.x and # below can't handle byte strings in mkstemp(). # Traceback would end in something like: # file = _os.path.join(dir, pre + name + suf) # TypeError: can't concat bytes to str error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. 
' 'Please use Python2.x or Python3.5 or greater.') finally: if error_msg: if unsafe_writes: self._unsafe_writes(b_src, b_dest) else: self.fail_json(msg=error_msg, exception=traceback.format_exc()) if tmp_dest_name: b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict') try: try: # close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host) os.close(tmp_dest_fd) # leaves tmp file behind when sudo and not root try: shutil.move(b_src, b_tmp_dest_name) except OSError: # cleanup will happen by 'rm' of tmpdir # copy2 will preserve some metadata shutil.copy2(b_src, b_tmp_dest_name) if self.selinux_enabled(): self.set_context_if_different( b_tmp_dest_name, context, False) try: tmp_stat = os.stat(b_tmp_dest_name) if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid): os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid) except OSError as e: if e.errno != errno.EPERM: raise try: os.rename(b_tmp_dest_name, b_dest) except (shutil.Error, OSError, IOError) as e: if unsafe_writes and e.errno == errno.EBUSY: self._unsafe_writes(b_tmp_dest_name, b_dest) else: self.fail_json(msg='Unable to make %s into to %s, failed final rename from %s: %s' % (src, dest, b_tmp_dest_name, to_native(e)), exception=traceback.format_exc()) except (shutil.Error, OSError, IOError) as e: if unsafe_writes: self._unsafe_writes(b_src, b_dest) else: self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc()) finally: self.cleanup(b_tmp_dest_name) if creating: # make sure the file has the correct permissions # based on the current value of umask umask = os.umask(0) os.umask(umask) os.chmod(b_dest, DEFAULT_PERM & ~umask) try: os.chown(b_dest, os.geteuid(), os.getegid()) except OSError: # We're okay with trying our best here. If the user is not # root (or old Unices) they won't be able to chown. 
pass if self.selinux_enabled(): # rename might not preserve context self.set_context_if_different(dest, context, False) def _unsafe_writes(self, src, dest): # sadly there are some situations where we cannot ensure atomicity, but only if # the user insists and we get the appropriate error we update the file unsafely try: out_dest = in_src = None try: out_dest = open(dest, 'wb') in_src = open(src, 'rb') shutil.copyfileobj(in_src, out_dest) finally: # assuring closed files in 2.4 compatible way if out_dest: out_dest.close() if in_src: in_src.close() except (shutil.Error, OSError, IOError) as e: self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)), exception=traceback.format_exc()) def _clean_args(self, args): if not self._clean: # create a printable version of the command for use in reporting later, # which strips out things like passwords from the args list to_clean_args = args if PY2: if isinstance(args, text_type): to_clean_args = to_bytes(args) else: if isinstance(args, binary_type): to_clean_args = to_text(args) if isinstance(args, (text_type, binary_type)): to_clean_args = shlex.split(to_clean_args) clean_args = [] is_passwd = False for arg in (to_native(a) for a in to_clean_args): if is_passwd: is_passwd = False clean_args.append('********') continue if PASSWD_ARG_RE.match(arg): sep_idx = arg.find('=') if sep_idx > -1: clean_args.append('%s=********' % arg[:sep_idx]) continue else: is_passwd = True arg = heuristic_log_sanitize(arg, self.no_log_values) clean_args.append(arg) self._clean = ' '.join(shlex_quote(arg) for arg in clean_args) return self._clean def _restore_signal_handlers(self): # Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses. if PY2 and sys.platform != 'win32': signal.signal(signal.SIGPIPE, signal.SIG_DFL) def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict', expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True): ''' Execute a command, returns rc, stdout, and stderr. :arg args: is the command to run * If args is a list, the command will be run with shell=False. * If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False * If args is a string and use_unsafe_shell=True it runs with shell=True. :kw check_rc: Whether to call fail_json in case of non zero RC. Default False :kw close_fds: See documentation for subprocess.Popen(). Default True :kw executable: See documentation for subprocess.Popen(). Default None :kw data: If given, information to write to the stdin of the command :kw binary_data: If False, append a newline to the data. Default False :kw path_prefix: If given, additional path to find the command in. This adds to the PATH environment variable so helper commands in the same directory can also be found :kw cwd: If given, working directory to run the command inside :kw use_unsafe_shell: See `args` parameter. Default False :kw prompt_regex: Regex string (not a compiled regex) which can be used to detect prompts in the stdout which would otherwise cause the execution to hang (especially if no input data is specified) :kw environ_update: dictionary to *update* os.environ with :kw umask: Umask to be used when running the command. 
Default None :kw encoding: Since we return native strings, on python3 we need to know the encoding to use to transform from bytes to text. If you want to always get bytes back, use encoding=None. The default is "utf-8". This does not affect transformation of strings given as args. :kw errors: Since we return native strings, on python3 we need to transform stdout and stderr from bytes to text. If the bytes are undecodable in the ``encoding`` specified, then use this error handler to deal with them. The default is ``surrogate_or_strict`` which means that the bytes will be decoded using the surrogateescape error handler if available (available on all python3 versions we support) otherwise a UnicodeError traceback will be raised. This does not affect transformations of strings given as args. :kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument dictates whether ``~`` is expanded in paths and environment variables are expanded before running the command. When ``True`` a string such as ``$SHELL`` will be expanded regardless of escaping. When ``False`` and ``use_unsafe_shell=False`` no path or variable expansion will be done. :kw pass_fds: When running on Python 3 this argument dictates which file descriptors should be passed to an underlying ``Popen`` constructor. On Python 2, this will set ``close_fds`` to False. :kw before_communicate_callback: This function will be called after ``Popen`` object will be created but before communicating to the process. (``Popen`` object will be passed to callback as a first argument) :kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd`` (non-existent or not a directory) should be ignored or should raise an exception. :returns: A 3-tuple of return code (integer), stdout (native string), and stderr (native string). On python2, stdout and stderr are both byte strings. On python3, stdout and stderr are text strings converted according to the encoding and errors parameters. If you want byte strings on python3, use encoding=None to turn decoding to text off. ''' # used by clean args later on self._clean = None if not isinstance(args, (list, binary_type, text_type)): msg = "Argument 'args' to run_command must be list or string" self.fail_json(rc=257, cmd=args, msg=msg) shell = False if use_unsafe_shell: # stringify args for unsafe/direct shell usage if isinstance(args, list): args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args]) else: args = to_bytes(args, errors='surrogate_or_strict') # not set explicitly, check if set by controller if executable: executable = to_bytes(executable, errors='surrogate_or_strict') args = [executable, b'-c', args] elif self._shell not in (None, '/bin/sh'): args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args] else: shell = True else: # ensure args are a list if isinstance(args, (binary_type, text_type)): # On python2.6 and below, shlex has problems with text type # On python3, shlex needs a text type. 
if PY2: args = to_bytes(args, errors='surrogate_or_strict') elif PY3: args = to_text(args, errors='surrogateescape') args = shlex.split(args) # expand ``~`` in paths, and all environment vars if expand_user_and_vars: args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None] else: args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None] prompt_re = None if prompt_regex: if isinstance(prompt_regex, text_type): if PY3: prompt_regex = to_bytes(prompt_regex, errors='surrogateescape') elif PY2: prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict') try: prompt_re = re.compile(prompt_regex, re.MULTILINE) except re.error: self.fail_json(msg="invalid prompt regular expression given to run_command") rc = 0 msg = None st_in = None # Manipulate the environ we'll send to the new process old_env_vals = {} # We can set this from both an attribute and per call for key, val in self.run_command_environ_update.items(): old_env_vals[key] = os.environ.get(key, None) os.environ[key] = val if environ_update: for key, val in environ_update.items(): old_env_vals[key] = os.environ.get(key, None) os.environ[key] = val if path_prefix: path = os.environ.get('PATH', '') old_env_vals['PATH'] = path if path: os.environ['PATH'] = "%s:%s" % (path_prefix, path) else: os.environ['PATH'] = path_prefix # If using test-module.py and explode, the remote lib path will resemble: # /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py # If using ansible or ansible-playbook with a remote system: # /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py # Clean out python paths set by ansiballz if 'PYTHONPATH' in os.environ: pypaths = os.environ['PYTHONPATH'].split(':') pypaths = [x for x in pypaths if not x.endswith('/ansible_modlib.zip') and not x.endswith('/debug_dir')] os.environ['PYTHONPATH'] = ':'.join(pypaths) if not os.environ['PYTHONPATH']: del os.environ['PYTHONPATH'] if data: st_in = subprocess.PIPE kwargs = dict( executable=executable, shell=shell, close_fds=close_fds, stdin=st_in, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=self._restore_signal_handlers, ) if PY3 and pass_fds: kwargs["pass_fds"] = pass_fds elif PY2 and pass_fds: kwargs['close_fds'] = False # store the pwd prev_dir = os.getcwd() # make sure we're in the right working directory if cwd: if os.path.isdir(cwd): cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict') kwargs['cwd'] = cwd try: os.chdir(cwd) except (OSError, IOError) as e: self.fail_json(rc=e.errno, msg="Could not chdir to %s, %s" % (cwd, to_native(e)), exception=traceback.format_exc()) elif not ignore_invalid_cwd: self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd) old_umask = None if umask: old_umask = os.umask(umask) try: if self._debug: self.log('Executing: ' + self._clean_args(args)) cmd = subprocess.Popen(args, **kwargs) if before_communicate_callback: before_communicate_callback(cmd) # the communication logic here is essentially taken from that # of the _communicate() function in ssh.py stdout = b'' stderr = b'' try: selector = selectors.DefaultSelector() except (IOError, OSError): # Failed to detect default selector for the given platform # Select PollSelector which is supported by major platforms selector = selectors.PollSelector() selector.register(cmd.stdout, selectors.EVENT_READ) selector.register(cmd.stderr, selectors.EVENT_READ) if os.name == 'posix': fcntl.fcntl(cmd.stdout.fileno(), 
fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK) fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK) if data: if not binary_data: data += '\n' if isinstance(data, text_type): data = to_bytes(data) cmd.stdin.write(data) cmd.stdin.close() while True: events = selector.select(1) for key, event in events: b_chunk = key.fileobj.read() if b_chunk == b(''): selector.unregister(key.fileobj) if key.fileobj == cmd.stdout: stdout += b_chunk elif key.fileobj == cmd.stderr: stderr += b_chunk # if we're checking for prompts, do it now if prompt_re: if prompt_re.search(stdout) and not data: if encoding: stdout = to_native(stdout, encoding=encoding, errors=errors) return (257, stdout, "A prompt was encountered while running a command, but no input data was specified") # only break out if no pipes are left to read or # the pipes are completely read and # the process is terminated if (not events or not selector.get_map()) and cmd.poll() is not None: break # No pipes are left to read but process is not yet terminated # Only then it is safe to wait for the process to be finished # NOTE: Actually cmd.poll() is always None here if no selectors are left elif not selector.get_map() and cmd.poll() is None: cmd.wait() # The process is terminated. Since no pipes to read from are # left, there is no need to call select() again. break cmd.stdout.close() cmd.stderr.close() selector.close() rc = cmd.returncode except (OSError, IOError) as e: self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e))) self.fail_json(rc=e.errno, stdout=b'', stderr=b'', msg=to_native(e), cmd=self._clean_args(args)) except Exception as e: self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc()))) self.fail_json(rc=257, stdout=b'', stderr=b'', msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args)) # Restore env settings for key, val in old_env_vals.items(): if val is None: del os.environ[key] else: os.environ[key] = val if old_umask: os.umask(old_umask) if rc != 0 and check_rc: msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values) self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg) # reset the pwd os.chdir(prev_dir) if encoding is not None: return (rc, to_native(stdout, encoding=encoding, errors=errors), to_native(stderr, encoding=encoding, errors=errors)) return (rc, stdout, stderr) def append_to_file(self, filename, str): filename = os.path.expandvars(os.path.expanduser(filename)) fh = open(filename, 'a') fh.write(str) fh.close() def bytes_to_human(self, size): return bytes_to_human(size) # for backwards compatibility pretty_bytes = bytes_to_human def human_to_bytes(self, number, isbits=False): return human_to_bytes(number, isbits) # # Backwards compat # # In 2.0, moved from inside the module to the toplevel is_executable = is_executable @staticmethod def get_buffer_size(fd): try: # 1032 == FZ_GETPIPE_SZ buffer_size = fcntl.fcntl(fd, 1032) except Exception: try: # not as exact as above, but should be good enough for most platforms that fail the previous call buffer_size = select.PIPE_BUF except Exception: buffer_size = 9000 # use sane default JIC return buffer_size def get_module_path(): return os.path.dirname(os.path.realpath(__file__))
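To make the `run_command` contract documented above concrete, here is a minimal, hypothetical module body (the command and the returned field names are invented for illustration): a list argument runs with `shell=False`, and the result is the documented `(rc, stdout, stderr)` 3-tuple of native strings.

```python
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec=dict(), supports_check_mode=True)
    # passing a list (not a string) runs the command with shell=False,
    # exactly as the run_command docstring above describes
    rc, out, err = module.run_command(['/bin/uname', '-r'])
    if rc != 0:
        # fail_json reports the sanitized command line and captured stderr
        module.fail_json(msg='uname failed', rc=rc, stderr=err)
    # stdout comes back as a native string because encoding defaults to utf-8
    module.exit_json(changed=False, kernel=out.strip())


if __name__ == '__main__':
    main()
```

Passing `check_rc=True` instead would let `run_command` call `fail_json` itself on a non-zero return code, which is the usual shortcut when the sanitized stderr alone is an acceptable failure report.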
closed
ansible/ansible
https://github.com/ansible/ansible
61185
htpasswd module does not properly handle creating files in check mode
##### SUMMARY When in check mode, the check_file_attrs function is still called. This calls `module.set_fs_attributes_if_different`, which eventually raises an exception in `module.set_mode_if_different` if the file doesn't exist. This situation occurs when running in check mode and the destination file does not yet exist, even if `create` is set to `yes`. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME htpasswd ##### ANSIBLE VERSION ```paste below ansible 2.8.0 config file = /Users/matthieu/dev/ansible/ansible.cfg configured module search path = ['/Users/matthieu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/matthieu/.pyenv/versions/3.6.3/envs/ansible/lib/python3.6/site-packages/ansible executable location = /Users/matthieu/.pyenv/versions/ansible/bin/ansible python version = 3.6.3 (default, Oct 9 2017, 18:08:57) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)] ``` ##### CONFIGURATION ```paste below ANSIBLE_PIPELINING(/Users/matthieu/dev/ansible/ansible.cfg) = True DEFAULT_LOG_PATH(/Users/matthieu/dev/ansible/ansible.cfg) = /Users/matthieu/dev/ansible/ansible.log DEFAULT_REMOTE_USER(/Users/matthieu/dev/ansible/ansible.cfg) = root ``` ##### OS / ENVIRONMENT From Mac to Linux 16.04 ##### STEPS TO REPRODUCE Run a playbook in check mode that contains this task: ```yaml - name: Set up basic auth credentials file htpasswd: create: yes crypt_scheme: apr_md5_crypt name: "testuser" password: "testpass" path: "/tmp/thisfiledoesnotexist.htpasswd" state: present ``` Please note that a very similar bug was already fixed in v2.4: https://github.com/ansible/ansible/issues/32676 It may be worth writing a test case for this scenario... ##### EXPECTED RESULTS The check run should complete successfully with a note that the file will be created. ##### ACTUAL RESULTS I get the following error message: `[Errno 2] No such file or directory: '/tmp/thisfiledoesnotexist.htpasswd'` This is what I get when I run ansible-playbook with -vvv: ``` The full traceback is: WARNING: The below traceback may *not* be related to the actual failure. File "/tmp/ansible_htpasswd_payload_b44M9M/__main__.py", line 268, in main check_file_attrs(module, changed, msg) File "/tmp/ansible_htpasswd_payload_b44M9M/__main__.py", line 192, in check_file_attrs if module.set_fs_attributes_if_different(file_args, False): File "/tmp/ansible_htpasswd_payload_b44M9M/ansible_htpasswd_payload.zip/ansible/module_utils/basic.py", line 1339, in set_fs_attributes_if_different file_args['path'], file_args['mode'], changed, diff, expand File "/tmp/ansible_htpasswd_payload_b44M9M/ansible_htpasswd_payload.zip/ansible/module_utils/basic.py", line 1063, in set_mode_if_different path_stat = os.lstat(b_path) ```
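The traceback above shows the failure entering `set_mode_if_different` via `os.lstat` on a path that does not exist yet. A minimal standalone sketch of the kind of guard this implies (the helper name is hypothetical, and this is not necessarily the exact patch that landed):

```python
import os


def mode_change_in_check_mode(check_mode, path):
    # hypothetical guard: in check mode, a missing destination simply means
    # the file would be created, so report a change instead of calling
    # os.lstat on the path and failing with ENOENT
    if check_mode and not os.path.exists(path):
        return True
    return None  # fall through to the normal lstat/chmod comparison


print(mode_change_in_check_mode(True, '/tmp/thisfiledoesnotexist.htpasswd'))  # True
```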
https://github.com/ansible/ansible/issues/61185
https://github.com/ansible/ansible/pull/64279
b043afa025063fb452c8e01736c919cd2e7ef410
7099657dd7279ef2989d601251f46e7407a86fa6
2019-08-22T16:40:49Z
python
2021-04-28T08:17:03Z
test/units/module_utils/basic/test_set_mode_if_different.py
# -*- coding: utf-8 -*- # (c) 2016, Toshio Kuratomi <[email protected]> # Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import errno import os from itertools import product try: import builtins except ImportError: import __builtin__ as builtins import pytest SYNONYMS_0660 = ( 0o660, '0o660', '660', 'u+rw-x,g+rw-x,o-rwx', 'u=rw,g=rw,o-rwx', ) @pytest.fixture def mock_stats(mocker): mock_stat1 = mocker.MagicMock() mock_stat1.st_mode = 0o444 mock_stat2 = mocker.MagicMock() mock_stat2.st_mode = 0o660 yield {"before": mock_stat1, "after": mock_stat2} @pytest.fixture def am_check_mode(am): am.check_mode = True yield am am.check_mode = False @pytest.fixture def mock_lchmod(mocker): m_lchmod = mocker.patch('ansible.module_utils.basic.os.lchmod', return_value=None, create=True) yield m_lchmod @pytest.mark.parametrize('previous_changes, check_mode, exists, stdin', product((True, False), (True, False), (True, False), ({},)), indirect=['stdin']) def test_no_mode_given_returns_previous_changes(am, mock_stats, mock_lchmod, mocker, previous_changes, check_mode, exists): am.check_mode = check_mode mocker.patch('os.lstat', side_effect=[mock_stats['before']]) m_lchmod = mocker.patch('os.lchmod', return_value=None, create=True) m_path_exists = mocker.patch('os.path.exists', return_value=exists) assert am.set_mode_if_different('/path/to/file', None, previous_changes) == previous_changes assert not m_lchmod.called assert not m_path_exists.called @pytest.mark.parametrize('mode, check_mode, stdin', product(SYNONYMS_0660, (True, False), ({},)), indirect=['stdin']) def test_mode_changed_to_0660(am, mock_stats, mocker, mode, check_mode): # Note: This is for checking that all the different ways of specifying # 0660 mode work. It cannot be used to check that setting a mode that is # not equivalent to 0660 works. am.check_mode = check_mode mocker.patch('os.lstat', side_effect=[mock_stats['before'], mock_stats['after'], mock_stats['after']]) m_lchmod = mocker.patch('os.lchmod', return_value=None, create=True) mocker.patch('os.path.exists', return_value=True) assert am.set_mode_if_different('/path/to/file', mode, False) if check_mode: assert not m_lchmod.called else: m_lchmod.assert_called_with(b'/path/to/file', 0o660) @pytest.mark.parametrize('mode, check_mode, stdin', product(SYNONYMS_0660, (True, False), ({},)), indirect=['stdin']) def test_mode_unchanged_when_already_0660(am, mock_stats, mocker, mode, check_mode): # Note: This is for checking that all the different ways of specifying # 0660 mode work. It cannot be used to check that setting a mode that is # not equivalent to 0660 works. 
am.check_mode = check_mode mocker.patch('os.lstat', side_effect=[mock_stats['after'], mock_stats['after'], mock_stats['after']]) m_lchmod = mocker.patch('os.lchmod', return_value=None, create=True) mocker.patch('os.path.exists', return_value=True) assert not am.set_mode_if_different('/path/to/file', mode, False) assert not m_lchmod.called @pytest.mark.parametrize('check_mode, stdin', product((True, False), ({},)), indirect=['stdin']) def test_missing_lchmod_is_not_link(am, mock_stats, mocker, monkeypatch, check_mode): """Some platforms have lchmod (*BSD) others do not (Linux)""" am.check_mode = check_mode original_hasattr = hasattr monkeypatch.delattr(os, 'lchmod', raising=False) mocker.patch('os.lstat', side_effect=[mock_stats['before'], mock_stats['after']]) mocker.patch('os.path.islink', return_value=False) mocker.patch('os.path.exists', return_value=True) m_chmod = mocker.patch('os.chmod', return_value=None) assert am.set_mode_if_different('/path/to/file/no_lchmod', 0o660, False) if check_mode: assert not m_chmod.called else: m_chmod.assert_called_with(b'/path/to/file/no_lchmod', 0o660) @pytest.mark.parametrize('check_mode, stdin', product((True, False), ({},)), indirect=['stdin']) def test_missing_lchmod_is_link(am, mock_stats, mocker, monkeypatch, check_mode): """Some platforms have lchmod (*BSD) others do not (Linux)""" am.check_mode = check_mode original_hasattr = hasattr monkeypatch.delattr(os, 'lchmod', raising=False) mocker.patch('os.lstat', side_effect=[mock_stats['before'], mock_stats['after']]) mocker.patch('os.path.islink', return_value=True) mocker.patch('os.path.exists', return_value=True) m_chmod = mocker.patch('os.chmod', return_value=None) mocker.patch('os.stat', return_value=mock_stats['after']) assert am.set_mode_if_different('/path/to/file/no_lchmod', 0o660, False) if check_mode: assert not m_chmod.called else: m_chmod.assert_called_with(b'/path/to/file/no_lchmod', 0o660) mocker.resetall() mocker.stopall() @pytest.mark.parametrize('stdin,', ({},), indirect=['stdin']) def test_missing_lchmod_is_link_in_sticky_dir(am, mock_stats, mocker): """Some platforms have lchmod (*BSD) others do not (Linux)""" am.check_mode = False original_hasattr = hasattr def _hasattr(obj, name): if obj == os and name == 'lchmod': return False return original_hasattr(obj, name) mock_lstat = mocker.MagicMock() mock_lstat.st_mode = 0o777 mocker.patch('os.lstat', side_effect=[mock_lstat, mock_lstat]) mocker.patch.object(builtins, 'hasattr', side_effect=_hasattr) mocker.patch('os.path.islink', return_value=True) mocker.patch('os.path.exists', return_value=True) m_stat = mocker.patch('os.stat', side_effect=OSError(errno.EACCES, 'Permission denied')) m_chmod = mocker.patch('os.chmod', return_value=None) # not changed: can't set mode on symbolic links assert not am.set_mode_if_different('/path/to/file/no_lchmod', 0o660, False) m_stat.assert_called_with(b'/path/to/file/no_lchmod') m_chmod.assert_not_called() mocker.resetall() mocker.stopall()
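The issue body suggested writing a test case for the check-mode-with-missing-file scenario. A sketch of such a case, reusing this suite's `am` and `stdin` fixtures and its mocking style; the asserted `True` return value is an assumption about the fixed behaviour, not a quote of the merged test:

```python
@pytest.mark.parametrize('stdin', ({},), indirect=['stdin'])
def test_mode_change_reported_for_missing_path_in_check_mode(am, mocker):
    am.check_mode = True
    # the destination does not exist yet, as in the htpasswd report
    mocker.patch('os.path.exists', return_value=False)
    m_lstat = mocker.patch('os.lstat', side_effect=OSError(errno.ENOENT, 'No such file or directory'))
    # assumption: the fix short-circuits before stat'ing and reports a change
    assert am.set_mode_if_different('/path/to/file/not_created_yet', 0o660, False)
    assert not m_lstat.called
    am.check_mode = False
```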
closed
ansible/ansible
https://github.com/ansible/ansible
67067
Add example of Setup Module with delegated_facts
##### SUMMARY Please update the documentation for the Setup module to include a note to check the Delegated Facts (https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#delegated-facts) documentation. I spent a long time trying to figure out how to set just the required facts for a group of remote servers, and it was simple once I found the right document. Or better yet, include the example from the Delegated Facts documentation in the Examples section of the Setup module. ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME Setup module ##### ANSIBLE VERSION N/A ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### ADDITIONAL INFORMATION Directly related information would be super helpful and save a lot of Google search time.
https://github.com/ansible/ansible/issues/67067
https://github.com/ansible/ansible/pull/74479
f194108a261ba015673916c07a76da094aaff3c1
7b03ebf939259710b44092cc780e5f02374dcab9
2020-02-04T00:02:53Z
python
2021-04-28T12:53:36Z
lib/ansible/modules/setup.py
#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = ''' --- module: setup version_added: historical short_description: Gathers facts about remote hosts options: gather_subset: version_added: "2.1" description: - "If supplied, restrict the additional facts collected to the given subset. Possible values: C(all), C(min), C(hardware), C(network), C(virtual), C(ohai), and C(facter). Can specify a list of values to specify a larger subset. Values can also be used with an initial C(!) to specify that that specific subset should not be collected. For instance: C(!hardware,!network,!virtual,!ohai,!facter). If C(!all) is specified then only the min subset is collected. To avoid collecting even the min subset, specify C(!all,!min). To collect only specific facts, use C(!all,!min), and specify the particular fact subsets. Use the filter parameter if you do not want to display some collected facts." type: list elements: str default: "all" gather_timeout: version_added: "2.2" description: - Set the default timeout in seconds for individual fact gathering. type: int default: 10 filter: version_added: "1.1" description: - If supplied, only return facts that match one of the shell-style (fnmatch) pattern. An empty list basically means 'no filter'. As of Ansible 2.11, the type has changed from string to list and the default has became an empty list. A simple string is still accepted and works as a single pattern. The behaviour prior to Ansible 2.11 remains. type: list elements: str default: [] fact_path: version_added: "1.3" description: - Path used for local ansible facts (C(*.fact)) - files in this dir will be run (if executable) and their results be added to C(ansible_local) facts. If a file is not executable it is read instead. File/results format can be JSON or INI-format. The default C(fact_path) can be specified in C(ansible.cfg) for when setup is automatically called as part of C(gather_facts). NOTE - For windows clients, the results will be added to a variable named after the local file (without extension suffix), rather than C(ansible_local). - Since Ansible 2.1, Windows hosts can use C(fact_path). Make sure that this path exists on the target host. Files in this path MUST be PowerShell scripts C(.ps1) which outputs an object. This object will be formatted by Ansible as json so the script should be outputting a raw hashtable, array, or other primitive object. type: path default: /etc/ansible/facts.d description: - This module is automatically called by playbooks to gather useful variables about remote hosts that can be used in playbooks. It can also be executed directly by C(/usr/bin/ansible) to check what variables are available to a host. Ansible provides many I(facts) about the system, automatically. - This module is also supported for Windows targets. notes: - More ansible facts will be added with successive releases. If I(facter) or I(ohai) are installed, variables from these programs will also be snapshotted into the JSON file for usage in templating. These variables are prefixed with C(facter_) and C(ohai_) so it's easy to tell their source. All variables are bubbled up to the caller. Using the ansible facts and choosing to not install I(facter) and I(ohai) means you can avoid Ruby-dependencies on your remote systems. 
(See also M(community.general.facter) and M(community.general.ohai).) - The filter option filters only the first level subkey below ansible_facts. - If the target host is Windows, you will not currently have the ability to use C(filter) as this is provided by a simpler implementation of the module. - This module is also supported for Windows targets. - This module should be run with elevated privileges on BSD systems to gather facts like ansible_product_version. - Supports C(check_mode). author: - "Ansible Core Team" - "Michael DeHaan" ''' EXAMPLES = """ # Display facts from all hosts and store them indexed by I(hostname) at C(/tmp/facts). # ansible all -m ansible.builtin.setup --tree /tmp/facts # Display only facts regarding memory found by ansible on all hosts and output them. # ansible all -m ansible.builtin.setup -a 'filter=ansible_*_mb' # Display only facts returned by facter. # ansible all -m ansible.builtin.setup -a 'filter=facter_*' # Collect only facts returned by facter. # ansible all -m ansible.builtin.setup -a 'gather_subset=!all,!any,facter' - name: Collect only facts returned by facter ansible.builtin.setup: gather_subset: - '!all' - '!any' - facter - name: Collect only selected facts ansible.builtin.setup: filter: - 'ansible_distribution' - 'ansible_machine_id' - 'ansible_*_mb' # Display only facts about certain interfaces. # ansible all -m ansible.builtin.setup -a 'filter=ansible_eth[0-2]' # Restrict additional gathered facts to network and virtual (includes default minimum facts) # ansible all -m ansible.builtin.setup -a 'gather_subset=network,virtual' # Collect only network and virtual (excludes default minimum facts) # ansible all -m ansible.builtin.setup -a 'gather_subset=!all,!any,network,virtual' # Do not call puppet facter or ohai even if present. # ansible all -m ansible.builtin.setup -a 'gather_subset=!facter,!ohai' # Only collect the default minimum amount of facts: # ansible all -m ansible.builtin.setup -a 'gather_subset=!all' # Collect no facts, even the default minimum subset of facts: # ansible all -m ansible.builtin.setup -a 'gather_subset=!all,!min' # Display facts from Windows hosts with custom facts stored in C(C:\\custom_facts). 
# ansible windows -m ansible.builtin.setup -a "fact_path='c:\\custom_facts'" """ # import module snippets from ..module_utils.basic import AnsibleModule from ansible.module_utils._text import to_text from ansible.module_utils.facts import ansible_collector, default_collectors from ansible.module_utils.facts.collector import CollectorNotFoundError, CycleFoundInFactDeps, UnresolvedFactDep from ansible.module_utils.facts.namespace import PrefixFactNamespace def main(): module = AnsibleModule( argument_spec=dict( gather_subset=dict(default=["all"], required=False, type='list', elements='str'), gather_timeout=dict(default=10, required=False, type='int'), filter=dict(default=[], required=False, type='list', elements='str'), fact_path=dict(default='/etc/ansible/facts.d', required=False, type='path'), ), supports_check_mode=True, ) gather_subset = module.params['gather_subset'] gather_timeout = module.params['gather_timeout'] filter_spec = module.params['filter'] # TODO: this mimics existing behavior where gather_subset=["!all"] actually means # to collect nothing except for the below list # TODO: decide what '!all' means, I lean towards making it mean none, but likely needs # some tweaking on how gather_subset operations are performed minimal_gather_subset = frozenset(['apparmor', 'caps', 'cmdline', 'date_time', 'distribution', 'dns', 'env', 'fips', 'local', 'lsb', 'pkg_mgr', 'platform', 'python', 'selinux', 'service_mgr', 'ssh_pub_keys', 'user']) all_collector_classes = default_collectors.collectors # rename namespace_name to root_key? namespace = PrefixFactNamespace(namespace_name='ansible', prefix='ansible_') try: fact_collector = ansible_collector.get_ansible_collector(all_collector_classes=all_collector_classes, namespace=namespace, filter_spec=filter_spec, gather_subset=gather_subset, gather_timeout=gather_timeout, minimal_gather_subset=minimal_gather_subset) except (TypeError, CollectorNotFoundError, CycleFoundInFactDeps, UnresolvedFactDep) as e: # bad subset given, collector, idk, deps declared but not found module.fail_json(msg=to_text(e)) facts_dict = fact_collector.collect(module=module) module.exit_json(ansible_facts=facts_dict) if __name__ == '__main__': main()
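The `filter` option documented above is described as shell-style (fnmatch) matching over first-level fact keys. A standalone sketch of those semantics (the fact values here are made up):

```python
import fnmatch


def apply_filter(facts, patterns):
    # an empty pattern list means "no filter", per the option documentation
    if not patterns:
        return facts
    return {key: value for key, value in facts.items()
            if any(fnmatch.fnmatch(key, pattern) for pattern in patterns)}


facts = {'ansible_memtotal_mb': 512, 'ansible_os_family': 'Debian'}
print(apply_filter(facts, ['ansible_*_mb']))  # {'ansible_memtotal_mb': 512}
```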
closed
ansible/ansible
https://github.com/ansible/ansible
74081
The reboot module fails when using paramiko
### Summary When using paramiko and when doing a reboot then ansible will fail with "Failed to open session: SSH session not active". The target host is rebooted but ansible exits with error message. Using ansible-base 2.10.4 the reboot works. Using 2.10.5rc1 or newer the reboot does not work. Bug was caused by this PR https://github.com/ansible/ansible/pull/72688 Commenting out the lines if not self._connected: return from lib/ansible/plugins/connection/paramiko_ssh.py fixes the reboot. ### Issue Type Bug Report ### Component Name paramiko ### Ansible Version ```console (paste below) $ ansible --version ansible [core 2.11.0b4.post0] config file = /Users/kimmo/projects/ansible_bug/ansible.cfg configured module search path = ['/Users/kimmo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/kimmo/Library/Caches/pypoetry/virtualenvs/ansible-bug-RRBfm9wT-py3.9/lib/python3.9/site-packages/ansible ansible collection location = /Users/kimmo/.ansible/collections:/usr/share/ansible/collections executable location = /Users/kimmo/Library/Caches/pypoetry/virtualenvs/ansible-bug-RRBfm9wT-py3.9/bin/ansible python version = 3.9.2 (default, Mar 25 2021, 14:36:23) [Clang 12.0.0 (clang-1200.0.32.29)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console (paste below) $ ansible-config dump --only-changed DEFAULT_HOST_LIST(/Users/kimmo/projects/ansible_bug/ansible.cfg) = ['/Users/kimmo/projects/ansible_bug/hosts.ini'] DEFAULT_REMOTE_USER(/Users/kimmo/projects/ansible_bug/ansible.cfg) = root DEFAULT_TRANSPORT(/Users/kimmo/projects/ansible_bug/ansible.cfg) = paramiko DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /Users/kimmo/ansible-vault.pass DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = False ``` ### OS / Environment Target: debian 10 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) # Playbook: - name: Reboot hosts: all tasks: - name: Reboot reboot: ``` ### Expected Results I expect target host to reboot and ansible to return success. ### Actual Results Target host is rebooted but ansible reports failure. ```console (paste below) PLAY [Reboot] ************************************************************************************************************************************************************************************************** TASK [Reboot] ************************************************************************************************************************************************************************************************** fatal: [test.example.com]: FAILED! => {"msg": "Failed to open session: SSH session not active"} NO MORE HOSTS LEFT ********************************************************************************************************************************************************************************************* PLAY RECAP ***************************************************************************************************************************************************************************************************** test.example.com : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
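The report points at the early return in `reset()`, and the changelog fragment name for the fix (`74081-paramiko-mark-connected.yml`) suggests the resolution was to mark the connection as connected rather than to delete that guard. A standalone toy illustration of the failure mode (all names here are hypothetical, not the plugin's real code):

```python
class Conn(object):
    # toy stand-in for the paramiko connection plugin's state handling
    def __init__(self):
        self._connected = False

    def _connect(self):
        # ... establish the ssh session here ...
        self._connected = True  # the piece the plugin was missing

    def close(self):
        self._connected = False

    def reset(self):
        if not self._connected:
            return  # with _connected stuck at False, reboot never reconnects
        self.close()
        self._connect()


conn = Conn()
conn._connect()
conn.reset()  # really closes and re-establishes instead of returning early
```

With the flag left at False, the `reset()` issued by the reboot module returns immediately and the plugin keeps using the dead session, which matches the "SSH session not active" failure reported above.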
https://github.com/ansible/ansible/issues/74081
https://github.com/ansible/ansible/pull/74459
98495ae99db581e8707177169f54c0d5204ba6d2
74b2add460f25b83ae728b06e50d06321a2a9b79
2021-03-31T03:46:41Z
python
2021-04-29T19:11:02Z
changelogs/fragments/74081-paramiko-mark-connected.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74081
The reboot module fails when using paramiko
### Summary When using paramiko and when doing a reboot then ansible will fail with "Failed to open session: SSH session not active". The target host is rebooted but ansible exits with error message. Using ansible-base 2.10.4 the reboot works. Using 2.10.5rc1 or newer the reboot does not work. Bug was caused by this PR https://github.com/ansible/ansible/pull/72688 Commenting out the lines if not self._connected: return from lib/ansible/plugins/connection/paramiko_ssh.py fixes the reboot. ### Issue Type Bug Report ### Component Name paramiko ### Ansible Version ```console (paste below) $ ansible --version ansible [core 2.11.0b4.post0] config file = /Users/kimmo/projects/ansible_bug/ansible.cfg configured module search path = ['/Users/kimmo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/kimmo/Library/Caches/pypoetry/virtualenvs/ansible-bug-RRBfm9wT-py3.9/lib/python3.9/site-packages/ansible ansible collection location = /Users/kimmo/.ansible/collections:/usr/share/ansible/collections executable location = /Users/kimmo/Library/Caches/pypoetry/virtualenvs/ansible-bug-RRBfm9wT-py3.9/bin/ansible python version = 3.9.2 (default, Mar 25 2021, 14:36:23) [Clang 12.0.0 (clang-1200.0.32.29)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console (paste below) $ ansible-config dump --only-changed DEFAULT_HOST_LIST(/Users/kimmo/projects/ansible_bug/ansible.cfg) = ['/Users/kimmo/projects/ansible_bug/hosts.ini'] DEFAULT_REMOTE_USER(/Users/kimmo/projects/ansible_bug/ansible.cfg) = root DEFAULT_TRANSPORT(/Users/kimmo/projects/ansible_bug/ansible.cfg) = paramiko DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /Users/kimmo/ansible-vault.pass DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = False ``` ### OS / Environment Target: debian 10 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) # Playbook: - name: Reboot hosts: all tasks: - name: Reboot reboot: ``` ### Expected Results I expect target host to reboot and ansible to return success. ### Actual Results Target host is rebooted but ansible reports failure. ```console (paste below) PLAY [Reboot] ************************************************************************************************************************************************************************************************** TASK [Reboot] ************************************************************************************************************************************************************************************************** fatal: [test.example.com]: FAILED! => {"msg": "Failed to open session: SSH session not active"} NO MORE HOSTS LEFT ********************************************************************************************************************************************************************************************* PLAY RECAP ***************************************************************************************************************************************************************************************************** test.example.com : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/74081
https://github.com/ansible/ansible/pull/74459
98495ae99db581e8707177169f54c0d5204ba6d2
74b2add460f25b83ae728b06e50d06321a2a9b79
2021-03-31T03:46:41Z
python
2021-04-29T19:11:02Z
lib/ansible/plugins/connection/paramiko_ssh.py
# (c) 2012, Michael DeHaan <[email protected]> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ author: Ansible Core Team name: paramiko short_description: Run tasks via python ssh (paramiko) description: - Use the python ssh implementation (Paramiko) to connect to targets - The paramiko transport is provided because many distributions, in particular EL6 and before do not support ControlPersist in their SSH implementations. - This is needed on the Ansible control machine to be reasonably efficient with connections. Thus paramiko is faster for most users on these platforms. Users with ControlPersist capability can consider using -c ssh or configuring the transport in the configuration file. - This plugin also borrows a lot of settings from the ssh plugin as they both cover the same protocol. version_added: "0.1" options: remote_addr: description: - Address of the remote target default: inventory_hostname vars: - name: ansible_host - name: ansible_ssh_host - name: ansible_paramiko_host remote_user: description: - User to login/authenticate as - Can be set from the CLI via the C(--user) or C(-u) options. vars: - name: ansible_user - name: ansible_ssh_user - name: ansible_paramiko_user env: - name: ANSIBLE_REMOTE_USER - name: ANSIBLE_PARAMIKO_REMOTE_USER version_added: '2.5' ini: - section: defaults key: remote_user - section: paramiko_connection key: remote_user version_added: '2.5' password: description: - Secret used to either login the ssh server or as a passphrase for ssh keys that require it - Can be set from the CLI via the C(--ask-pass) option. vars: - name: ansible_password - name: ansible_ssh_pass - name: ansible_ssh_password - name: ansible_paramiko_pass - name: ansible_paramiko_password version_added: '2.5' host_key_auto_add: description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}] ini: - {key: host_key_auto_add, section: paramiko_connection} type: boolean look_for_keys: default: True description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}] ini: - {key: look_for_keys, section: paramiko_connection} type: boolean proxy_command: default: '' description: - Proxy information for running the connection via a jumphost - Also this plugin will scan 'ssh_args', 'ssh_extra_args' and 'ssh_common_args' from the 'ssh' plugin settings for proxy information if set. 
env: [{name: ANSIBLE_PARAMIKO_PROXY_COMMAND}] ini: - {key: proxy_command, section: paramiko_connection} pty: default: True description: 'TODO: write it' env: - name: ANSIBLE_PARAMIKO_PTY ini: - section: paramiko_connection key: pty type: boolean record_host_keys: default: True description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_RECORD_HOST_KEYS}] ini: - section: paramiko_connection key: record_host_keys type: boolean host_key_checking: description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host' type: boolean default: True env: - name: ANSIBLE_HOST_KEY_CHECKING - name: ANSIBLE_SSH_HOST_KEY_CHECKING version_added: '2.5' - name: ANSIBLE_PARAMIKO_HOST_KEY_CHECKING version_added: '2.5' ini: - section: defaults key: host_key_checking - section: paramiko_connection key: host_key_checking version_added: '2.5' vars: - name: ansible_host_key_checking version_added: '2.5' - name: ansible_ssh_host_key_checking version_added: '2.5' - name: ansible_paramiko_host_key_checking version_added: '2.5' use_persistent_connections: description: 'Toggles the use of persistence for connections' type: boolean default: False env: - name: ANSIBLE_USE_PERSISTENT_CONNECTIONS ini: - section: defaults key: use_persistent_connections # TODO: #timeout=self._play_context.timeout, """ import os import socket import tempfile import traceback import fcntl import sys import re from termios import tcflush, TCIFLUSH from distutils.version import LooseVersion from binascii import hexlify from ansible.errors import ( AnsibleAuthenticationFailure, AnsibleConnectionFailure, AnsibleError, AnsibleFileNotFound, ) from ansible.module_utils.compat.paramiko import PARAMIKO_IMPORT_ERR, paramiko from ansible.module_utils.six import iteritems from ansible.module_utils.six.moves import input from ansible.plugins.connection import ConnectionBase from ansible.utils.display import Display from ansible.utils.path import makedirs_safe from ansible.module_utils._text import to_bytes, to_native, to_text display = Display() AUTHENTICITY_MSG = """ paramiko: The authenticity of host '%s' can't be established. The %s key fingerprint is %s. Are you sure you want to continue connecting (yes/no)? """ # SSH Options Regex SETTINGS_REGEX = re.compile(r'(\w+)(?:\s*=\s*|\s+)(.+)') class MyAddPolicy(object): """ Based on AutoAddPolicy in paramiko so we can determine when keys are added and also prompt for input. Policy for automatically adding the hostname and new host key to the local L{HostKeys} object, and saving it. This is used by L{SSHClient}. 
""" def __init__(self, new_stdin, connection): self._new_stdin = new_stdin self.connection = connection self._options = connection._options def missing_host_key(self, client, hostname, key): if all((self._options['host_key_checking'], not self._options['host_key_auto_add'])): fingerprint = hexlify(key.get_fingerprint()) ktype = key.get_name() if self.connection.get_option('use_persistent_connections') or self.connection.force_persistence: # don't print the prompt string since the user cannot respond # to the question anyway raise AnsibleError(AUTHENTICITY_MSG[1:92] % (hostname, ktype, fingerprint)) self.connection.connection_lock() old_stdin = sys.stdin sys.stdin = self._new_stdin # clear out any premature input on sys.stdin tcflush(sys.stdin, TCIFLUSH) inp = input(AUTHENTICITY_MSG % (hostname, ktype, fingerprint)) sys.stdin = old_stdin self.connection.connection_unlock() if inp not in ['yes', 'y', '']: raise AnsibleError("host connection rejected by user") key._added_by_ansible_this_time = True # existing implementation below: client._host_keys.add(hostname, key.get_name(), key) # host keys are actually saved in close() function below # in order to control ordering. # keep connection objects on a per host basis to avoid repeated attempts to reconnect SSH_CONNECTION_CACHE = {} SFTP_CONNECTION_CACHE = {} class Connection(ConnectionBase): ''' SSH based connections with Paramiko ''' transport = 'paramiko' _log_channel = None def _cache_key(self): return "%s__%s__" % (self._play_context.remote_addr, self._play_context.remote_user) def _connect(self): cache_key = self._cache_key() if cache_key in SSH_CONNECTION_CACHE: self.ssh = SSH_CONNECTION_CACHE[cache_key] else: self.ssh = SSH_CONNECTION_CACHE[cache_key] = self._connect_uncached() return self def _set_log_channel(self, name): '''Mimic paramiko.SSHClient.set_log_channel''' self._log_channel = name def _parse_proxy_command(self, port=22): proxy_command = None # Parse ansible_ssh_common_args, specifically looking for ProxyCommand ssh_args = [ getattr(self._play_context, 'ssh_extra_args', '') or '', getattr(self._play_context, 'ssh_common_args', '') or '', getattr(self._play_context, 'ssh_args', '') or '', ] args = self._split_ssh_args(' '.join(ssh_args)) for i, arg in enumerate(args): if arg.lower() == 'proxycommand': # _split_ssh_args split ProxyCommand from the command itself proxy_command = args[i + 1] else: # ProxyCommand and the command itself are a single string match = SETTINGS_REGEX.match(arg) if match: if match.group(1).lower() == 'proxycommand': proxy_command = match.group(2) if proxy_command: break proxy_command = proxy_command or self.get_option('proxy_command') sock_kwarg = {} if proxy_command: replacers = { '%h': self._play_context.remote_addr, '%p': port, '%r': self._play_context.remote_user } for find, replace in replacers.items(): proxy_command = proxy_command.replace(find, str(replace)) try: sock_kwarg = {'sock': paramiko.ProxyCommand(proxy_command)} display.vvv("CONFIGURE PROXY COMMAND FOR CONNECTION: %s" % proxy_command, host=self._play_context.remote_addr) except AttributeError: display.warning('Paramiko ProxyCommand support unavailable. ' 'Please upgrade to Paramiko 1.9.0 or newer. 
' 'Not using configured ProxyCommand') return sock_kwarg def _connect_uncached(self): ''' activates the connection object ''' if paramiko is None: raise AnsibleError("paramiko is not installed: %s" % to_native(PARAMIKO_IMPORT_ERR)) port = self._play_context.port or 22 display.vvv("ESTABLISH PARAMIKO SSH CONNECTION FOR USER: %s on PORT %s TO %s" % (self._play_context.remote_user, port, self._play_context.remote_addr), host=self._play_context.remote_addr) ssh = paramiko.SSHClient() # override paramiko's default logger name if self._log_channel is not None: ssh.set_log_channel(self._log_channel) self.keyfile = os.path.expanduser("~/.ssh/known_hosts") if self.get_option('host_key_checking'): for ssh_known_hosts in ("/etc/ssh/ssh_known_hosts", "/etc/openssh/ssh_known_hosts"): try: # TODO: check if we need to look at several possible locations, possible for loop ssh.load_system_host_keys(ssh_known_hosts) break except IOError: pass # file was not found, but not required to function ssh.load_system_host_keys() ssh_connect_kwargs = self._parse_proxy_command(port) ssh.set_missing_host_key_policy(MyAddPolicy(self._new_stdin, self)) conn_password = self.get_option('password') or self._play_context.password allow_agent = True if conn_password is not None: allow_agent = False try: key_filename = None if self._play_context.private_key_file: key_filename = os.path.expanduser(self._play_context.private_key_file) # paramiko 2.2 introduced auth_timeout parameter if LooseVersion(paramiko.__version__) >= LooseVersion('2.2.0'): ssh_connect_kwargs['auth_timeout'] = self._play_context.timeout ssh.connect( self._play_context.remote_addr.lower(), username=self._play_context.remote_user, allow_agent=allow_agent, look_for_keys=self.get_option('look_for_keys'), key_filename=key_filename, password=conn_password, timeout=self._play_context.timeout, port=port, **ssh_connect_kwargs ) except paramiko.ssh_exception.BadHostKeyException as e: raise AnsibleConnectionFailure('host key mismatch for %s' % e.hostname) except paramiko.ssh_exception.AuthenticationException as e: msg = 'Failed to authenticate: {0}'.format(to_text(e)) raise AnsibleAuthenticationFailure(msg) except Exception as e: msg = to_text(e) if u"PID check failed" in msg: raise AnsibleError("paramiko version issue, please upgrade paramiko on the machine running ansible") elif u"Private key file is encrypted" in msg: msg = 'ssh %s@%s:%s : %s\nTo connect as a different user, use -u <username>.' % ( self._play_context.remote_user, self._play_context.remote_addr, port, msg) raise AnsibleConnectionFailure(msg) else: raise AnsibleConnectionFailure(msg) return ssh def exec_command(self, cmd, in_data=None, sudoable=True): ''' run a command on the remote host ''' super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable) if in_data: raise AnsibleError("Internal Error: this module does not support optimized module pipelining") bufsize = 4096 try: self.ssh.get_transport().set_keepalive(5) chan = self.ssh.get_transport().open_session() except Exception as e: text_e = to_text(e) msg = u"Failed to open session" if text_e: msg += u": %s" % text_e raise AnsibleConnectionFailure(to_native(msg)) # sudo usually requires a PTY (cf. 
requiretty option), therefore # we give it one by default (pty=True in ansible.cfg), and we try # to initialise from the calling environment when sudoable is enabled if self.get_option('pty') and sudoable: chan.get_pty(term=os.getenv('TERM', 'vt100'), width=int(os.getenv('COLUMNS', 0)), height=int(os.getenv('LINES', 0))) display.vvv("EXEC %s" % cmd, host=self._play_context.remote_addr) cmd = to_bytes(cmd, errors='surrogate_or_strict') no_prompt_out = b'' no_prompt_err = b'' become_output = b'' try: chan.exec_command(cmd) if self.become and self.become.expect_prompt(): passprompt = False become_sucess = False while not (become_sucess or passprompt): display.debug('Waiting for Privilege Escalation input') chunk = chan.recv(bufsize) display.debug("chunk is: %s" % chunk) if not chunk: if b'unknown user' in become_output: n_become_user = to_native(self.become.get_option('become_user', playcontext=self._play_context)) raise AnsibleError('user %s does not exist' % n_become_user) else: break # raise AnsibleError('ssh connection closed waiting for password prompt') become_output += chunk # need to check every line because we might get lectured # and we might get the middle of a line in a chunk for l in become_output.splitlines(True): if self.become.check_success(l): become_sucess = True break elif self.become.check_password_prompt(l): passprompt = True break if passprompt: if self.become: become_pass = self.become.get_option('become_pass', playcontext=self._play_context) chan.sendall(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n') else: raise AnsibleError("A password is required but none was supplied") else: no_prompt_out += become_output no_prompt_err += become_output except socket.timeout: raise AnsibleError('ssh timed out waiting for privilege escalation.\n' + become_output) stdout = b''.join(chan.makefile('rb', bufsize)) stderr = b''.join(chan.makefile_stderr('rb', bufsize)) return (chan.recv_exit_status(), no_prompt_out + stdout, no_prompt_out + stderr) def put_file(self, in_path, out_path): ''' transfer a file from local to remote ''' super(Connection, self).put_file(in_path, out_path) display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._play_context.remote_addr) if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')): raise AnsibleFileNotFound("file or module does not exist: %s" % in_path) try: self.sftp = self.ssh.open_sftp() except Exception as e: raise AnsibleError("failed to open a SFTP connection (%s)" % e) try: self.sftp.put(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict')) except IOError: raise AnsibleError("failed to transfer file to %s" % out_path) def _connect_sftp(self): cache_key = "%s__%s__" % (self._play_context.remote_addr, self._play_context.remote_user) if cache_key in SFTP_CONNECTION_CACHE: return SFTP_CONNECTION_CACHE[cache_key] else: result = SFTP_CONNECTION_CACHE[cache_key] = self._connect().ssh.open_sftp() return result def fetch_file(self, in_path, out_path): ''' save a remote file to the specified path ''' super(Connection, self).fetch_file(in_path, out_path) display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self._play_context.remote_addr) try: self.sftp = self._connect_sftp() except Exception as e: raise AnsibleError("failed to open a SFTP connection (%s)" % to_native(e)) try: self.sftp.get(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict')) except IOError: raise AnsibleError("failed to transfer file from %s" % in_path) def 
_any_keys_added(self): for hostname, keys in iteritems(self.ssh._host_keys): for keytype, key in iteritems(keys): added_this_time = getattr(key, '_added_by_ansible_this_time', False) if added_this_time: return True return False def _save_ssh_host_keys(self, filename): ''' not using the paramiko save_ssh_host_keys function as we want to add new SSH keys at the bottom so folks don't complain about it :) ''' if not self._any_keys_added(): return False path = os.path.expanduser("~/.ssh") makedirs_safe(path) with open(filename, 'w') as f: for hostname, keys in iteritems(self.ssh._host_keys): for keytype, key in iteritems(keys): # was f.write added_this_time = getattr(key, '_added_by_ansible_this_time', False) if not added_this_time: f.write("%s %s %s\n" % (hostname, keytype, key.get_base64())) for hostname, keys in iteritems(self.ssh._host_keys): for keytype, key in iteritems(keys): added_this_time = getattr(key, '_added_by_ansible_this_time', False) if added_this_time: f.write("%s %s %s\n" % (hostname, keytype, key.get_base64())) def reset(self): if not self._connected: return self.close() self._connect() def close(self): ''' terminate the connection ''' cache_key = self._cache_key() SSH_CONNECTION_CACHE.pop(cache_key, None) SFTP_CONNECTION_CACHE.pop(cache_key, None) if hasattr(self, 'sftp'): if self.sftp is not None: self.sftp.close() if self.get_option('host_key_checking') and self.get_option('record_host_keys') and self._any_keys_added(): # add any new SSH host keys -- warning -- this could be slow # (This doesn't acquire the connection lock because it needs # to exclude only other known_hosts writers, not connections # that are starting up.) lockfile = self.keyfile.replace("known_hosts", ".known_hosts.lock") dirname = os.path.dirname(self.keyfile) makedirs_safe(dirname) KEY_LOCK = open(lockfile, 'w') fcntl.lockf(KEY_LOCK, fcntl.LOCK_EX) try: # just in case any were added recently self.ssh.load_system_host_keys() self.ssh._host_keys.update(self.ssh._system_host_keys) # gather information about the current key file, so # we can ensure the new file has the correct mode/owner key_dir = os.path.dirname(self.keyfile) if os.path.exists(self.keyfile): key_stat = os.stat(self.keyfile) mode = key_stat.st_mode uid = key_stat.st_uid gid = key_stat.st_gid else: mode = 33188 uid = os.getuid() gid = os.getgid() # Save the new keys to a temporary file and move it into place # rather than rewriting the file. We set delete=False because # the file will be moved into place rather than cleaned up. tmp_keyfile = tempfile.NamedTemporaryFile(dir=key_dir, delete=False) os.chmod(tmp_keyfile.name, mode & 0o7777) os.chown(tmp_keyfile.name, uid, gid) self._save_ssh_host_keys(tmp_keyfile.name) tmp_keyfile.close() os.rename(tmp_keyfile.name, self.keyfile) except Exception: # unable to save keys, including scenario when key was invalid # and caught earlier traceback.print_exc() fcntl.lockf(KEY_LOCK, fcntl.LOCK_UN) self.ssh.close() self._connected = False
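As a small aside on `_parse_proxy_command` above: the `%h`/`%p`/`%r` tokens are substituted with plain `str.replace` before the command is handed to `paramiko.ProxyCommand`. A standalone sketch of that substitution (the host, port, and user values are invented):

```python
proxy_command = 'ssh -W %h:%p jumpuser@bastion'
replacers = {
    '%h': 'target.example.com',  # remote_addr
    '%p': 22,                    # port
    '%r': 'root',                # remote_user
}
for find, replace in replacers.items():
    proxy_command = proxy_command.replace(find, str(replace))
print(proxy_command)  # ssh -W target.example.com:22 jumpuser@bastion
```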
closed
ansible/ansible
https://github.com/ansible/ansible
74,081
The reboot module fails when using paramiko
### Summary When using paramiko and performing a reboot, ansible fails with "Failed to open session: SSH session not active". The target host is rebooted, but ansible exits with an error message. With ansible-base 2.10.4 the reboot works; with 2.10.5rc1 or newer it does not. The bug was caused by this PR: https://github.com/ansible/ansible/pull/72688 Commenting out the lines `if not self._connected: return` from `lib/ansible/plugins/connection/paramiko_ssh.py` fixes the reboot. ### Issue Type Bug Report ### Component Name paramiko ### Ansible Version ```console (paste below) $ ansible --version ansible [core 2.11.0b4.post0] config file = /Users/kimmo/projects/ansible_bug/ansible.cfg configured module search path = ['/Users/kimmo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/kimmo/Library/Caches/pypoetry/virtualenvs/ansible-bug-RRBfm9wT-py3.9/lib/python3.9/site-packages/ansible ansible collection location = /Users/kimmo/.ansible/collections:/usr/share/ansible/collections executable location = /Users/kimmo/Library/Caches/pypoetry/virtualenvs/ansible-bug-RRBfm9wT-py3.9/bin/ansible python version = 3.9.2 (default, Mar 25 2021, 14:36:23) [Clang 12.0.0 (clang-1200.0.32.29)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console (paste below) $ ansible-config dump --only-changed DEFAULT_HOST_LIST(/Users/kimmo/projects/ansible_bug/ansible.cfg) = ['/Users/kimmo/projects/ansible_bug/hosts.ini'] DEFAULT_REMOTE_USER(/Users/kimmo/projects/ansible_bug/ansible.cfg) = root DEFAULT_TRANSPORT(/Users/kimmo/projects/ansible_bug/ansible.cfg) = paramiko DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /Users/kimmo/ansible-vault.pass DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = False ``` ### OS / Environment Target: debian 10 ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) # Playbook: - name: Reboot hosts: all tasks: - name: Reboot reboot: ``` ### Expected Results I expect the target host to reboot and ansible to return success. ### Actual Results The target host is rebooted, but ansible reports failure. ```console (paste below) PLAY [Reboot] ************************************************************************************************************************************************************************************************** TASK [Reboot] ************************************************************************************************************************************************************************************************** fatal: [test.example.com]: FAILED! => {"msg": "Failed to open session: SSH session not active"} NO MORE HOSTS LEFT ********************************************************************************************************************************************************************************************* PLAY RECAP ***************************************************************************************************************************************************************************************************** test.example.com : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
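The workaround described in the summary amounts to letting `reset()` tear down the cached transport unconditionally: after a reboot, the `_connected` guard shown in the plugin source above causes `reset()` to return early, so the next task reuses the dead paramiko session. A sketch of the reporter's workaround (illustrative only, not necessarily the change merged in PR 74459):

```python
class Connection(paramiko_ssh.Connection):  # illustrative subclass, not the merged fix
    def reset(self):
        # Reporter's workaround: drop the "if not self._connected: return"
        # guard so the stale transport left over from before the reboot is
        # always closed and a fresh session is established on the next task.
        self.close()
        self._connect()
```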
https://github.com/ansible/ansible/issues/74081
https://github.com/ansible/ansible/pull/74459
98495ae99db581e8707177169f54c0d5204ba6d2
74b2add460f25b83ae728b06e50d06321a2a9b79
2021-03-31T03:46:41Z
python
2021-04-29T19:11:02Z
test/units/plugins/connection/test_paramiko.py
# # (c) 2020 Red Hat Inc. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type from io import StringIO import pytest from units.compat import unittest from ansible.plugins.connection import paramiko_ssh from ansible.playbook.play_context import PlayContext class TestParamikoConnectionClass(unittest.TestCase): def test_paramiko_connection_module(self): play_context = PlayContext() play_context.prompt = ( '[sudo via ansible, key=ouzmdnewuhucvuaabtjmweasarviygqq] password: ' ) in_stream = StringIO() self.assertIsInstance( paramiko_ssh.Connection(play_context, in_stream), paramiko_ssh.Connection)
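The test above only exercises construction. A regression test for the `reset()` path described in this issue could build on the same fixture; the sketch below is illustrative (the mocking and assertions are assumptions, not the test that actually landed with the fix):

```python
from unittest import mock

def test_paramiko_connection_reset():
    play_context = PlayContext()
    play_context.prompt = (
        '[sudo via ansible, key=ouzmdnewuhucvuaabtjmweasarviygqq] password: '
    )
    in_stream = StringIO()
    conn = paramiko_ssh.Connection(play_context, in_stream)
    conn._connected = True  # simulate an established session
    with mock.patch.object(conn, 'close') as m_close, \
            mock.patch.object(conn, '_connect') as m_connect:
        conn.reset()
        # reset() on a connected plugin should fully cycle the connection
        m_close.assert_called_once_with()
        m_connect.assert_called_once_with()
```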
closed
ansible/ansible
https://github.com/ansible/ansible
74,524
Ignoring ~/.ssh/config configuration file
### Summary After upgrading from "ansible-base" (2.10.8) to "ansible-core" (2.11.0), Ansible ignores the "~/.ssh/config" configuration file. ### Issue Type Bug Report ### Component Name ansible-core ### Ansible Version ```console ansible [core 2.11.0] config file = /etc/ansible/ansible.cfg configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections executable location = /usr/sbin/ansible python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console Nothing ``` ### OS / Environment Arch Linux ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> Create an example entry in "~/.ssh/config" ``` Host 192.168.0.1 Port 2222 ``` Run the following ad-hoc command ``` ansible all -i 192.168.0.1, -m setup ``` The output follows ``` 192.168.0.1 | UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.0.1 port 22: Connection refused", "unreachable": true } ``` As you can see, the output shows "port 22" and not "port 2222". ### Expected Results Ansible is expected to try to connect to the host using the parameters configured in the "~/.ssh/config" file. ### Actual Results ```console ansible [core 2.11.0] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/USER/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/USER/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0] jinja version = 2.11.3 libyaml = True Using /etc/ansible/ansible.cfg as config file setting up inventory plugins Parsed 192.168.0.1, inventory source with host_list plugin Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.9/site-packages/ansible/plugins/callback/minimal.py Attempting to use 'default' callback. Skipping callback 'default', as we already have a stdout callback. Attempting to use 'junit' callback. Attempting to use 'minimal' callback. Skipping callback 'minimal', as we already have a stdout callback. Attempting to use 'oneline' callback. Skipping callback 'oneline', as we already have a stdout callback. Attempting to use 'tree' callback. 
META: ran handlers <192.168.0.1> ESTABLISH SSH CONNECTION FOR USER: None <192.168.0.1> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s) <192.168.0.1> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no) <192.168.0.1> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=22) <192.168.0.1> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no) <192.168.0.1> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10) <192.168.0.1> SSH: Set ssh_common_args: () <192.168.0.1> SSH: Set ssh_extra_args: () <192.168.0.1> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/USER/.ansible/cp/144d72725f) <192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' <192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n') <192.168.0.1> ssh_retry: attempt: 1, ssh return code is 255. 
cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 0 seconds <192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' <192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n') <192.168.0.1> ssh_retry: attempt: 2, ssh return code is 255. 
cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 1 seconds <192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' <192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n') <192.168.0.1> ssh_retry: attempt: 3, ssh return code is 255. 
cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 3 seconds <192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' <192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n') 192.168.0.1 | UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh: OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/USER/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/USER/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/USER/.ansible/cp/144d72725f\" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused", "unreachable": true } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
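The regression is visible in the plugin source quoted later in this record: the `port` option in the ssh plugin's DOCUMENTATION carries a hard `default: 22`, so `get_option('port')` never returns `None` and `-o Port=22` is always appended, overriding `~/.ssh/config`. The `if self.port is not None` guard in `_build_command` already does the right thing once the hard default is gone; a minimal runnable sketch of that guard (hypothetical helper, assuming the fix drops the default so an unset port stays `None`):

```python
def port_args(configured_port):
    # Mirrors the plugin's guard: only inject -o Port=... when a port was
    # explicitly configured; None defers to the ssh binary / ~/.ssh/config.
    if configured_port is None:
        return []
    return ['-o', 'Port=%d' % configured_port]

assert port_args(None) == []                      # unset: honor ~/.ssh/config
assert port_args(2222) == ['-o', 'Port=2222']     # explicit: passed to ssh
```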
https://github.com/ansible/ansible/issues/74524
https://github.com/ansible/ansible/pull/74526
d10100968890d85602099c153b71a23c416930b4
30912b6a47813940592bfcf7cb7d1d6e8d608da4
2021-04-30T20:39:16Z
python
2021-05-04T15:09:05Z
changelogs/fragments/ssh_port_default_fix.yml
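The content of this changelog fragment was not captured in this record. ansible-core changelog fragments are small YAML files; a plausible shape for this one is sketched below (the wording is an assumption, not the actual fragment):

```yaml
bugfixes:
  - ssh connection plugin - stop supplying a default port on the command line
    so that ``Port`` values from ``~/.ssh/config`` are honored again
    (https://github.com/ansible/ansible/issues/74524).
```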
closed
ansible/ansible
https://github.com/ansible/ansible
74,524
Ignoring ~/.ssh/config configuration file
### Summary After upgrading from "ansible-base" (2.10.8) to "ansible-core" (2.11.0), Ansible ignores the "~/.ssh/config" configuration file. ### Issue Type Bug Report ### Component Name ansible-core ### Ansible Version ```console ansible [core 2.11.0] config file = /etc/ansible/ansible.cfg configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections executable location = /usr/sbin/ansible python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console Nothing ``` ### OS / Environment Arch Linux ### Steps to Reproduce <!--- Paste example playbooks or commands between quotes below --> Create an example entry in "~/.ssh/config" ``` Host 192.168.0.1 Port 2222 ``` Run the following ad-hoc command ``` ansible all -i 192.168.0.1, -m setup ``` The output follows ``` 192.168.0.1 | UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.0.1 port 22: Connection refused", "unreachable": true } ``` As you can see, the output shows "port 22" and not "port 2222". ### Expected Results Ansible is expected to try to connect to the host using the parameters configured in the "~/.ssh/config" file. ### Actual Results ```console ansible [core 2.11.0] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/USER/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /home/USER/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0] jinja version = 2.11.3 libyaml = True Using /etc/ansible/ansible.cfg as config file setting up inventory plugins Parsed 192.168.0.1, inventory source with host_list plugin Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.9/site-packages/ansible/plugins/callback/minimal.py Attempting to use 'default' callback. Skipping callback 'default', as we already have a stdout callback. Attempting to use 'junit' callback. Attempting to use 'minimal' callback. Skipping callback 'minimal', as we already have a stdout callback. Attempting to use 'oneline' callback. Skipping callback 'oneline', as we already have a stdout callback. Attempting to use 'tree' callback. 
META: ran handlers <192.168.0.1> ESTABLISH SSH CONNECTION FOR USER: None <192.168.0.1> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s) <192.168.0.1> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no) <192.168.0.1> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=22) <192.168.0.1> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no) <192.168.0.1> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10) <192.168.0.1> SSH: Set ssh_common_args: () <192.168.0.1> SSH: Set ssh_extra_args: () <192.168.0.1> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/USER/.ansible/cp/144d72725f) <192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' <192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n') <192.168.0.1> ssh_retry: attempt: 1, ssh return code is 255. 
cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 0 seconds <192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' <192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n') <192.168.0.1> ssh_retry: attempt: 2, ssh return code is 255. 
cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 1 seconds <192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' <192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n') <192.168.0.1> ssh_retry: attempt: 3, ssh return code is 255. 
cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 3 seconds <192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' <192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n') 192.168.0.1 | UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh: OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/USER/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/USER/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/USER/.ansible/cp/144d72725f\" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused", "unreachable": true } ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
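The precedence at work in these logs is plain OpenSSH behaviour: for most options the first value obtained wins, and command-line `-o` options are read before any configuration file, so the injected `-o Port=22` beats the `Port 2222` from `~/.ssh/config`. With the reporter's setup this is easy to confirm via `ssh -G`, which prints the resolved configuration (output abridged):

```console
$ ssh -G -o Port=22 192.168.0.1 | grep '^port'
port 22
$ ssh -G 192.168.0.1 | grep '^port'
port 2222
```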
https://github.com/ansible/ansible/issues/74524
https://github.com/ansible/ansible/pull/74526
d10100968890d85602099c153b71a23c416930b4
30912b6a47813940592bfcf7cb7d1d6e8d608da4
2021-04-30T20:39:16Z
python
2021-05-04T15:09:05Z
lib/ansible/plugins/connection/ssh.py
# Copyright (c) 2012, Michael DeHaan <[email protected]> # Copyright 2015 Abhijit Menon-Sen <[email protected]> # Copyright 2017 Toshio Kuratomi <[email protected]> # Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = ''' name: ssh short_description: connect via ssh client binary description: - This connection plugin allows ansible to communicate to the target machines via normal ssh command line. - Ansible does not expose a channel to allow communication between the user and the ssh process to accept a password manually to decrypt an ssh key when using this connection plugin (which is the default). The use of ``ssh-agent`` is highly recommended. author: ansible (@core) extends_documentation_fragment: - connection_pipelining version_added: historical options: host: description: Hostname/ip to connect to. vars: - name: inventory_hostname - name: ansible_host - name: ansible_ssh_host - name: delegated_vars['ansible_host'] - name: delegated_vars['ansible_ssh_host'] host_key_checking: description: Determines if ssh should check host keys type: boolean ini: - section: defaults key: 'host_key_checking' - section: ssh_connection key: 'host_key_checking' version_added: '2.5' env: - name: ANSIBLE_HOST_KEY_CHECKING - name: ANSIBLE_SSH_HOST_KEY_CHECKING version_added: '2.5' vars: - name: ansible_host_key_checking version_added: '2.5' - name: ansible_ssh_host_key_checking version_added: '2.5' password: description: Authentication password for the C(remote_user). Can be supplied as CLI option. vars: - name: ansible_password - name: ansible_ssh_pass - name: ansible_ssh_password sshpass_prompt: description: Password prompt that sshpass should search for. Supported by sshpass 1.06 and up. default: '' ini: - section: 'ssh_connection' key: 'sshpass_prompt' env: - name: ANSIBLE_SSHPASS_PROMPT vars: - name: ansible_sshpass_prompt version_added: '2.10' ssh_args: description: Arguments to pass to all ssh cli tools default: '-C -o ControlMaster=auto -o ControlPersist=60s' ini: - section: 'ssh_connection' key: 'ssh_args' env: - name: ANSIBLE_SSH_ARGS vars: - name: ansible_ssh_args version_added: '2.7' cli: - name: ssh_args ssh_common_args: description: Common extra args for all ssh CLI tools ini: - section: 'ssh_connection' key: 'ssh_common_args' version_added: '2.7' env: - name: ANSIBLE_SSH_COMMON_ARGS version_added: '2.7' vars: - name: ansible_ssh_common_args cli: - name: ssh_common_args ssh_executable: default: ssh description: - This defines the location of the ssh binary. It defaults to ``ssh`` which will use the first ssh binary available in $PATH. - This option is usually not required, it might be useful when access to system ssh is restricted, or when using ssh wrappers to connect to remote hosts. env: [{name: ANSIBLE_SSH_EXECUTABLE}] ini: - {key: ssh_executable, section: ssh_connection} #const: ANSIBLE_SSH_EXECUTABLE version_added: "2.2" vars: - name: ansible_ssh_executable version_added: '2.7' sftp_executable: default: sftp description: - This defines the location of the sftp binary. It defaults to ``sftp`` which will use the first binary available in $PATH. env: [{name: ANSIBLE_SFTP_EXECUTABLE}] ini: - {key: sftp_executable, section: ssh_connection} version_added: "2.6" vars: - name: ansible_sftp_executable version_added: '2.7' scp_executable: default: scp description: - This defines the location of the scp binary. 
It defaults to `scp` which will use the first binary available in $PATH. env: [{name: ANSIBLE_SCP_EXECUTABLE}] ini: - {key: scp_executable, section: ssh_connection} version_added: "2.6" vars: - name: ansible_scp_executable version_added: '2.7' scp_extra_args: description: Extra exclusive to the ``scp`` CLI vars: - name: ansible_scp_extra_args env: - name: ANSIBLE_SCP_EXTRA_ARGS version_added: '2.7' ini: - key: scp_extra_args section: ssh_connection version_added: '2.7' cli: - name: scp_extra_args sftp_extra_args: description: Extra exclusive to the ``sftp`` CLI vars: - name: ansible_sftp_extra_args env: - name: ANSIBLE_SFTP_EXTRA_ARGS version_added: '2.7' ini: - key: sftp_extra_args section: ssh_connection version_added: '2.7' cli: - name: sftp_extra_args ssh_extra_args: description: Extra exclusive to the 'ssh' CLI vars: - name: ansible_ssh_extra_args env: - name: ANSIBLE_SSH_EXTRA_ARGS version_added: '2.7' ini: - key: ssh_extra_args section: ssh_connection version_added: '2.7' cli: - name: ssh_extra_args retries: description: Number of attempts to connect. default: 3 type: integer env: - name: ANSIBLE_SSH_RETRIES ini: - section: connection key: retries - section: ssh_connection key: retries vars: - name: ansible_ssh_retries version_added: '2.7' port: description: Remote port to connect to. type: int default: 22 ini: - section: defaults key: remote_port env: - name: ANSIBLE_REMOTE_PORT vars: - name: ansible_port - name: ansible_ssh_port remote_user: description: - User name with which to login to the remote server, normally set by the remote_user keyword. - If no user is supplied, Ansible will let the ssh client binary choose the user as it normally ini: - section: defaults key: remote_user env: - name: ANSIBLE_REMOTE_USER vars: - name: ansible_user - name: ansible_ssh_user cli: - name: user pipelining: env: - name: ANSIBLE_PIPELINING - name: ANSIBLE_SSH_PIPELINING ini: - section: connection key: pipelining - section: ssh_connection key: pipelining vars: - name: ansible_pipelining - name: ansible_ssh_pipelining private_key_file: description: - Path to private key file to use for authentication ini: - section: defaults key: private_key_file env: - name: ANSIBLE_PRIVATE_KEY_FILE vars: - name: ansible_private_key_file - name: ansible_ssh_private_key_file cli: - name: private_key_file control_path: description: - This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution. - Since 2.3, if null (default), ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting. - Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`. - Be aware that this setting is ignored if `-o ControlPath` is set in ssh args. env: - name: ANSIBLE_SSH_CONTROL_PATH ini: - key: control_path section: ssh_connection vars: - name: ansible_control_path version_added: '2.7' control_path_dir: default: ~/.ansible/cp description: - This sets the directory to use for ssh control path if the control path setting is null. - Also, provides the `%(directory)s` variable for the control path setting. 
env: - name: ANSIBLE_SSH_CONTROL_PATH_DIR ini: - section: ssh_connection key: control_path_dir vars: - name: ansible_control_path_dir version_added: '2.7' sftp_batch_mode: default: 'yes' description: 'TODO: write it' env: [{name: ANSIBLE_SFTP_BATCH_MODE}] ini: - {key: sftp_batch_mode, section: ssh_connection} type: bool vars: - name: ansible_sftp_batch_mode version_added: '2.7' ssh_transfer_method: default: smart description: - "Preferred method to use when transferring files over ssh" - Setting to 'smart' (default) will try them in order, until one succeeds or they all fail - Using 'piped' creates an ssh pipe with ``dd`` on either side to copy the data choices: ['sftp', 'scp', 'piped', 'smart'] env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}] ini: - {key: transfer_method, section: ssh_connection} scp_if_ssh: default: smart description: - "Preferred method to use when transfering files over ssh" - When set to smart, Ansible will try them until one succeeds or they all fail - If set to True, it will force 'scp', if False it will use 'sftp' env: [{name: ANSIBLE_SCP_IF_SSH}] ini: - {key: scp_if_ssh, section: ssh_connection} vars: - name: ansible_scp_if_ssh version_added: '2.7' use_tty: version_added: '2.5' default: 'yes' description: add -tt to ssh commands to force tty allocation env: [{name: ANSIBLE_SSH_USETTY}] ini: - {key: usetty, section: ssh_connection} type: bool vars: - name: ansible_ssh_use_tty version_added: '2.7' timeout: default: 10 description: - This is the default ammount of time we will wait while establishing an ssh connection - It also controls how long we can wait to access reading the connection once established (select on the socket) env: - name: ANSIBLE_TIMEOUT - name: ANSIBLE_SSH_TIMEOUT version_added: '2.11' ini: - key: timeout section: defaults - key: timeout section: ssh_connection version_added: '2.11' vars: - name: ansible_ssh_timeout version_added: '2.11' cli: - name: timeout type: integer ''' import errno import fcntl import hashlib import os import pty import re import subprocess import time from functools import wraps from ansible import constants as C from ansible.errors import ( AnsibleAuthenticationFailure, AnsibleConnectionFailure, AnsibleError, AnsibleFileNotFound, ) from ansible.errors import AnsibleOptionsError from ansible.module_utils.compat import selectors from ansible.module_utils.six import PY3, text_type, binary_type from ansible.module_utils.six.moves import shlex_quote from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean from ansible.plugins.connection import ConnectionBase, BUFSIZE from ansible.plugins.shell.powershell import _parse_clixml from ansible.utils.display import Display from ansible.utils.path import unfrackpath, makedirs_safe display = Display() b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception # while invoking a script via -m b'PHP Parse error:', # Php always returns error 255 ) SSHPASS_AVAILABLE = None class AnsibleControlPersistBrokenPipeError(AnsibleError): ''' ControlPersist broken pipe ''' pass def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display): # sshpass errors if command == b'sshpass': # Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account. if return_tuple[0] == 5: msg = 'Invalid/incorrect username/password. 
Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries) if remaining_retries <= 0: msg = 'Invalid/incorrect password:' if no_log: msg = '{0} <error censored due to no log>'.format(msg) else: msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip()) raise AnsibleAuthenticationFailure(msg) # sshpass returns codes are 1-6. We handle 5 previously, so this catches other scenarios. # No exception is raised, so the connection is retried - except when attempting to use # sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly. elif return_tuple[0] in [1, 2, 3, 4, 6]: msg = 'sshpass error:' if no_log: msg = '{0} <error censored due to no log>'.format(msg) else: details = to_native(return_tuple[2]).rstrip() if "sshpass: invalid option -- 'P'" in details: details = 'Installed sshpass version does not support customized password prompts. ' \ 'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.' raise AnsibleError('{0} {1}'.format(msg, details)) msg = '{0} {1}'.format(msg, details) if return_tuple[0] == 255: SSH_ERROR = True for signature in b_NOT_SSH_ERRORS: if signature in return_tuple[1]: SSH_ERROR = False break if SSH_ERROR: msg = "Failed to connect to the host via ssh:" if no_log: msg = '{0} <error censored due to no log>'.format(msg) else: msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip()) raise AnsibleConnectionFailure(msg) # For other errors, no exception is raised so the connection is retried and we only log the messages if 1 <= return_tuple[0] <= 254: msg = u"Failed to connect to the host via ssh:" if no_log: msg = u'{0} <error censored due to no log>'.format(msg) else: msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip()) display.vvv(msg, host=host) def _ssh_retry(func): """ Decorator to retry ssh/scp/sftp in the case of a connection failure Will retry if: * an exception is caught * ssh returns 255 Will not retry if * sshpass returns 5 (invalid password, to prevent account lockouts) * remaining_tries is < 2 * retries limit reached """ @wraps(func) def wrapped(self, *args, **kwargs): remaining_tries = int(self.get_option('retries')) + 1 cmd_summary = u"%s..." 
% to_text(args[0]) conn_password = self.get_option('password') or self._play_context.password for attempt in range(remaining_tries): cmd = args[0] if attempt != 0 and conn_password and isinstance(cmd, list): # If this is a retry, the fd/pipe for sshpass is closed, and we need a new one self.sshpass_pipe = os.pipe() cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict') try: try: return_tuple = func(self, *args, **kwargs) # TODO: this should come from task if self._play_context.no_log: display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host) else: display.vvv(return_tuple, host=self.host) # 0 = success # 1-254 = remote command return code # 255 could be a failure from the ssh command itself except (AnsibleControlPersistBrokenPipeError): # Retry one more time because of the ControlPersist broken pipe (see #16731) cmd = args[0] if conn_password and isinstance(cmd, list): # This is a retry, so the fd/pipe for sshpass is closed, and we need a new one self.sshpass_pipe = os.pipe() cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict') display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE") return_tuple = func(self, *args, **kwargs) remaining_retries = remaining_tries - attempt - 1 _handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host) break # 5 = Invalid/incorrect password from sshpass except AnsibleAuthenticationFailure: # Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries raise except (AnsibleConnectionFailure, Exception) as e: if attempt == remaining_tries - 1: raise else: pause = 2 ** attempt - 1 if pause > 30: pause = 30 if isinstance(e, AnsibleConnectionFailure): msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause) else: msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), " u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause)) display.vv(msg, host=self.host) time.sleep(pause) continue return return_tuple return wrapped class Connection(ConnectionBase): ''' ssh based connections ''' transport = 'ssh' has_pipelining = True def __init__(self, *args, **kwargs): super(Connection, self).__init__(*args, **kwargs) # TODO: all should come from get_option(), but not might be set at this point yet self.host = self._play_context.remote_addr self.port = self._play_context.port self.user = self._play_context.remote_user self.control_path = None self.control_path_dir = None # Windows operates differently from a POSIX connection/shell plugin, # we need to set various properties to ensure SSH on Windows continues # to work if getattr(self._shell, "_IS_WINDOWS", False): self.has_native_async = True self.always_pipeline_modules = True self.module_implementation_preferences = ('.ps1', '.exe', '') self.allow_executable = False # The connection is created by running ssh/scp/sftp from the exec_command, # put_file, and fetch_file methods, so we don't need to do any connection # management here. 
def _connect(self): return self @staticmethod def _create_control_path(host, port, user, connection=None, pid=None): '''Make a hash for the controlpath based on con attributes''' pstring = '%s-%s-%s' % (host, port, user) if connection: pstring += '-%s' % connection if pid: pstring += '-%s' % to_text(pid) m = hashlib.sha1() m.update(to_bytes(pstring)) digest = m.hexdigest() cpath = '%(directory)s/' + digest[:10] return cpath @staticmethod def _sshpass_available(): global SSHPASS_AVAILABLE # We test once if sshpass is available, and remember the result. It # would be nice to use distutils.spawn.find_executable for this, but # distutils isn't always available; shutils.which() is Python3-only. if SSHPASS_AVAILABLE is None: try: p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p.communicate() SSHPASS_AVAILABLE = True except OSError: SSHPASS_AVAILABLE = False return SSHPASS_AVAILABLE @staticmethod def _persistence_controls(b_command): ''' Takes a command array and scans it for ControlPersist and ControlPath settings and returns two booleans indicating whether either was found. This could be smarter, e.g. returning false if ControlPersist is 'no', but for now we do it simple way. ''' controlpersist = False controlpath = False for b_arg in (a.lower() for a in b_command): if b'controlpersist' in b_arg: controlpersist = True elif b'controlpath' in b_arg: controlpath = True return controlpersist, controlpath def _add_args(self, b_command, b_args, explanation): """ Adds arguments to the ssh command and displays a caller-supplied explanation of why. :arg b_command: A list containing the command to add the new arguments to. This list will be modified by this method. :arg b_args: An iterable of new arguments to add. This iterable is used more than once so it must be persistent (ie: a list is okay but a StringIO would not) :arg explanation: A text string containing explaining why the arguments were added. It will be displayed with a high enough verbosity. .. note:: This function does its work via side-effect. The b_command list has the new arguments appended. """ display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self._play_context.remote_addr) b_command += b_args def _build_command(self, binary, subsystem, *other_args): ''' Takes a executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command wrapped in local ssh shell commands and ready for execution. :arg binary: actual executable to use to execute command. :arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names. :arg other_args: dict of, value pairs passed as arguments to the ssh binary ''' b_command = [] conn_password = self.get_option('password') or self._play_context.password # # First, the command to invoke # # If we want to use password authentication, we have to set up a pipe to # write the password to sshpass. 
if conn_password: if not self._sshpass_available(): raise AnsibleError("to use the 'ssh' connection type with passwords, you must install the sshpass program") self.sshpass_pipe = os.pipe() b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')] password_prompt = self.get_option('sshpass_prompt') if password_prompt: b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')] b_command += [to_bytes(binary, errors='surrogate_or_strict')] # # Next, additional arguments based on the configuration. # # sftp batch mode allows us to correctly catch failed transfers, but can # be disabled if the client side doesn't support the option. However, # sftp batch mode does not prompt for passwords so it must be disabled # if not using controlpersist and using sshpass if subsystem == 'sftp' and self.get_option('sftp_batch_mode'): if conn_password: b_args = [b'-o', b'BatchMode=no'] self._add_args(b_command, b_args, u'disable batch mode for sshpass') b_command += [b'-b', b'-'] if self._play_context.verbosity > 3: b_command.append(b'-vvv') # Next, we add ssh_args ssh_args = self.get_option('ssh_args') if ssh_args: b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(ssh_args)] self._add_args(b_command, b_args, u"ansible.cfg set ssh_args") # Now we add various arguments that have their own specific settings defined in docs above. if not self.get_option('host_key_checking'): b_args = (b"-o", b"StrictHostKeyChecking=no") self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled") self.port = self.get_option('port') if self.port is not None: b_args = (b"-o", b"Port=" + to_bytes(self.port, nonstring='simplerepr', errors='surrogate_or_strict')) self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set") key = self.get_option('private_key_file') if key: b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"') self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set") if not conn_password: self._add_args( b_command, ( b"-o", b"KbdInteractiveAuthentication=no", b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey", b"-o", b"PasswordAuthentication=no" ), u"ansible_password/ansible_ssh_password not set" ) self.user = self.get_option('remote_user') if self.user: self._add_args( b_command, (b"-o", b'User="%s"' % to_bytes(self.user, errors='surrogate_or_strict')), u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set" ) timeout = self.get_option('timeout') self._add_args( b_command, (b"-o", b"ConnectTimeout=" + to_bytes(timeout, errors='surrogate_or_strict', nonstring='simplerepr')), u"ANSIBLE_TIMEOUT/timeout set" ) # Add in any common or binary-specific arguments from the PlayContext # (i.e. inventory or task settings or overrides on the command line). for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)): attr = self.get_option(opt) if attr is not None: b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)] self._add_args(b_command, b_args, u"Set %s" % opt) # Check if ControlPersist is enabled and add a ControlPath if one hasn't # already been set. 
controlpersist, controlpath = self._persistence_controls(b_command) if controlpersist: self._persistent = True if not controlpath: self.control_path_dir = self.get_option('control_path_dir') cpdir = unfrackpath(self.control_path_dir) b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict') # The directory must exist and be writable. makedirs_safe(b_cpdir, 0o700) if not os.access(b_cpdir, os.W_OK): raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir)) self.control_path = self.get_option('control_path') if not self.control_path: self.control_path = self._create_control_path( self.host, self.port, self.user ) b_args = (b"-o", b"ControlPath=" + to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict')) self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath") # Finally, we add any caller-supplied extras. if other_args: b_command += [to_bytes(a) for a in other_args] return b_command def _send_initial_data(self, fh, in_data, ssh_process): ''' Writes initial data to the stdin filehandle of the subprocess and closes it. (The handle must be closed; otherwise, for example, "sftp -b -" will just hang forever waiting for more commands.) ''' display.debug(u'Sending initial data') try: fh.write(to_bytes(in_data)) fh.close() except (OSError, IOError) as e: # The ssh connection may have already terminated at this point, with a more useful error # Only raise AnsibleConnectionFailure if the ssh process is still alive time.sleep(0.001) ssh_process.poll() if getattr(ssh_process, 'returncode', None) is None: raise AnsibleConnectionFailure( 'Data could not be sent to remote host "%s". Make sure this host can be reached ' 'over ssh: %s' % (self.host, to_native(e)), orig_exc=e ) display.debug(u'Sent initial data (%d bytes)' % len(in_data)) # Used by _run() to kill processes on failures @staticmethod def _terminate_process(p): """ Terminate a process, ignoring errors """ try: p.terminate() except (OSError, IOError): pass # This is separate from _run() because we need to do the same thing for stdout # and stderr. def _examine_output(self, source, state, b_chunk, sudoable): ''' Takes a string, extracts complete lines from it, tests to see if they are a prompt, error message, etc., and sets appropriate flags in self. Prompt and success lines are removed. Returns the processed (i.e. possibly-edited) output and the unprocessed remainder (to be processed with the next chunk) as strings. 
''' output = [] for b_line in b_chunk.splitlines(True): display_line = to_text(b_line).rstrip('\r\n') suppress_output = False # display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line)) if self.become.expect_prompt() and self.become.check_password_prompt(b_line): display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line)) self._flags['become_prompt'] = True suppress_output = True elif self.become.success and self.become.check_success(b_line): display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line)) self._flags['become_success'] = True suppress_output = True elif sudoable and self.become.check_incorrect_password(b_line): display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line)) self._flags['become_error'] = True elif sudoable and self.become.check_missing_password(b_line): display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line)) self._flags['become_nopasswd_error'] = True if not suppress_output: output.append(b_line) # The chunk we read was most likely a series of complete lines, but just # in case the last line was incomplete (and not a prompt, which we would # have removed from the output), we retain it to be processed with the # next chunk. remainder = b'' if output and not output[-1].endswith(b'\n'): remainder = output[-1] output = output[:-1] return b''.join(output), remainder def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True): ''' Starts the command and communicates with it until it ends. ''' # We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen display_cmd = u' '.join(shlex_quote(to_text(c)) for c in cmd) display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host) # Start the given command. If we don't need to pipeline data, we can try # to use a pseudo-tty (ssh will have been invoked with -tt). If we are # pipelining data, or can't create a pty, we fall back to using plain # old pipes. p = None if isinstance(cmd, (text_type, binary_type)): cmd = to_bytes(cmd) else: cmd = list(map(to_bytes, cmd)) conn_password = self.get_option('password') or self._play_context.password if not in_data: try: # Make sure stdin is a proper pty to avoid tcgetattr errors master, slave = pty.openpty() if PY3 and conn_password: # pylint: disable=unexpected-keyword-arg p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe) else: p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdin = os.fdopen(master, 'wb', 0) os.close(slave) except (OSError, IOError): p = None if not p: try: if PY3 and conn_password: # pylint: disable=unexpected-keyword-arg p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe) else: p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdin = p.stdin except (OSError, IOError) as e: raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e)) # If we are using SSH password authentication, write the password into # the pipe we opened in _build_command. if conn_password: os.close(self.sshpass_pipe[0]) try: os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n') except OSError as e: # Ignore broken pipe errors if the sshpass process has exited. 
if e.errno != errno.EPIPE or p.poll() is None: raise os.close(self.sshpass_pipe[1]) # # SSH state machine # # Now we read and accumulate output from the running process until it # exits. Depending on the circumstances, we may also need to write an # escalation password and/or pipelined input to the process. states = [ 'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit' ] # Are we requesting privilege escalation? Right now, we may be invoked # to execute sftp/scp with sudoable=True, but we can request escalation # only when using ssh. Otherwise we can send initial data straightaway. state = states.index('ready_to_send') if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable: prompt = getattr(self.become, 'prompt', None) if prompt: # We're requesting escalation with a password, so we have to # wait for a password prompt. state = states.index('awaiting_prompt') display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt))) elif self.become and self.become.success: # We're requesting escalation without a password, so we have to # detect success/failure before sending any initial data. state = states.index('awaiting_escalation') display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success))) # We store accumulated stdout and stderr output from the process here, # but strip any privilege escalation prompt/confirmation lines first. # Output is accumulated into tmp_*, complete lines are extracted into # an array, then checked and removed or copied to stdout or stderr. We # set any flags based on examining the output in self._flags. b_stdout = b_stderr = b'' b_tmp_stdout = b_tmp_stderr = b'' self._flags = dict( become_prompt=False, become_success=False, become_error=False, become_nopasswd_error=False ) # select timeout should be longer than the connect timeout, otherwise # they will race each other when we can't connect, and the connect # timeout usually fails timeout = 2 + self.get_option('timeout') for fd in (p.stdout, p.stderr): fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK) # TODO: bcoca would like to use SelectSelector() when open # select is faster when filehandles is low and we only ever handle 1. selector = selectors.DefaultSelector() selector.register(p.stdout, selectors.EVENT_READ) selector.register(p.stderr, selectors.EVENT_READ) # If we can send initial data without waiting for anything, we do so # before we start polling if states[state] == 'ready_to_send' and in_data: self._send_initial_data(stdin, in_data, p) state += 1 try: while True: poll = p.poll() events = selector.select(timeout) # We pay attention to timeouts only while negotiating a prompt. if not events: # We timed out if state <= states.index('awaiting_escalation'): # If the process has already exited, then it's not really a # timeout; we'll let the normal error handling deal with it. if poll is not None: break self._terminate_process(p) raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout))) # Read whatever output is available on stdout and stderr, and stop # listening to the pipe if it's been closed. for key, event in events: if key.fileobj == p.stdout: b_chunk = p.stdout.read() if b_chunk == b'': # stdout has been closed, stop watching it selector.unregister(p.stdout) # When ssh has ControlMaster (+ControlPath/Persist) enabled, the # first connection goes into the background and we never see EOF # on stderr. 
                            # If we see EOF on stdout, lower the select timeout
                            # to reduce the time wasted selecting on stderr if we observe
                            # that the process has not yet exited after this EOF. Otherwise
                            # we may spend a long timeout period waiting for an EOF that is
                            # not going to arrive until the persisted connection closes.
                            timeout = 1
                        b_tmp_stdout += b_chunk
                        display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
                    elif key.fileobj == p.stderr:
                        b_chunk = p.stderr.read()
                        if b_chunk == b'':
                            # stderr has been closed, stop watching it
                            selector.unregister(p.stderr)
                        b_tmp_stderr += b_chunk
                        display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))

                # We examine the output line-by-line until we have negotiated any
                # privilege escalation prompt and subsequent success/error message.
                # Afterwards, we can accumulate output without looking at it.

                if state < states.index('ready_to_send'):
                    if b_tmp_stdout:
                        b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
                        b_stdout += b_output
                        b_tmp_stdout = b_unprocessed

                    if b_tmp_stderr:
                        b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
                        b_stderr += b_output
                        b_tmp_stderr = b_unprocessed
                else:
                    b_stdout += b_tmp_stdout
                    b_stderr += b_tmp_stderr
                    b_tmp_stdout = b_tmp_stderr = b''

                # If we see a privilege escalation prompt, we send the password.
                # (If we're expecting a prompt but the escalation succeeds, we
                # didn't need the password and can carry on regardless.)

                if states[state] == 'awaiting_prompt':
                    if self._flags['become_prompt']:
                        display.debug(u'Sending become_password in response to prompt')
                        become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
                        stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
                        # On python3 stdin is a BufferedWriter, and we don't have a guarantee
                        # that the write will happen without a flush
                        stdin.flush()
                        self._flags['become_prompt'] = False
                        state += 1
                    elif self._flags['become_success']:
                        state += 1

                # We've requested escalation (with or without a password), now we
                # wait for an error message or a successful escalation.

                if states[state] == 'awaiting_escalation':
                    if self._flags['become_success']:
                        display.vvv(u'Escalation succeeded')
                        self._flags['become_success'] = False
                        state += 1
                    elif self._flags['become_error']:
                        display.vvv(u'Escalation failed')
                        self._terminate_process(p)
                        self._flags['become_error'] = False
                        raise AnsibleError('Incorrect %s password' % self.become.name)
                    elif self._flags['become_nopasswd_error']:
                        display.vvv(u'Escalation requires password')
                        self._terminate_process(p)
                        self._flags['become_nopasswd_error'] = False
                        raise AnsibleError('Missing %s password' % self.become.name)
                    elif self._flags['become_prompt']:
                        # This shouldn't happen, because we should see the "Sorry,
                        # try again" message first.
                        display.vvv(u'Escalation prompt repeated')
                        self._terminate_process(p)
                        self._flags['become_prompt'] = False
                        raise AnsibleError('Incorrect %s password' % self.become.name)

                # Once we're sure that the privilege escalation prompt, if any, has
                # been dealt with, we can send any initial data and start waiting
                # for output.

                if states[state] == 'ready_to_send':
                    if in_data:
                        self._send_initial_data(stdin, in_data, p)
                    state += 1

                # Now we're awaiting_exit: has the child process exited? If it has,
                # and we've read all available output from it, we're done.
                if poll is not None:
                    if not selector.get_map() or not events:
                        break

                    # We should not see further writes to the stdout/stderr file
                    # descriptors after the process has closed, set the select
                    # timeout to gather any last writes we may have missed.
                    timeout = 0
                    continue

                # If the process has not yet exited, but we've already read EOF from
                # its stdout and stderr (and thus no longer watching any file
                # descriptors), we can just wait for it to exit.

                elif not selector.get_map():
                    p.wait()
                    break

                # Otherwise there may still be outstanding data to read.
        finally:
            selector.close()

        # close stdin, stdout, and stderr after process is terminated and
        # stdout/stderr are read completely (see also issues #848, #64768).
        stdin.close()
        p.stdout.close()
        p.stderr.close()

        if self.get_option('host_key_checking'):
            if cmd[0] == b"sshpass" and p.returncode == 6:
                raise AnsibleError('Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
                                   'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')

        controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
        if p.returncode != 0 and controlpersisterror:
            raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
                               '(or ssh_args in [ssh_connection] section of the config file) before running again')

        # If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
        # we raise a special exception so that we can retry a connection.
        controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
        if p.returncode == 255:
            additional = to_native(b_stderr)
            if controlpersist_broken_pipe:
                raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
            elif in_data and checkrc:
                raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
                                               % (self.host, additional))

        return (p.returncode, b_stdout, b_stderr)

    @_ssh_retry
    def _run(self, cmd, in_data, sudoable=True, checkrc=True):
        """Wrapper around _bare_run that retries the connection
        """
        return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)

    @_ssh_retry
    def _file_transport_command(self, in_path, out_path, sftp_action):
        # scp and sftp require square brackets for IPv6 addresses, but
        # accept them for hostnames and IPv4 addresses too.
        host = '[%s]' % self.host

        smart_methods = ['sftp', 'scp', 'piped']

        # Windows does not support dd so we cannot use the piped method
        if getattr(self._shell, "_IS_WINDOWS", False):
            smart_methods.remove('piped')

        # Transfer methods to try
        methods = []

        # Use the transfer_method option if set, otherwise use scp_if_ssh
        ssh_transfer_method = self.get_option('ssh_transfer_method')
        if ssh_transfer_method is not None:
            if ssh_transfer_method == 'smart':
                methods = smart_methods
            else:
                methods = [ssh_transfer_method]
        else:
            # since this can be a non-bool now, we need to handle it correctly
            scp_if_ssh = self.get_option('scp_if_ssh')
            if not isinstance(scp_if_ssh, bool):
                scp_if_ssh = scp_if_ssh.lower()
                if scp_if_ssh in BOOLEANS:
                    scp_if_ssh = boolean(scp_if_ssh, strict=False)
                elif scp_if_ssh != 'smart':
                    raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
            if scp_if_ssh == 'smart':
                methods = smart_methods
            elif scp_if_ssh is True:
                methods = ['scp']
            else:
                methods = ['sftp']

        for method in methods:
            returncode = stdout = stderr = None
            if method == 'sftp':
                cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
                in_data = u"{0} {1} {2}\n".format(sftp_action, shlex_quote(in_path), shlex_quote(out_path))
                in_data = to_bytes(in_data, nonstring='passthru')
                (returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
            elif method == 'scp':
                scp = self.get_option('scp_executable')

                if sftp_action == 'get':
                    cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
                else:
                    cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
                in_data = None
                (returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
            elif method == 'piped':
                if sftp_action == 'get':
                    # we pass sudoable=False to disable pty allocation, which
                    # would end up mixing stdout/stderr and screwing with newlines
                    (returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
                    with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
                        out_file.write(stdout)
                else:
                    with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
                        in_data = to_bytes(f.read(), nonstring='passthru')
                    if not in_data:
                        count = ' count=0'
                    else:
                        count = ''
                    (returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)

            # Check the return code and rollover to next method if failed
            if returncode == 0:
                return (returncode, stdout, stderr)
            else:
                # If not in smart mode, the data will be printed by the raise below
                if len(methods) > 1:
                    display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
                    display.debug(u'%s' % to_text(stdout))
                    display.debug(u'%s' % to_text(stderr))

        if returncode == 255:
            raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
        else:
            raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
                               (to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))

    def _escape_win_path(self, path):
        """ converts a Windows path to one that's supported by SFTP and SCP """
        # If using a root path then we need to start with /
        prefix = ""
        if re.match(r'^\w{1}:', path):
            prefix = "/"

        # Convert all '\' to '/'
        return "%s%s" % (prefix, path.replace("\\", "/"))

    #
    # Main public methods
    #
    def exec_command(self, cmd, in_data=None, sudoable=True):
        ''' run a command on the remote host '''

        super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)

        display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self.user), host=self._play_context.remote_addr)

        if getattr(self._shell, "_IS_WINDOWS", False):
            # Become method 'runas' is done in the wrapper that is executed,
            # need to disable sudoable so the bare_run is not waiting for a
            # prompt that will not occur
            sudoable = False

            # Make sure our first command is to set the console encoding to
            # utf-8, this must be done via chcp to get utf-8 (65001)
            cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
            cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
            cmd = ' '.join(cmd_parts)

        # we can only use tty when we are not pipelining the modules. piping
        # data into /usr/bin/python inside a tty automatically invokes the
        # python interactive-mode but the modules are not compatible with the
        # interactive-mode ("unexpected indent" mainly because of empty lines)

        ssh_executable = self.get_option('ssh_executable')

        # -tt can cause various issues in some environments so allow the user
        # to disable it as a troubleshooting method.
        use_tty = self.get_option('use_tty')

        if not in_data and sudoable and use_tty:
            args = ('-tt', self.host, cmd)
        else:
            args = (self.host, cmd)

        cmd = self._build_command(ssh_executable, 'ssh', *args)
        (returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)

        # When running on Windows, stderr may contain CLIXML encoded output
        if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
            stderr = _parse_clixml(stderr)

        return (returncode, stdout, stderr)

    def put_file(self, in_path, out_path):
        ''' transfer a file from local to remote '''

        super(Connection, self).put_file(in_path, out_path)

        display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
        if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
            raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))

        if getattr(self._shell, "_IS_WINDOWS", False):
            out_path = self._escape_win_path(out_path)

        return self._file_transport_command(in_path, out_path, 'put')

    def fetch_file(self, in_path, out_path):
        ''' fetch a file from remote to local '''

        super(Connection, self).fetch_file(in_path, out_path)

        display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)

        # need to add / if path is rooted
        if getattr(self._shell, "_IS_WINDOWS", False):
            in_path = self._escape_win_path(in_path)

        return self._file_transport_command(in_path, out_path, 'get')

    def reset(self):

        run_reset = False

        # If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
        # only run the reset if the ControlPath already exists or if it isn't configured and ControlPersist is set
        # 'check' will determine this.
        cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'check', self.host)
        display.vvv(u'sending connection check: %s' % to_text(cmd))
        p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        status_code = p.wait()
        if status_code != 0:
            display.vvv(u"No connection to reset: %s" % to_text(stderr))
        else:
            run_reset = True

        if run_reset:
            cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'stop', self.host)
            display.vvv(u'sending connection stop: %s' % to_text(cmd))
            p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            stdout, stderr = p.communicate()
            status_code = p.wait()
            if status_code != 0:
                display.warning(u"Failed to reset connection:%s" % to_text(stderr))

        self.close()

    def close(self):
        self._connected = False
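The `pass_fds=self.sshpass_pipe` calls above rely on a small trick: the parent creates a pipe, lets the child inherit the read end, and writes the password after the fork so it never appears on the command line. Below is a minimal standalone sketch of just that technique; plain `cat /dev/fd/N` stands in for `sshpass -dN ssh ...`, and nothing here is ansible code:

```python
# Minimal sketch: hand a secret to a child process over an inherited fd.
import os
import subprocess

r, w = os.pipe()

# pass_fds keeps the read end open (and inheritable) in the child.
p = subprocess.Popen(['cat', '/dev/fd/%d' % r], pass_fds=(r,))

os.close(r)               # parent no longer needs the read end
os.write(w, b'secret\n')  # child reads the secret from its fd
os.close(w)               # EOF lets the child exit
p.wait()
```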
closed
ansible/ansible
https://github.com/ansible/ansible
74,524
Ignoring ~/.ssh/config configuration file
### Summary

After upgrading from "ansible-base (2.10.8)" to "ansible-core (2.11.0)", Ansible is ignoring the "~/.ssh/config" configuration file.

### Issue Type

Bug Report

### Component Name

ansible-core

### Ansible Version

```console
ansible [core 2.11.0]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/sbin/ansible
  python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0]
  jinja version = 2.11.3
  libyaml = True
```

### Configuration

```console
Nothing
```

### OS / Environment

Arch Linux

### Steps to Reproduce

Create an example entry in "~/.ssh/config":

```
Host 192.168.0.1
  Port 2222
```

Run the following ad-hoc command:

```
ansible all -i 192.168.0.1, -m setup
```

The output follows:

```
192.168.0.1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.0.1 port 22: Connection refused",
    "unreachable": true
}
```

As you can notice, the output shows "port 22" and not "port 2222".

### Expected Results

Ansible is expected to connect to the host using the parameters configured in the ".ssh/config" file.

### Actual Results

```console
ansible [core 2.11.0]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/USER/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /home/USER/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0]
  jinja version = 2.11.3
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed 192.168.0.1, inventory source with host_list plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.9/site-packages/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
<192.168.0.1> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.0.1> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<192.168.0.1> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<192.168.0.1> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=22)
<192.168.0.1> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<192.168.0.1> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<192.168.0.1> SSH: Set ssh_common_args: ()
<192.168.0.1> SSH: Set ssh_extra_args: ()
<192.168.0.1> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/USER/.ansible/cp/144d72725f)
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 1, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 0 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 2, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 1 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 3, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 3 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
192.168.0.1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/USER/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/USER/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/USER/.ansible/cp/144d72725f\" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused",
    "unreachable": true
}
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74524
https://github.com/ansible/ansible/pull/74526
d10100968890d85602099c153b71a23c416930b4
30912b6a47813940592bfcf7cb7d1d6e8d608da4
2021-04-30T20:39:16Z
python
2021-05-04T15:09:05Z
test/integration/targets/connection_ssh/check_ssh_defaults.yml
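The `-o Port=22` visible in every EXEC line above is the culprit: `ssh(1)` gives command-line options precedence over `ssh_config`, so an unconditional `-o Port=...` masks the user's `Port 2222` stanza. Below is a minimal, illustrative sketch of the kind of guard the linked pull request points toward; the function and variable names are hypothetical, not the actual patch:

```python
# Hypothetical sketch: only emit -o Port=... when a port was explicitly
# configured, so ssh falls back to ~/.ssh/config for everything else.
def port_args(configured_port):
    if configured_port is None:   # nothing set via inventory/ansible.cfg/CLI
        return []                 # defer to the Port directive in ssh_config
    return ['-o', 'Port={0}'.format(configured_port)]


assert port_args(None) == []
assert port_args(2222) == ['-o', 'Port=2222']
```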
closed
ansible/ansible
https://github.com/ansible/ansible
74,524
Ignoring ~/.ssh/config configuration file
### Summary

After upgrading from "ansible-base (2.10.8)" to "ansible-core (2.11.0)", Ansible is ignoring the "~/.ssh/config" configuration file.

### Issue Type

Bug Report

### Component Name

ansible-core

### Ansible Version

```console
ansible [core 2.11.0]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/sbin/ansible
  python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0]
  jinja version = 2.11.3
  libyaml = True
```

### Configuration

```console
Nothing
```

### OS / Environment

Arch Linux

### Steps to Reproduce

Create an example entry in "~/.ssh/config":

```
Host 192.168.0.1
  Port 2222
```

Run the following ad-hoc command:

```
ansible all -i 192.168.0.1, -m setup
```

The output follows:

```
192.168.0.1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.0.1 port 22: Connection refused",
    "unreachable": true
}
```

As you can notice, the output shows "port 22" and not "port 2222".

### Expected Results

Ansible is expected to connect to the host using the parameters configured in the ".ssh/config" file.

### Actual Results

```console
ansible [core 2.11.0]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/USER/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /home/USER/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0]
  jinja version = 2.11.3
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed 192.168.0.1, inventory source with host_list plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.9/site-packages/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
<192.168.0.1> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.0.1> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<192.168.0.1> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<192.168.0.1> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=22)
<192.168.0.1> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<192.168.0.1> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<192.168.0.1> SSH: Set ssh_common_args: ()
<192.168.0.1> SSH: Set ssh_extra_args: ()
<192.168.0.1> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/USER/.ansible/cp/144d72725f)
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 1, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 0 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 2, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 1 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 3, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 3 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
192.168.0.1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/USER/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/USER/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/USER/.ansible/cp/144d72725f\" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused",
    "unreachable": true
}
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74524
https://github.com/ansible/ansible/pull/74526
d10100968890d85602099c153b71a23c416930b4
30912b6a47813940592bfcf7cb7d1d6e8d608da4
2021-04-30T20:39:16Z
python
2021-05-04T15:09:05Z
test/integration/targets/connection_ssh/files/port_overrride_ssh.cfg
closed
ansible/ansible
https://github.com/ansible/ansible
74,524
Ignoring ~/.ssh/config configuration file
### Summary

After upgrading from "ansible-base (2.10.8)" to "ansible-core (2.11.0)", Ansible is ignoring the "~/.ssh/config" configuration file.

### Issue Type

Bug Report

### Component Name

ansible-core

### Ansible Version

```console
ansible [core 2.11.0]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/sbin/ansible
  python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0]
  jinja version = 2.11.3
  libyaml = True
```

### Configuration

```console
Nothing
```

### OS / Environment

Arch Linux

### Steps to Reproduce

Create an example entry in "~/.ssh/config":

```
Host 192.168.0.1
  Port 2222
```

Run the following ad-hoc command:

```
ansible all -i 192.168.0.1, -m setup
```

The output follows:

```
192.168.0.1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.0.1 port 22: Connection refused",
    "unreachable": true
}
```

As you can notice, the output shows "port 22" and not "port 2222".

### Expected Results

Ansible is expected to connect to the host using the parameters configured in the ".ssh/config" file.

### Actual Results

```console
ansible [core 2.11.0]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/USER/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /home/USER/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0]
  jinja version = 2.11.3
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed 192.168.0.1, inventory source with host_list plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.9/site-packages/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
META: ran handlers
<192.168.0.1> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.0.1> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<192.168.0.1> SSH: ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled: (-o)(StrictHostKeyChecking=no)
<192.168.0.1> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=22)
<192.168.0.1> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<192.168.0.1> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<192.168.0.1> SSH: Set ssh_common_args: ()
<192.168.0.1> SSH: Set ssh_extra_args: ()
<192.168.0.1> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/USER/.ansible/cp/144d72725f)
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 1, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 0 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 2, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 1 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
<192.168.0.1> ssh_retry: attempt: 3, ssh return code is 255. cmd ([b'ssh', b'-vvv', b'-C', b'-o', b'ControlMaster=auto', b'-o', b'ControlPersist=60s', b'-o', b'StrictHostKeyChecking=no', b'-o', b'Port=22', b'-o', b'KbdInteractiveAuthentication=no', b'-o', b'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', b'-o', b'PasswordAuthentication=no', b'-o', b'ConnectTimeout=10', b'-o', b'ControlPath=/home/USER/.ansible/cp/144d72725f', b'192.168.0.1', b"/bin/sh -c 'echo ~ && sleep 0'"]...), pausing for 3 seconds
<192.168.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/USER/.ansible/cp/144d72725f 192.168.0.1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.0.1> (255, b'', b'OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/USER/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/USER/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/USER/.ansible/cp/144d72725f" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused\r\n')
192.168.0.1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: OpenSSH_8.6p1, OpenSSL 1.1.1k 25 Mar 2021\r\ndebug1: Reading configuration data /home/USER/.ssh/config\r\ndebug1: /home/USER/.ssh/config line 8: Applying options for 192.168.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 192.168.0.1 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/USER/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/USER/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/USER/.ansible/cp/144d72725f\" does not exist\r\ndebug3: ssh_connect_direct: entering\r\ndebug1: Connecting to 192.168.0.1 [192.168.0.1] port 22.\r\ndebug3: set_sock_tos: set socket 3 IP_TOS 0x48\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 192.168.0.1 port 22: Connection refused\r\nssh: connect to host 192.168.0.1 port 22: Connection refused",
    "unreachable": true
}
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74524
https://github.com/ansible/ansible/pull/74526
d10100968890d85602099c153b71a23c416930b4
30912b6a47813940592bfcf7cb7d1d6e8d608da4
2021-04-30T20:39:16Z
python
2021-05-04T15:09:05Z
test/integration/targets/connection_ssh/runme.sh
#!/usr/bin/env bash

set -ux

# We skip this whole section if the test node doesn't have sshpass on it.
if command -v sshpass > /dev/null; then
    # Check if our sshpass supports -P
    sshpass -P foo > /dev/null
    sshpass_supports_prompt=$?
    if [[ $sshpass_supports_prompt -eq 0 ]]; then
        # If the prompt is wrong, we'll end up hanging (due to sshpass hanging).
        # We should probably do something better here, like timing out in Ansible,
        # but this has been the behavior for a long time, before we supported custom
        # password prompts.
        #
        # So we search for a custom password prompt that is clearly wrong and call
        # ansible with timeout. If we time out, our custom prompt was successfully
        # searched for. It's a weird way of doing things, but it does ensure
        # that the flag gets passed to sshpass.
        timeout 5 ansible -m ping \
            -e ansible_connection=ssh \
            -e ansible_sshpass_prompt=notThis: \
            -e ansible_password=foo \
            -e ansible_user=definitelynotroot \
            -i test_connection.inventory \
            ssh-pipelining
        ret=$?
        # 124 is EXIT_TIMEDOUT from gnu coreutils
        # 143 is 128+SIGTERM(15) from BusyBox
        if [[ $ret -ne 124 && $ret -ne 143 ]]; then
            echo "Expected to time out and we did not. Exiting with failure."
            exit 1
        fi
    else
        ansible -m ping \
            -e ansible_connection=ssh \
            -e ansible_sshpass_prompt=notThis: \
            -e ansible_password=foo \
            -e ansible_user=definitelynotroot \
            -i test_connection.inventory \
            ssh-pipelining | grep 'customized password prompts'
        ret=$?
        [[ $ret -eq 0 ]] || exit $ret
    fi
fi

set -e

# temporary work-around for issues due to new scp filename checking
# https://github.com/ansible/ansible/issues/52640
if [[ "$(scp -T 2>&1)" == "usage: scp "* ]]; then
    # scp supports the -T option
    # work-around required
    scp_args=("-e" "ansible_scp_extra_args=-T")
else
    # scp does not support the -T option
    # no work-around required
    # however we need to put something in the array to keep older versions of bash happy
    scp_args=("-e" "")
fi

# sftp
./posix.sh "$@"
# scp
ANSIBLE_SCP_IF_SSH=true ./posix.sh "$@" "${scp_args[@]}"
# piped
ANSIBLE_SSH_TRANSFER_METHOD=piped ./posix.sh "$@"
closed
ansible/ansible
https://github.com/ansible/ansible
74,274
No python interpreters found with auto_silent on ESXi host
### Summary

Same issue as reported here: https://github.com/ansible/ansible/issues/67266, which was closed because it was "just a warning". It is a bug, however, because the warning occurs even when auto_silent is used. We just need to add an "if is_silent" check to this warning.

Ansible version: 2.9.11

### Issue Type

Bug Report

### Component Name

interpreter_discovery

### Ansible Version

```console
$ ansible --version
ansible 2.9.11
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Dec 5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```

### Configuration

```console
$ ansible-config dump --only-changed
```

### OS / Environment

RHEL 8.2, ESXi 6.7 and 7.0

### Steps to Reproduce

Run setup against an ESXi host with the auto_silent interpreter; see the warning.

### Expected Results

No warning about interpreter failure when silent mode is active.

### Actual Results

```console
Warning about no python interpreter when silent mode is active.
```

### Code of Conduct

I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74274
https://github.com/ansible/ansible/pull/74509
b0389c7f11a5afef8f35d1ea7bed39a7dc86b7be
4627c30b2e269a91a5f81f7d4178e9545026c517
2021-04-14T09:33:47Z
python
2021-05-04T16:01:49Z
changelogs/fragments/74274_interpreter_discovery.yml
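The report's suggested one-line fix maps directly onto the unguarded warning in `discover_interpreter` (the function is included with the next record). Here is a sketch of the guard, mirroring the `if not is_silent:` checks that function already applies to its other warnings; this shows the suggested shape, not necessarily the merged patch verbatim:

```python
# Sketch: queue the "no interpreters" warning only when discovery is not
# running in one of the *_silent modes.
if not found_interpreters:
    if not is_silent:
        action._discovery_warnings.append(
            u'No python interpreters found for host {0} (tried {1})'.format(host, bootstrap_python_list))
    # this is lame, but returning None or throwing an exception is uglier
    return u'/usr/bin/python'
```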
closed
ansible/ansible
https://github.com/ansible/ansible
74,274
No python interpreters found with auto_silent on ESXi host
### Summary

Same issue as reported here: https://github.com/ansible/ansible/issues/67266, which was closed because it was "just a warning". It is a bug, however, because the warning occurs even when auto_silent is used. We just need to add an "if is_silent" check to this warning.

Ansible version: 2.9.11

### Issue Type

Bug Report

### Component Name

interpreter_discovery

### Ansible Version

```console
$ ansible --version
ansible 2.9.11
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Dec 5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```

### Configuration

```console
$ ansible-config dump --only-changed
```

### OS / Environment

RHEL 8.2, ESXi 6.7 and 7.0

### Steps to Reproduce

Run setup against an ESXi host with the auto_silent interpreter; see the warning.

### Expected Results

No warning about interpreter failure when silent mode is active.

### Actual Results

```console
Warning about no python interpreter when silent mode is active.
```

### Code of Conduct

I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74274
https://github.com/ansible/ansible/pull/74509
b0389c7f11a5afef8f35d1ea7bed39a7dc86b7be
4627c30b2e269a91a5f81f7d4178e9545026c517
2021-04-14T09:33:47Z
python
2021-05-04T16:01:49Z
lib/ansible/executor/interpreter_discovery.py
# Copyright: (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import bisect
import json
import pkgutil
import re

from ansible import constants as C
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.distro import LinuxDistribution
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
from distutils.version import LooseVersion
from traceback import format_exc

display = Display()
foundre = re.compile(r'(?s)PLATFORM[\r\n]+(.*)FOUND(.*)ENDFOUND')


class InterpreterDiscoveryRequiredError(Exception):
    def __init__(self, message, interpreter_name, discovery_mode):
        super(InterpreterDiscoveryRequiredError, self).__init__(message)
        self.interpreter_name = interpreter_name
        self.discovery_mode = discovery_mode

    def __str__(self):
        return self.message

    def __repr__(self):
        # TODO: proper repr impl
        return self.message


def discover_interpreter(action, interpreter_name, discovery_mode, task_vars):
    # interpreter discovery is a 2-step process with the target. First, we use a simple shell-agnostic bootstrap to
    # get the system type from uname, and find any random Python that can get us the info we need. For supported
    # target OS types, we'll dispatch a Python script that calls platform.dist() (for older platforms, where available)
    # and brings back /etc/os-release (if present). The proper Python path is looked up in a table of known
    # distros/versions with included Pythons; if nothing is found, depending on the discovery mode, either the
    # default fallback of /usr/bin/python is used (if we know it's there), or discovery fails.

    # FUTURE: add logical equivalence for "python3" in the case of py3-only modules?

    if interpreter_name != 'python':
        raise ValueError('Interpreter discovery not supported for {0}'.format(interpreter_name))

    host = task_vars.get('inventory_hostname', 'unknown')
    res = None
    platform_type = 'unknown'
    found_interpreters = [u'/usr/bin/python']  # fallback value
    is_auto_legacy = discovery_mode.startswith('auto_legacy')
    is_silent = discovery_mode.endswith('_silent')

    try:
        platform_python_map = C.config.get_config_value('INTERPRETER_PYTHON_DISTRO_MAP', variables=task_vars)
        bootstrap_python_list = C.config.get_config_value('INTERPRETER_PYTHON_FALLBACK', variables=task_vars)

        display.vvv(msg=u"Attempting {0} interpreter discovery".format(interpreter_name), host=host)

        # not all command -v impls accept a list of commands, so we have to call it once per python
        command_list = ["command -v '%s'" % py for py in bootstrap_python_list]
        shell_bootstrap = "echo PLATFORM; uname; echo FOUND; {0}; echo ENDFOUND".format('; '.join(command_list))

        # FUTURE: in most cases we probably don't want to use become, but maybe sometimes we do?
        res = action._low_level_execute_command(shell_bootstrap, sudoable=False)

        raw_stdout = res.get('stdout', u'')

        match = foundre.match(raw_stdout)

        if not match:
            display.debug(u'raw interpreter discovery output: {0}'.format(raw_stdout), host=host)
            raise ValueError('unexpected output from Python interpreter discovery')

        platform_type = match.groups()[0].lower().strip()

        found_interpreters = [interp.strip() for interp in match.groups()[1].splitlines() if interp.startswith('/')]

        display.debug(u"found interpreters: {0}".format(found_interpreters), host=host)

        if not found_interpreters:
            action._discovery_warnings.append(u'No python interpreters found for host {0} (tried {1})'.format(host, bootstrap_python_list))
            # this is lame, but returning None or throwing an exception is uglier
            return u'/usr/bin/python'

        if platform_type != 'linux':
            raise NotImplementedError('unsupported platform for extended discovery: {0}'.format(to_native(platform_type)))

        platform_script = pkgutil.get_data('ansible.executor.discovery', 'python_target.py')

        # FUTURE: respect pipelining setting instead of just if the connection supports it?
        if action._connection.has_pipelining:
            res = action._low_level_execute_command(found_interpreters[0], sudoable=False, in_data=platform_script)
        else:
            # FUTURE: implement on-disk case (via script action or ?)
            raise NotImplementedError('pipelining support required for extended interpreter discovery')

        platform_info = json.loads(res.get('stdout'))

        distro, version = _get_linux_distro(platform_info)

        if not distro or not version:
            raise NotImplementedError('unable to get Linux distribution/version info')

        version_map = platform_python_map.get(distro.lower().strip())
        if not version_map:
            raise NotImplementedError('unsupported Linux distribution: {0}'.format(distro))

        platform_interpreter = to_text(_version_fuzzy_match(version, version_map), errors='surrogate_or_strict')

        # provide a transition period for hosts that were using /usr/bin/python previously (but shouldn't have been)
        if is_auto_legacy:
            if platform_interpreter != u'/usr/bin/python' and u'/usr/bin/python' in found_interpreters:
                # FIXME: support comments in sivel's deprecation scanner so we can get reminded on this
                if not is_silent:
                    action._discovery_deprecation_warnings.append(dict(
                        msg=u"Distribution {0} {1} on host {2} should use {3}, but is using "
                            u"/usr/bin/python for backward compatibility with prior Ansible releases. "
                            u"A future Ansible release will default to using the discovered platform "
                            u"python for this host. See {4} for more information"
                            .format(distro, version, host, platform_interpreter,
                                    get_versioned_doclink('reference_appendices/interpreter_discovery.html')),
                        version='2.12'))
                return u'/usr/bin/python'

        if platform_interpreter not in found_interpreters:
            if platform_interpreter not in bootstrap_python_list:
                # sanity check to make sure we looked for it
                if not is_silent:
                    action._discovery_warnings \
                        .append(u"Platform interpreter {0} on host {1} is missing from bootstrap list"
                                .format(platform_interpreter, host))

            if not is_silent:
                action._discovery_warnings \
                    .append(u"Distribution {0} {1} on host {2} should use {3}, but is using {4}, since the "
                            u"discovered platform python interpreter was not present. See {5} "
                            u"for more information."
                            .format(distro, version, host, platform_interpreter, found_interpreters[0],
                                    get_versioned_doclink('reference_appendices/interpreter_discovery.html')))
            return found_interpreters[0]

        return platform_interpreter
    except NotImplementedError as ex:
        display.vvv(msg=u'Python interpreter discovery fallback ({0})'.format(to_text(ex)), host=host)
    except Exception as ex:
        if not is_silent:
            display.warning(msg=u'Unhandled error in Python interpreter discovery for host {0}: {1}'.format(host, to_text(ex)))
            display.debug(msg=u'Interpreter discovery traceback:\n{0}'.format(to_text(format_exc())), host=host)
            if res and res.get('stderr'):
                display.vvv(msg=u'Interpreter discovery remote stderr:\n{0}'.format(to_text(res.get('stderr'))), host=host)

    if not is_silent:
        action._discovery_warnings \
            .append(u"Platform {0} on host {1} is using the discovered Python interpreter at {2}, but future installation of "
                    u"another Python interpreter could change the meaning of that path. See {3} "
                    u"for more information."
                    .format(platform_type, host, found_interpreters[0],
                            get_versioned_doclink('reference_appendices/interpreter_discovery.html')))
    return found_interpreters[0]


def _get_linux_distro(platform_info):
    dist_result = platform_info.get('platform_dist_result', [])

    if len(dist_result) == 3 and any(dist_result):
        return dist_result[0], dist_result[1]

    osrelease_content = platform_info.get('osrelease_content')

    if not osrelease_content:
        return u'', u''

    osr = LinuxDistribution._parse_os_release_content(osrelease_content)

    return osr.get('id', u''), osr.get('version_id', u'')


def _version_fuzzy_match(version, version_map):
    # try exact match first
    res = version_map.get(version)
    if res:
        return res

    sorted_looseversions = sorted([LooseVersion(v) for v in version_map.keys()])

    find_looseversion = LooseVersion(version)

    # slot match; return nearest previous version we're newer than
    kpos = bisect.bisect(sorted_looseversions, find_looseversion)

    if kpos == 0:
        # older than everything in the list, return the oldest version
        # TODO: warning-worthy?
        return version_map.get(sorted_looseversions[0].vstring)

    # TODO: is "past the end of the list" warning-worthy too (at least if it's not a major version match)?

    # return the next-oldest entry that we're newer than...
    return version_map.get(sorted_looseversions[kpos - 1].vstring)
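Note how the `No python interpreters found` warning in `discover_interpreter()` above is appended without consulting `is_silent`, unlike every other warning in the function; that unguarded append is exactly the gap the report describes. A minimal sketch of the guard the reporter proposes follows — an illustration only, since the change actually merged in the linked PR may be shaped differently:

```python
# Sketch of the proposed guard inside discover_interpreter() (assumption: the
# merged fix may differ in detail). All names below come from the file above.
if not found_interpreters:
    if not is_silent:  # honor the auto_silent / auto_legacy_silent modes
        action._discovery_warnings.append(
            u'No python interpreters found for host {0} (tried {1})'.format(
                host, bootstrap_python_list))
    # this is lame, but returning None or throwing an exception is uglier
    return u'/usr/bin/python'
```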
closed
ansible/ansible
https://github.com/ansible/ansible
74404
ansible-inventory --list --toml fails with: Unexpected Exception, this is probably a bug: '7.2'
### Summary

When I try to run `ansible-inventory --list --toml -vvv`, it fails with this exception:

```
ansible-inventory 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible-inventory
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
Using /home/user/ansible/ansible.cfg as config file
host_list declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
script declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
Parsed /home/user/ansible/inventory/hosts inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '7.2'
the full traceback was:

Traceback (most recent call last):
  File "/home/user/.local/bin/ansible-inventory", line 123, in <module>
    exit_code = cli.run()
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 151, in run
    results = self.dump(results)
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 181, in dump
    results = toml_dumps(stuff)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 72, in dumps
    sections[section], section)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 193, in dump_sections
    if not isinstance(o[section], dict):
KeyError: '7.2'
```

`ansible-inventory --list` without `--toml` works fine. The inventory file is in INI format.

### Issue Type

Bug Report

### Component Name

ansible-inventory

### Ansible Version

```console
ansible 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```

### Configuration

```console
ANSIBLE_NOCOWS(/home/user/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/user/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/home/user/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
ANSIBLE_SSH_RETRIES(/home/user/ansible/ansible.cfg) = 1
CACHE_PLUGIN(/home/user/ansible/ansible.cfg) = memory
DEFAULT_FORKS(/home/user/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/home/user/ansible/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/inventory']
DEFAULT_JINJA2_EXTENSIONS(/home/user/ansible/ansible.cfg) = jinja2.ext.do,jinja2.ext.loopcontrols
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/ansible/ansible.cfg) = True
DEFAULT_MANAGED_STR(/home/user/ansible/ansible.cfg) = Ansible managed: {file} modified by {uid} on {host}
DEFAULT_MODULE_NAME(/home/user/ansible/ansible.cfg) = shell
DEFAULT_POLL_INTERVAL(/home/user/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/roles', '/home/user/ansible/roles-sha
DEFAULT_TIMEOUT(/home/user/ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/user/ansible/ansible.cfg) = smart
DEFAULT_VAULT_PASSWORD_FILE(/home/user/ansible/ansible.cfg) = **********
DUPLICATE_YAML_DICT_KEY(/home/user/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/user/ansible/ansible.cfg) = False
```

### OS / Environment

Ubuntu 18.04
Python 3.6.9

### Steps to Reproduce

```
ansible-inventory --list --toml -vvv
```

### Expected Results

Inventory in TOML format.

### Actual Results

```console
ERROR! Unexpected Exception, this is probably a bug: '7.2'
to see the full traceback, use -vvv
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74404
https://github.com/ansible/ansible/pull/74486
68e9e1c999a4181ef88eaf779a3c30ccf82bfa81
38dd49eb005dc784aad4809a9ee98dc84ff60eec
2021-04-23T18:44:20Z
python
2021-05-05T08:42:52Z
changelogs/fragments/74404_ansible_inventory.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74404
ansible-inventory --list --toml fails with: Unexpected Exception, this is probably a bug: '7.2'
### Summary

When I try to run `ansible-inventory --list --toml -vvv`, it fails with this exception:

```
ansible-inventory 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible-inventory
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
Using /home/user/ansible/ansible.cfg as config file
host_list declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
script declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
Parsed /home/user/ansible/inventory/hosts inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '7.2'
the full traceback was:

Traceback (most recent call last):
  File "/home/user/.local/bin/ansible-inventory", line 123, in <module>
    exit_code = cli.run()
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 151, in run
    results = self.dump(results)
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 181, in dump
    results = toml_dumps(stuff)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 72, in dumps
    sections[section], section)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 193, in dump_sections
    if not isinstance(o[section], dict):
KeyError: '7.2'
```

`ansible-inventory --list` without `--toml` works fine. The inventory file is in INI format.

### Issue Type

Bug Report

### Component Name

ansible-inventory

### Ansible Version

```console
ansible 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```

### Configuration

```console
ANSIBLE_NOCOWS(/home/user/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/user/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/home/user/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
ANSIBLE_SSH_RETRIES(/home/user/ansible/ansible.cfg) = 1
CACHE_PLUGIN(/home/user/ansible/ansible.cfg) = memory
DEFAULT_FORKS(/home/user/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/home/user/ansible/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/inventory']
DEFAULT_JINJA2_EXTENSIONS(/home/user/ansible/ansible.cfg) = jinja2.ext.do,jinja2.ext.loopcontrols
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/ansible/ansible.cfg) = True
DEFAULT_MANAGED_STR(/home/user/ansible/ansible.cfg) = Ansible managed: {file} modified by {uid} on {host}
DEFAULT_MODULE_NAME(/home/user/ansible/ansible.cfg) = shell
DEFAULT_POLL_INTERVAL(/home/user/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/roles', '/home/user/ansible/roles-sha
DEFAULT_TIMEOUT(/home/user/ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/user/ansible/ansible.cfg) = smart
DEFAULT_VAULT_PASSWORD_FILE(/home/user/ansible/ansible.cfg) = **********
DUPLICATE_YAML_DICT_KEY(/home/user/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/user/ansible/ansible.cfg) = False
```

### OS / Environment

Ubuntu 18.04
Python 3.6.9

### Steps to Reproduce

```
ansible-inventory --list --toml -vvv
```

### Expected Results

Inventory in TOML format.

### Actual Results

```console
ERROR! Unexpected Exception, this is probably a bug: '7.2'
to see the full traceback, use -vvv
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74404
https://github.com/ansible/ansible/pull/74486
68e9e1c999a4181ef88eaf779a3c30ccf82bfa81
38dd49eb005dc784aad4809a9ee98dc84ff60eec
2021-04-23T18:44:20Z
python
2021-05-05T08:42:52Z
lib/ansible/cli/inventory.py
# Copyright: (c) 2017, Brian Coca <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import sys

import argparse
from operator import attrgetter

from ansible import constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils._text import to_bytes, to_native
from ansible.utils.vars import combine_vars
from ansible.utils.display import Display
from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path

display = Display()

INTERNAL_VARS = frozenset(['ansible_diff_mode',
                           'ansible_config_file',
                           'ansible_facts',
                           'ansible_forks',
                           'ansible_inventory_sources',
                           'ansible_limit',
                           'ansible_playbook_python',
                           'ansible_run_tags',
                           'ansible_skip_tags',
                           'ansible_verbosity',
                           'ansible_version',
                           'inventory_dir',
                           'inventory_file',
                           'inventory_hostname',
                           'inventory_hostname_short',
                           'groups',
                           'group_names',
                           'omit',
                           'playbook_dir', ])


class InventoryCLI(CLI):
    ''' used to display or dump the configured inventory as Ansible sees it '''

    ARGUMENTS = {'host': 'The name of a host to match in the inventory, relevant when using --list',
                 'group': 'The name of a group in the inventory, relevant when using --graph', }

    def __init__(self, args):

        super(InventoryCLI, self).__init__(args)
        self.vm = None
        self.loader = None
        self.inventory = None

    def init_parser(self):
        super(InventoryCLI, self).init_parser(
            usage='usage: %prog [options] [host|group]',
            epilog='Show Ansible inventory information, by default it uses the inventory script JSON format')

        opt_help.add_inventory_options(self.parser)
        opt_help.add_vault_options(self.parser)
        opt_help.add_basedir_options(self.parser)
        opt_help.add_runtask_options(self.parser)

        # remove unused default options
        self.parser.add_argument('-l', '--limit', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument, nargs='?')
        self.parser.add_argument('--list-hosts', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument)

        self.parser.add_argument('args', metavar='host|group', nargs='?')

        # Actions
        action_group = self.parser.add_argument_group("Actions", "One of following must be used on invocation, ONLY ONE!")
        action_group.add_argument("--list", action="store_true", default=False, dest='list',
                                  help='Output all hosts info, works as inventory script')
        action_group.add_argument("--host", action="store", default=None, dest='host',
                                  help='Output specific host info, works as inventory script')
        action_group.add_argument("--graph", action="store_true", default=False, dest='graph',
                                  help='create inventory graph, if supplying pattern it must be a valid group name')
        self.parser.add_argument_group(action_group)

        # graph
        self.parser.add_argument("-y", "--yaml", action="store_true", default=False, dest='yaml',
                                 help='Use YAML format instead of default JSON, ignored for --graph')
        self.parser.add_argument('--toml', action='store_true', default=False, dest='toml',
                                 help='Use TOML format instead of default JSON, ignored for --graph')
        self.parser.add_argument("--vars", action="store_true", default=False, dest='show_vars',
                                 help='Add vars to graph display, ignored unless used with --graph')

        # list
        self.parser.add_argument("--export", action="store_true", default=C.INVENTORY_EXPORT, dest='export',
                                 help="When doing an --list, represent in a way that is optimized for export,"
                                      "not as an accurate representation of how Ansible has processed it")
        self.parser.add_argument('--output', default=None, dest='output_file',
                                 help="When doing --list, send the inventory to a file instead of to the screen")
        # self.parser.add_argument("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins',
        #                          help="When doing an --list, skip vars data from vars plugins, by default, this would include group_vars/ and host_vars/")

    def post_process_args(self, options):
        options = super(InventoryCLI, self).post_process_args(options)

        display.verbosity = options.verbosity
        self.validate_conflicts(options)

        # there can be only one! and, at least, one!
        used = 0
        for opt in (options.list, options.host, options.graph):
            if opt:
                used += 1
        if used == 0:
            raise AnsibleOptionsError("No action selected, at least one of --host, --graph or --list needs to be specified.")
        elif used > 1:
            raise AnsibleOptionsError("Conflicting options used, only one of --host, --graph or --list can be used at the same time.")

        # set host pattern to default if not supplied
        if options.args:
            options.pattern = options.args
        else:
            options.pattern = 'all'

        return options

    def run(self):
        super(InventoryCLI, self).run()

        # Initialize needed objects
        self.loader, self.inventory, self.vm = self._play_prereqs()

        results = None
        if context.CLIARGS['host']:
            hosts = self.inventory.get_hosts(context.CLIARGS['host'])
            if len(hosts) != 1:
                raise AnsibleOptionsError("You must pass a single valid host to --host parameter")

            myvars = self._get_host_variables(host=hosts[0])

            # FIXME: should we template first?
            results = self.dump(myvars)
        elif context.CLIARGS['graph']:
            results = self.inventory_graph()
        elif context.CLIARGS['list']:
            top = self._get_group('all')
            if context.CLIARGS['yaml']:
                results = self.yaml_inventory(top)
            elif context.CLIARGS['toml']:
                results = self.toml_inventory(top)
            else:
                results = self.json_inventory(top)
            results = self.dump(results)

        if results:
            outfile = context.CLIARGS['output_file']
            if outfile is None:
                # FIXME: pager?
                display.display(results)
            else:
                try:
                    with open(to_bytes(outfile), 'wt') as f:
                        f.write(results)
                except (OSError, IOError) as e:
                    raise AnsibleError('Unable to write to destination file (%s): %s' % (to_native(outfile), to_native(e)))
            sys.exit(0)

        sys.exit(1)

    @staticmethod
    def dump(stuff):
        if context.CLIARGS['yaml']:
            import yaml
            from ansible.parsing.yaml.dumper import AnsibleDumper
            results = yaml.dump(stuff, Dumper=AnsibleDumper, default_flow_style=False)
        elif context.CLIARGS['toml']:
            from ansible.plugins.inventory.toml import toml_dumps, HAS_TOML
            if not HAS_TOML:
                raise AnsibleError(
                    'The python "toml" library is required when using the TOML output format'
                )
            results = toml_dumps(stuff)
        else:
            import json
            from ansible.parsing.ajson import AnsibleJSONEncoder
            try:
                results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True)
            except TypeError as e:
                results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=False, indent=4, preprocess_unsafe=True)
                display.warning("Could not sort JSON output due to issues while sorting keys: %s" % to_native(e))

        return results

    def _get_group_variables(self, group):

        # get info from inventory source
        res = group.get_vars()

        # Always load vars plugins
        res = combine_vars(res, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [group], 'all'))
        if context.CLIARGS['basedir']:
            res = combine_vars(res, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [group], 'all'))

        if group.priority != 1:
            res['ansible_group_priority'] = group.priority

        return self._remove_internal(res)

    def _get_host_variables(self, host):

        if context.CLIARGS['export']:
            # only get vars defined directly host
            hostvars = host.get_vars()

            # Always load vars plugins
            hostvars = combine_vars(hostvars, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [host], 'all'))
            if context.CLIARGS['basedir']:
                hostvars = combine_vars(hostvars, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [host], 'all'))
        else:
            # get all vars flattened by host, but skip magic hostvars
            hostvars = self.vm.get_vars(host=host, include_hostvars=False, stage='all')

        return self._remove_internal(hostvars)

    def _get_group(self, gname):
        group = self.inventory.groups.get(gname)
        return group

    @staticmethod
    def _remove_internal(dump):

        for internal in INTERNAL_VARS:
            if internal in dump:
                del dump[internal]

        return dump

    @staticmethod
    def _remove_empty(dump):
        # remove empty keys
        for x in ('hosts', 'vars', 'children'):
            if x in dump and not dump[x]:
                del dump[x]

    @staticmethod
    def _show_vars(dump, depth):
        result = []
        for (name, val) in sorted(dump.items()):
            result.append(InventoryCLI._graph_name('{%s = %s}' % (name, val), depth))
        return result

    @staticmethod
    def _graph_name(name, depth=0):
        if depth:
            name = "  |" * (depth) + "--%s" % name
        return name

    def _graph_group(self, group, depth=0):

        result = [self._graph_name('@%s:' % group.name, depth)]
        depth = depth + 1
        for kid in sorted(group.child_groups, key=attrgetter('name')):
            result.extend(self._graph_group(kid, depth))

        if group.name != 'all':
            for host in sorted(group.hosts, key=attrgetter('name')):
                result.append(self._graph_name(host.name, depth))
                if context.CLIARGS['show_vars']:
                    result.extend(self._show_vars(self._get_host_variables(host), depth + 1))

        if context.CLIARGS['show_vars']:
            result.extend(self._show_vars(self._get_group_variables(group), depth))

        return result

    def inventory_graph(self):

        start_at = self._get_group(context.CLIARGS['pattern'])
        if start_at:
            return '\n'.join(self._graph_group(start_at))
        else:
            raise AnsibleOptionsError("Pattern must be valid group name when using --graph")

    def json_inventory(self, top):

        seen = set()

        def format_group(group):
            results = {}
            results[group.name] = {}
            if group.name != 'all':
                results[group.name]['hosts'] = [h.name for h in sorted(group.hosts, key=attrgetter('name'))]
            results[group.name]['children'] = []
            for subgroup in sorted(group.child_groups, key=attrgetter('name')):
                results[group.name]['children'].append(subgroup.name)
                if subgroup.name not in seen:
                    results.update(format_group(subgroup))
                    seen.add(subgroup.name)
            if context.CLIARGS['export']:
                results[group.name]['vars'] = self._get_group_variables(group)

            self._remove_empty(results[group.name])
            if not results[group.name]:
                del results[group.name]

            return results

        results = format_group(top)

        # populate meta
        results['_meta'] = {'hostvars': {}}
        hosts = self.inventory.get_hosts()
        for host in hosts:
            hvars = self._get_host_variables(host)
            if hvars:
                results['_meta']['hostvars'][host.name] = hvars

        return results

    def yaml_inventory(self, top):

        seen = []

        def format_group(group):
            results = {}

            # initialize group + vars
            results[group.name] = {}

            # subgroups
            results[group.name]['children'] = {}
            for subgroup in sorted(group.child_groups, key=attrgetter('name')):
                if subgroup.name != 'all':
                    results[group.name]['children'].update(format_group(subgroup))

            # hosts for group
            results[group.name]['hosts'] = {}
            if group.name != 'all':
                for h in sorted(group.hosts, key=attrgetter('name')):
                    myvars = {}
                    if h.name not in seen:  # avoid defining host vars more than once
                        seen.append(h.name)
                        myvars = self._get_host_variables(host=h)
                    results[group.name]['hosts'][h.name] = myvars

            if context.CLIARGS['export']:
                gvars = self._get_group_variables(group)
                if gvars:
                    results[group.name]['vars'] = gvars

            self._remove_empty(results[group.name])

            return results

        return format_group(top)

    def toml_inventory(self, top):
        seen = set()
        has_ungrouped = bool(next(g.hosts for g in top.child_groups if g.name == 'ungrouped'))

        def format_group(group):
            results = {}
            results[group.name] = {}

            results[group.name]['children'] = []
            for subgroup in sorted(group.child_groups, key=attrgetter('name')):
                if subgroup.name == 'ungrouped' and not has_ungrouped:
                    continue
                if group.name != 'all':
                    results[group.name]['children'].append(subgroup.name)
                results.update(format_group(subgroup))

            if group.name != 'all':
                for host in sorted(group.hosts, key=attrgetter('name')):
                    if host.name not in seen:
                        seen.add(host.name)
                        host_vars = self._get_host_variables(host=host)
                    else:
                        host_vars = {}
                    try:
                        results[group.name]['hosts'][host.name] = host_vars
                    except KeyError:
                        results[group.name]['hosts'] = {host.name: host_vars}

            if context.CLIARGS['export']:
                results[group.name]['vars'] = self._get_group_variables(group)

            self._remove_empty(results[group.name])
            if not results[group.name]:
                del results[group.name]

            return results

        results = format_group(top)

        return results
closed
ansible/ansible
https://github.com/ansible/ansible
74404
ansible-inventory --list --toml fails with: Unexpected Exception, this is probably a bug: '7.2'
### Summary

When I try to run `ansible-inventory --list --toml -vvv`, it fails with this exception:

```
ansible-inventory 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible-inventory
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
Using /home/user/ansible/ansible.cfg as config file
host_list declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
script declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
Parsed /home/user/ansible/inventory/hosts inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '7.2'
the full traceback was:

Traceback (most recent call last):
  File "/home/user/.local/bin/ansible-inventory", line 123, in <module>
    exit_code = cli.run()
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 151, in run
    results = self.dump(results)
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 181, in dump
    results = toml_dumps(stuff)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 72, in dumps
    sections[section], section)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 193, in dump_sections
    if not isinstance(o[section], dict):
KeyError: '7.2'
```

`ansible-inventory --list` without `--toml` works fine. The inventory file is in INI format.

### Issue Type

Bug Report

### Component Name

ansible-inventory

### Ansible Version

```console
ansible 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```

### Configuration

```console
ANSIBLE_NOCOWS(/home/user/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/user/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/home/user/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
ANSIBLE_SSH_RETRIES(/home/user/ansible/ansible.cfg) = 1
CACHE_PLUGIN(/home/user/ansible/ansible.cfg) = memory
DEFAULT_FORKS(/home/user/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/home/user/ansible/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/inventory']
DEFAULT_JINJA2_EXTENSIONS(/home/user/ansible/ansible.cfg) = jinja2.ext.do,jinja2.ext.loopcontrols
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/ansible/ansible.cfg) = True
DEFAULT_MANAGED_STR(/home/user/ansible/ansible.cfg) = Ansible managed: {file} modified by {uid} on {host}
DEFAULT_MODULE_NAME(/home/user/ansible/ansible.cfg) = shell
DEFAULT_POLL_INTERVAL(/home/user/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/roles', '/home/user/ansible/roles-sha
DEFAULT_TIMEOUT(/home/user/ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/user/ansible/ansible.cfg) = smart
DEFAULT_VAULT_PASSWORD_FILE(/home/user/ansible/ansible.cfg) = **********
DUPLICATE_YAML_DICT_KEY(/home/user/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/user/ansible/ansible.cfg) = False
```

### OS / Environment

Ubuntu 18.04
Python 3.6.9

### Steps to Reproduce

```
ansible-inventory --list --toml -vvv
```

### Expected Results

Inventory in TOML format.

### Actual Results

```console
ERROR! Unexpected Exception, this is probably a bug: '7.2'
to see the full traceback, use -vvv
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74404
https://github.com/ansible/ansible/pull/74486
68e9e1c999a4181ef88eaf779a3c30ccf82bfa81
38dd49eb005dc784aad4809a9ee98dc84ff60eec
2021-04-23T18:44:20Z
python
2021-05-05T08:42:52Z
test/integration/targets/ansible-inventory/files/invalid_sample.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74404
ansible-inventory --list --toml fails with: Unexpected Exception, this is probably a bug: '7.2'
### Summary

When I try to run `ansible-inventory --list --toml -vvv`, it fails with this exception:

```
ansible-inventory 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible-inventory
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
Using /home/user/ansible/ansible.cfg as config file
host_list declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
script declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
Parsed /home/user/ansible/inventory/hosts inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '7.2'
the full traceback was:

Traceback (most recent call last):
  File "/home/user/.local/bin/ansible-inventory", line 123, in <module>
    exit_code = cli.run()
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 151, in run
    results = self.dump(results)
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 181, in dump
    results = toml_dumps(stuff)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 72, in dumps
    sections[section], section)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 193, in dump_sections
    if not isinstance(o[section], dict):
KeyError: '7.2'
```

`ansible-inventory --list` without `--toml` works fine. The inventory file is in INI format.

### Issue Type

Bug Report

### Component Name

ansible-inventory

### Ansible Version

```console
ansible 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```

### Configuration

```console
ANSIBLE_NOCOWS(/home/user/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/user/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/home/user/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
ANSIBLE_SSH_RETRIES(/home/user/ansible/ansible.cfg) = 1
CACHE_PLUGIN(/home/user/ansible/ansible.cfg) = memory
DEFAULT_FORKS(/home/user/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/home/user/ansible/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/inventory']
DEFAULT_JINJA2_EXTENSIONS(/home/user/ansible/ansible.cfg) = jinja2.ext.do,jinja2.ext.loopcontrols
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/ansible/ansible.cfg) = True
DEFAULT_MANAGED_STR(/home/user/ansible/ansible.cfg) = Ansible managed: {file} modified by {uid} on {host}
DEFAULT_MODULE_NAME(/home/user/ansible/ansible.cfg) = shell
DEFAULT_POLL_INTERVAL(/home/user/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/roles', '/home/user/ansible/roles-sha
DEFAULT_TIMEOUT(/home/user/ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/user/ansible/ansible.cfg) = smart
DEFAULT_VAULT_PASSWORD_FILE(/home/user/ansible/ansible.cfg) = **********
DUPLICATE_YAML_DICT_KEY(/home/user/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/user/ansible/ansible.cfg) = False
```

### OS / Environment

Ubuntu 18.04
Python 3.6.9

### Steps to Reproduce

```
ansible-inventory --list --toml -vvv
```

### Expected Results

Inventory in TOML format.

### Actual Results

```console
ERROR! Unexpected Exception, this is probably a bug: '7.2'
to see the full traceback, use -vvv
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74404
https://github.com/ansible/ansible/pull/74486
68e9e1c999a4181ef88eaf779a3c30ccf82bfa81
38dd49eb005dc784aad4809a9ee98dc84ff60eec
2021-04-23T18:44:20Z
python
2021-05-05T08:42:52Z
test/integration/targets/ansible-inventory/files/valid_sample.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74404
ansible-inventory --list --toml fails with: Unexpected Exception, this is probably a bug: '7.2'
### Summary

When I try to run `ansible-inventory --list --toml -vvv`, it fails with this exception:

```
ansible-inventory 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible-inventory
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
Using /home/user/ansible/ansible.cfg as config file
host_list declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
script declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /home/user/ansible/inventory/hosts as it did not pass its verify_file() method
Parsed /home/user/ansible/inventory/hosts inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: '7.2'
the full traceback was:

Traceback (most recent call last):
  File "/home/user/.local/bin/ansible-inventory", line 123, in <module>
    exit_code = cli.run()
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 151, in run
    results = self.dump(results)
  File "/home/user/.local/lib/python3.6/site-packages/ansible/cli/inventory.py", line 181, in dump
    results = toml_dumps(stuff)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 72, in dumps
    sections[section], section)
  File "/home/user/.local/lib/python3.6/site-packages/toml/encoder.py", line 193, in dump_sections
    if not isinstance(o[section], dict):
KeyError: '7.2'
```

`ansible-inventory --list` without `--toml` works fine. The inventory file is in INI format.

### Issue Type

Bug Report

### Component Name

ansible-inventory

### Ansible Version

```console
ansible 2.10.7
  config file = /home/user/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/.local/lib/python3.6/site-packages/ansible
  executable location = /home/user/.local/bin/ansible
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```

### Configuration

```console
ANSIBLE_NOCOWS(/home/user/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/user/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/home/user/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
ANSIBLE_SSH_RETRIES(/home/user/ansible/ansible.cfg) = 1
CACHE_PLUGIN(/home/user/ansible/ansible.cfg) = memory
DEFAULT_FORKS(/home/user/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/home/user/ansible/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/inventory']
DEFAULT_JINJA2_EXTENSIONS(/home/user/ansible/ansible.cfg) = jinja2.ext.do,jinja2.ext.loopcontrols
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/ansible/ansible.cfg) = True
DEFAULT_MANAGED_STR(/home/user/ansible/ansible.cfg) = Ansible managed: {file} modified by {uid} on {host}
DEFAULT_MODULE_NAME(/home/user/ansible/ansible.cfg) = shell
DEFAULT_POLL_INTERVAL(/home/user/ansible/ansible.cfg) = 15
DEFAULT_ROLES_PATH(/home/user/ansible/ansible.cfg) = ['/home/user/ansible/roles', '/home/user/ansible/roles-sha
DEFAULT_TIMEOUT(/home/user/ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/user/ansible/ansible.cfg) = smart
DEFAULT_VAULT_PASSWORD_FILE(/home/user/ansible/ansible.cfg) = **********
DUPLICATE_YAML_DICT_KEY(/home/user/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/user/ansible/ansible.cfg) = False
```

### OS / Environment

Ubuntu 18.04
Python 3.6.9

### Steps to Reproduce

```
ansible-inventory --list --toml -vvv
```

### Expected Results

Inventory in TOML format.

### Actual Results

```console
ERROR! Unexpected Exception, this is probably a bug: '7.2'
to see the full traceback, use -vvv
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74404
https://github.com/ansible/ansible/pull/74486
68e9e1c999a4181ef88eaf779a3c30ccf82bfa81
38dd49eb005dc784aad4809a9ee98dc84ff60eec
2021-04-23T18:44:20Z
python
2021-05-05T08:42:52Z
test/integration/targets/ansible-inventory/tasks/main.yml
- name: "No command supplied" command: ansible-inventory ignore_errors: true register: result - assert: that: - result is failed - '"ERROR! No action selected, at least one of --host, --graph or --list needs to be specified." in result.stderr' - name: "test option: --list --export" command: ansible-inventory --list --export register: result - assert: that: - result is succeeded - name: "test option: --list --yaml --export" command: ansible-inventory --list --yaml --export register: result - assert: that: - result is succeeded - name: "test option: --list --output" command: ansible-inventory --list --output junk.txt register: result - name: stat output file stat: path: junk.txt register: st - assert: that: - result is succeeded - st.stat.exists - name: "test option: --graph" command: ansible-inventory --graph register: result - assert: that: - result is succeeded - name: "test option: --graph --vars" command: ansible-inventory --graph --vars register: result - assert: that: - result is succeeded - name: "test option: --graph with bad pattern" command: ansible-inventory --graph invalid ignore_errors: true register: result - assert: that: - result is failed - '"ERROR! Pattern must be valid group name when using --graph" in result.stderr' - name: "test option: --host localhost" command: ansible-inventory --host localhost register: result - assert: that: - result is succeeded - name: "test option: --host with invalid host" command: ansible-inventory --host invalid ignore_errors: true register: result - assert: that: - result is failed - '"ERROR! Could not match supplied host pattern, ignoring: invalid" in result.stderr'
closed
ansible/ansible
https://github.com/ansible/ansible
73,264
"pause" task with prompt does not work in Emacs shell buffer
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY When this playbook is run in an Emacs shell buffer: ``` --- - hosts: all tasks: - pause: prompt='Foo' ``` It yields this error after the user is prompted and hits Enter: ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: %b requires a bytes-like object, or an object that implements __bytes__, not 'NoneType' fatal: [private-host-elided]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""} ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pause ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.4 config file = private-path-elided/ansible.cfg configured module search path = ['private-path-elided/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = private-path-elided/.virtualenvs/ansible/lib/python3.8/site-packages/ansible executable location = private-path-elided/.virtualenvs/ansible/bin/ansible python version = 3.8.6 (default, Sep 25 2020, 09:36:53) [GCC 10.2.0] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(private-path-elided/ansible.cfg) = True DEFAULT_HOST_LIST(privat-epath-elided/ansible.cfg) = ['private-path-elided/inventory'] DEFAULT_REMOTE_USER(private-path-elided/ansible.cfg) = root ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ubuntu 20.10 (Groovy) x86_64 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - hosts: all tasks: - pause: prompt='Foo' ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> I expect the playbook to exit cleanly after it pauses and I hit Enter. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> I get an error. <!--- Paste verbatim command output between quotes --> ```paste below PLAY [all] ********************************************************************* TASK [Gathering Facts] ********************************************************* ok: [private-host-elided] TASK [pause] ******************************************************************* [pause] Foo: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: %b requires a bytes-like object, or an object that implements __bytes__, not 'NoneType' fatal: [private-host-elided]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* private-host-elided : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/73264
https://github.com/ansible/ansible/pull/74568
a6cc5088223a46c06753efe1c5db823a72d516b9
55b401a3e75597ca0a15e4ac52be30c5e429c6b8
2021-01-16T21:20:53Z
python
2021-05-06T19:09:26Z
changelogs/fragments/73264-pause-emacs.yml
closed
ansible/ansible
https://github.com/ansible/ansible
73264
"pause" task with prompt does not work in Emacs shell buffer
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY When this playbook is run in an Emacs shell buffer: ``` --- - hosts: all tasks: - pause: prompt='Foo' ``` It yields this error after the user is prompted and hits Enter: ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: %b requires a bytes-like object, or an object that implements __bytes__, not 'NoneType' fatal: [private-host-elided]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""} ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pause ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.4 config file = private-path-elided/ansible.cfg configured module search path = ['private-path-elided/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = private-path-elided/.virtualenvs/ansible/lib/python3.8/site-packages/ansible executable location = private-path-elided/.virtualenvs/ansible/bin/ansible python version = 3.8.6 (default, Sep 25 2020, 09:36:53) [GCC 10.2.0] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(private-path-elided/ansible.cfg) = True DEFAULT_HOST_LIST(privat-epath-elided/ansible.cfg) = ['private-path-elided/inventory'] DEFAULT_REMOTE_USER(private-path-elided/ansible.cfg) = root ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ubuntu 20.10 (Groovy) x86_64 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - hosts: all tasks: - pause: prompt='Foo' ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> I expect the playbook to exit cleanly after it pauses and I hit Enter. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> I get an error. <!--- Paste verbatim command output between quotes --> ```paste below PLAY [all] ********************************************************************* TASK [Gathering Facts] ********************************************************* ok: [private-host-elided] TASK [pause] ******************************************************************* [pause] Foo: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: %b requires a bytes-like object, or an object that implements __bytes__, not 'NoneType' fatal: [private-host-elided]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* private-host-elided : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
https://github.com/ansible/ansible/issues/73264
https://github.com/ansible/ansible/pull/74568
a6cc5088223a46c06753efe1c5db823a72d516b9
55b401a3e75597ca0a15e4ac52be30c5e429c6b8
2021-01-16T21:20:53Z
python
2021-05-06T19:09:26Z
lib/ansible/plugins/action/pause.py
# Copyright 2012, Tim Bielawa <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import datetime
import signal
import sys
import termios
import time
import tty

from os import (
    getpgrp,
    isatty,
    tcgetpgrp,
)
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import PY3
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display

display = Display()

try:
    import curses
    import io

    # Nest the try except since curses.error is not available if curses did not import
    try:
        curses.setupterm()
        HAS_CURSES = True
    except (curses.error, TypeError, io.UnsupportedOperation):
        HAS_CURSES = False
except ImportError:
    HAS_CURSES = False

if HAS_CURSES:
    MOVE_TO_BOL = curses.tigetstr('cr')
    CLEAR_TO_EOL = curses.tigetstr('el')
else:
    MOVE_TO_BOL = b'\r'
    CLEAR_TO_EOL = b'\x1b[K'


class AnsibleTimeoutExceeded(Exception):
    pass


def timeout_handler(signum, frame):
    raise AnsibleTimeoutExceeded


def clear_line(stdout):
    stdout.write(b'\x1b[%s' % MOVE_TO_BOL)
    stdout.write(b'\x1b[%s' % CLEAR_TO_EOL)


def is_interactive(fd=None):
    if fd is None:
        return False

    if isatty(fd):
        # Compare the current process group to the process group associated
        # with terminal of the given file descriptor to determine if the process
        # is running in the background.
        return getpgrp() == tcgetpgrp(fd)
    else:
        return False


class ActionModule(ActionBase):
    ''' pauses execution for a length or time, or until input is received '''

    BYPASS_HOST_LOOP = True
    _VALID_ARGS = frozenset(('echo', 'minutes', 'prompt', 'seconds'))

    def run(self, tmp=None, task_vars=None):
        ''' run the pause action module '''
        if task_vars is None:
            task_vars = dict()

        result = super(ActionModule, self).run(tmp, task_vars)
        del tmp  # tmp no longer has any effect

        duration_unit = 'minutes'
        prompt = None
        seconds = None
        echo = True
        echo_prompt = ''
        result.update(dict(
            changed=False,
            rc=0,
            stderr='',
            stdout='',
            start=None,
            stop=None,
            delta=None,
            echo=echo
        ))

        # Should keystrokes be echoed to stdout?
        if 'echo' in self._task.args:
            try:
                echo = boolean(self._task.args['echo'])
            except TypeError as e:
                result['failed'] = True
                result['msg'] = to_native(e)
                return result

            # Add a note saying the output is hidden if echo is disabled
            if not echo:
                echo_prompt = ' (output is hidden)'

        # Is 'prompt' a key in 'args'?
        if 'prompt' in self._task.args:
            prompt = "[%s]\n%s%s:" % (self._task.get_name().strip(), self._task.args['prompt'], echo_prompt)
        else:
            # If no custom prompt is specified, set a default prompt
            prompt = "[%s]\n%s%s:" % (self._task.get_name().strip(), 'Press enter to continue, Ctrl+C to interrupt', echo_prompt)

        # Are 'minutes' or 'seconds' keys that exist in 'args'?
        if 'minutes' in self._task.args or 'seconds' in self._task.args:
            try:
                if 'minutes' in self._task.args:
                    # The time() command operates in seconds so we need to
                    # recalculate for minutes=X values.
                    seconds = int(self._task.args['minutes']) * 60
                else:
                    seconds = int(self._task.args['seconds'])
                    duration_unit = 'seconds'
            except ValueError as e:
                result['failed'] = True
                result['msg'] = u"non-integer value given for prompt duration:\n%s" % to_text(e)
                return result

        ########################################################################
        # Begin the hard work!

        start = time.time()
        result['start'] = to_text(datetime.datetime.now())
        result['user_input'] = b''

        stdin_fd = None
        old_settings = None
        try:
            if seconds is not None:
                if seconds < 1:
                    seconds = 1

                # setup the alarm handler
                signal.signal(signal.SIGALRM, timeout_handler)
                signal.alarm(seconds)

                # show the timer and control prompts
                display.display("Pausing for %d seconds%s" % (seconds, echo_prompt))
                display.display("(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)\r"),

                # show the prompt specified in the task
                if 'prompt' in self._task.args:
                    display.display(prompt)
            else:
                display.display(prompt)

            # save the attributes on the existing (duped) stdin so
            # that we can restore them later after we set raw mode
            stdin_fd = None
            stdout_fd = None
            try:
                if PY3:
                    stdin = self._connection._new_stdin.buffer
                    stdout = sys.stdout.buffer
                else:
                    stdin = self._connection._new_stdin
                    stdout = sys.stdout
                stdin_fd = stdin.fileno()
                stdout_fd = stdout.fileno()
            except (ValueError, AttributeError):
                # ValueError: someone is using a closed file descriptor as stdin
                # AttributeError: someone is using a null file descriptor as stdin on windoze
                stdin = None
            interactive = is_interactive(stdin_fd)
            if interactive:
                # grab actual Ctrl+C sequence
                try:
                    intr = termios.tcgetattr(stdin_fd)[6][termios.VINTR]
                except Exception:
                    # unsupported/not present, use default
                    intr = b'\x03'  # value for Ctrl+C

                # get backspace sequences
                try:
                    backspace = termios.tcgetattr(stdin_fd)[6][termios.VERASE]
                except Exception:
                    backspace = [b'\x7f', b'\x08']

                old_settings = termios.tcgetattr(stdin_fd)
                tty.setraw(stdin_fd)

                # Only set stdout to raw mode if it is a TTY. This is needed when redirecting
                # stdout to a file since a file cannot be set to raw mode.
                if isatty(stdout_fd):
                    tty.setraw(stdout_fd)

                # Only echo input if no timeout is specified
                if not seconds and echo:
                    new_settings = termios.tcgetattr(stdin_fd)
                    new_settings[3] = new_settings[3] | termios.ECHO
                    termios.tcsetattr(stdin_fd, termios.TCSANOW, new_settings)

                # flush the buffer to make sure no previous key presses
                # are read in below
                termios.tcflush(stdin, termios.TCIFLUSH)

            while True:
                if not interactive:
                    if seconds is None:
                        display.warning("Not waiting for response to prompt as stdin is not interactive")
                    if seconds is not None:
                        # Give the signal handler enough time to timeout
                        time.sleep(seconds + 1)
                    break

                try:
                    key_pressed = stdin.read(1)

                    if key_pressed == intr:  # value for Ctrl+C
                        clear_line(stdout)
                        raise KeyboardInterrupt

                    if not seconds:
                        # read key presses and act accordingly
                        if key_pressed in (b'\r', b'\n'):
                            clear_line(stdout)
                            break
                        elif key_pressed in backspace:
                            # delete a character if backspace is pressed
                            result['user_input'] = result['user_input'][:-1]
                            clear_line(stdout)
                            if echo:
                                stdout.write(result['user_input'])
                            stdout.flush()
                        else:
                            result['user_input'] += key_pressed

                except KeyboardInterrupt:
                    signal.alarm(0)
                    display.display("Press 'C' to continue the play or 'A' to abort \r"),
                    if self._c_or_a(stdin):
                        clear_line(stdout)
                        break

                    clear_line(stdout)

                    raise AnsibleError('user requested abort!')

        except AnsibleTimeoutExceeded:
            # this is the exception we expect when the alarm signal
            # fires, so we simply ignore it to move into the cleanup
            pass
        finally:
            # cleanup and save some information
            # restore the old settings for the duped stdin stdin_fd
            if not(None in (stdin_fd, old_settings)) and isatty(stdin_fd):
                termios.tcsetattr(stdin_fd, termios.TCSADRAIN, old_settings)

            duration = time.time() - start
            result['stop'] = to_text(datetime.datetime.now())
            result['delta'] = int(duration)

            if duration_unit == 'minutes':
                duration = round(duration / 60.0, 2)
            else:
                duration = round(duration, 2)
            result['stdout'] = "Paused for %s %s" % (duration, duration_unit)

        result['user_input'] = to_text(result['user_input'], errors='surrogate_or_strict')
        return result

    def _c_or_a(self, stdin):
        while True:
            key_pressed = stdin.read(1)
            if key_pressed.lower() == b'a':
                return False
            elif key_pressed.lower() == b'c':
                return True
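The reported `TypeError: %b requires a bytes-like object ... not 'NoneType'` points at `clear_line()` above: under a terminal with no `cr`/`el` capabilities (for example `TERM=dumb` inside an Emacs shell buffer), `curses.setupterm()` can succeed while `curses.tigetstr('cr')` or `curses.tigetstr('el')` returns `None`, so `b'\x1b[%s' % MOVE_TO_BOL` blows up on the first Enter keypress. A guard along these lines would fall back to the hard-coded sequences — a sketch of the idea only, since the merged PR may structure the check differently:

```python
# Sketch of a None-safe capability lookup for the module-level constants above
# (assumption: the exact merged fix may differ). tigetstr() returns None when
# the terminal database lacks the capability, e.g. TERM=dumb in Emacs.
if HAS_CURSES:
    MOVE_TO_BOL = curses.tigetstr('cr') or b'\r'
    CLEAR_TO_EOL = curses.tigetstr('el') or b'\x1b[K'
else:
    MOVE_TO_BOL = b'\r'
    CLEAR_TO_EOL = b'\x1b[K'
```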
closed
ansible/ansible
https://github.com/ansible/ansible
73,264
"pause" task with prompt does not work in Emacs shell buffer
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY When this playbook is run in an Emacs shell buffer: ``` --- - hosts: all tasks: - pause: prompt='Foo' ``` It yields this error after the user is prompted and hits Enter: ``` An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: %b requires a bytes-like object, or an object that implements __bytes__, not 'NoneType' fatal: [private-host-elided]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""} ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pause ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.4 config file = private-path-elided/ansible.cfg configured module search path = ['private-path-elided/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = private-path-elided/.virtualenvs/ansible/lib/python3.8/site-packages/ansible executable location = private-path-elided/.virtualenvs/ansible/bin/ansible python version = 3.8.6 (default, Sep 25 2020, 09:36:53) [GCC 10.2.0] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_PIPELINING(private-path-elided/ansible.cfg) = True DEFAULT_HOST_LIST(privat-epath-elided/ansible.cfg) = ['private-path-elided/inventory'] DEFAULT_REMOTE_USER(private-path-elided/ansible.cfg) = root ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Ubuntu 20.10 (Groovy) x86_64 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - hosts: all tasks: - pause: prompt='Foo' ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> I expect the playbook to exit cleanly after it pauses and I hit Enter. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> I get an error. <!--- Paste verbatim command output between quotes --> ```paste below PLAY [all] ********************************************************************* TASK [Gathering Facts] ********************************************************* ok: [private-host-elided] TASK [pause] ******************************************************************* [pause] Foo: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: %b requires a bytes-like object, or an object that implements __bytes__, not 'NoneType' fatal: [private-host-elided]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* private-host-elided : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ```
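The `TypeError` in this report is ordinary Python 3 behaviour rather than anything Emacs-specific: bytes-style `%b` formatting refuses `None`. Below is a short, standalone illustration of the failure mode and the kind of defensive bytes default that sidesteps it; this is plain Python for demonstration, not the plugin's actual fix.

```python
# %b formatting needs a bytes-like object on the right-hand side
try:
    b"interrupt with %b" % (None,)
except TypeError as err:
    print(err)  # %b requires a bytes-like object, ... not 'NoneType'

# When stdin is a pipe (as in an Emacs shell buffer), terminal attributes
# may be unavailable, so values such as the Ctrl+C sequence can end up None.
intr = None          # hypothetical value read from a non-TTY stdin
if intr is None:
    intr = b'\x03'   # fall back to the conventional Ctrl+C byte
print(b"interrupt with %b" % (intr,))
```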
https://github.com/ansible/ansible/issues/73264
https://github.com/ansible/ansible/pull/74568
a6cc5088223a46c06753efe1c5db823a72d516b9
55b401a3e75597ca0a15e4ac52be30c5e429c6b8
2021-01-16T21:20:53Z
python
2021-05-06T19:09:26Z
test/units/plugins/action/test_pause.py
closed
ansible/ansible
https://github.com/ansible/ansible
74,191
ansible-galaxy doesn't handle rate limiting correctly
### Summary `ansible-galaxy collection install` makes a lot of requests to galaxy.ansible.com and can occasionally hit the rate limit. When that happens, galaxy.ansible.com will return a 520 http code and `ansible-galaxy` fails to download the collection and exits. Ideally when the client encounters a rate limiting http code (either 520 or 429), it should wait, slow down the request rate and try again rather than exiting. More information is available here: https://github.com/ansible/galaxy/issues/2429 ### Issue Type Bug Report ### Component Name ansible-galaxy ### Ansible Version ```console $ ansible --version root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ansible --version ansible 2.10.7 config file = None configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0] ``` ### Configuration ```console $ ansible-config dump --only-changed root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ansible-config dump --only-changed root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ``` ### OS / Environment All ### Steps to Reproduce Run `ansible-galaxy collection install amazon.aws`. If your internet is fast enough, this will occasionally fail when galaxy.ansible.com returns a 520 error code. ### Expected Results Collection should be installed. ### Actual Results ```console `ansible-galaxy` encounters a 429 or 520 http code and exits. ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
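What the report asks for — backing off and retrying on 429/520 instead of exiting — is a standard retry-with-backoff pattern. The sketch below shows the general idea using only the standard library; it is not ansible-galaxy's actual implementation (the real change landed via the linked PR, touching `lib/ansible/galaxy/api.py` and `lib/ansible/module_utils/api.py`, both shown in these records), and `fetch_with_backoff` and its parameters are illustrative names.

```python
import random
import time
from urllib.error import HTTPError
from urllib.request import urlopen

RETRY_CODES = (429, 520)  # rate-limit style responses worth retrying


def fetch_with_backoff(url, retries=5):
    """Fetch a URL, sleeping with jittered exponential backoff on 429/520."""
    delay = 2
    for attempt in range(retries):
        try:
            return urlopen(url)
        except HTTPError as err:
            if err.code not in RETRY_CODES or attempt == retries - 1:
                raise
            # jitter spreads out retries from many concurrent clients
            time.sleep(delay + random.uniform(0, 1))
            delay = min(delay * 2, 30)
```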
https://github.com/ansible/ansible/issues/74191
https://github.com/ansible/ansible/pull/74240
51fd05e76b378f0ab463c71fa03bcf1b16eddc78
ee725846f070fc6b0dd79b5e8c5199ec652faf87
2021-04-08T15:20:23Z
python
2021-05-10T17:26:41Z
lib/ansible/galaxy/api.py
# (C) 2013, James Cammarata <[email protected]> # Copyright: (c) 2019, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import collections import datetime import functools import hashlib import json import os import stat import tarfile import time import threading from ansible import constants as C from ansible.errors import AnsibleError from ansible.galaxy.user_agent import user_agent from ansible.module_utils.six import string_types from ansible.module_utils.six.moves.urllib.error import HTTPError from ansible.module_utils.six.moves.urllib.parse import quote as urlquote, urlencode, urlparse from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.module_utils.urls import open_url, prepare_multipart from ansible.utils.display import Display from ansible.utils.hashing import secure_hash_s from ansible.utils.path import makedirs_safe try: from urllib.parse import urlparse except ImportError: # Python 2 from urlparse import urlparse display = Display() _CACHE_LOCK = threading.Lock() def cache_lock(func): def wrapped(*args, **kwargs): with _CACHE_LOCK: return func(*args, **kwargs) return wrapped def g_connect(versions): """ Wrapper to lazily initialize connection info to Galaxy and verify the API versions required are available on the endpoint. :param versions: A list of API versions that the function supports. """ def decorator(method): def wrapped(self, *args, **kwargs): if not self._available_api_versions: display.vvvv("Initial connection to galaxy_server: %s" % self.api_server) # Determine the type of Galaxy server we are talking to. First try it unauthenticated then with Bearer # auth for Automation Hub. n_url = self.api_server error_context_msg = 'Error when finding available api versions from %s (%s)' % (self.name, n_url) if self.api_server == 'https://galaxy.ansible.com' or self.api_server == 'https://galaxy.ansible.com/': n_url = 'https://galaxy.ansible.com/api/' try: data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg, cache=True) except (AnsibleError, GalaxyError, ValueError, KeyError) as err: # Either the URL doesnt exist, or other error. Or the URL exists, but isn't a galaxy API # root (not JSON, no 'available_versions') so try appending '/api/' if n_url.endswith('/api') or n_url.endswith('/api/'): raise # Let exceptions here bubble up but raise the original if this returns a 404 (/api/ wasn't found). n_url = _urljoin(n_url, '/api/') try: data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg, cache=True) except GalaxyError as new_err: if new_err.http_code == 404: raise err raise if 'available_versions' not in data: raise AnsibleError("Tried to find galaxy API root at %s but no 'available_versions' are available " "on %s" % (n_url, self.api_server)) # Update api_server to point to the "real" API root, which in this case could have been the configured # url + '/api/' appended. self.api_server = n_url # Default to only supporting v1, if only v1 is returned we also assume that v2 is available even though # it isn't returned in the available_versions dict. 
available_versions = data.get('available_versions', {u'v1': u'v1/'}) if list(available_versions.keys()) == [u'v1']: available_versions[u'v2'] = u'v2/' self._available_api_versions = available_versions display.vvvv("Found API version '%s' with Galaxy server %s (%s)" % (', '.join(available_versions.keys()), self.name, self.api_server)) # Verify that the API versions the function works with are available on the server specified. available_versions = set(self._available_api_versions.keys()) common_versions = set(versions).intersection(available_versions) if not common_versions: raise AnsibleError("Galaxy action %s requires API versions '%s' but only '%s' are available on %s %s" % (method.__name__, ", ".join(versions), ", ".join(available_versions), self.name, self.api_server)) return method(self, *args, **kwargs) return wrapped return decorator def get_cache_id(url): """ Gets the cache ID for the URL specified. """ url_info = urlparse(url) port = None try: port = url_info.port except ValueError: pass # While the URL is probably invalid, let the caller figure that out when using it # Cannot use netloc because it could contain credentials if the server specified had them in there. return '%s:%s' % (url_info.hostname, port or '') @cache_lock def _load_cache(b_cache_path): """ Loads the cache file requested if possible. The file must not be world writable. """ cache_version = 1 if not os.path.isfile(b_cache_path): display.vvvv("Creating Galaxy API response cache file at '%s'" % to_text(b_cache_path)) with open(b_cache_path, 'w'): os.chmod(b_cache_path, 0o600) cache_mode = os.stat(b_cache_path).st_mode if cache_mode & stat.S_IWOTH: display.warning("Galaxy cache has world writable access (%s), ignoring it as a cache source." % to_text(b_cache_path)) return with open(b_cache_path, mode='rb') as fd: json_val = to_text(fd.read(), errors='surrogate_or_strict') try: cache = json.loads(json_val) except ValueError: cache = None if not isinstance(cache, dict) or cache.get('version', None) != cache_version: display.vvvv("Galaxy cache file at '%s' has an invalid version, clearing" % to_text(b_cache_path)) cache = {'version': cache_version} # Set the cache after we've cleared the existing entries with open(b_cache_path, mode='wb') as fd: fd.write(to_bytes(json.dumps(cache), errors='surrogate_or_strict')) return cache def _urljoin(*args): return '/'.join(to_native(a, errors='surrogate_or_strict').strip('/') for a in args + ('',) if a) class GalaxyError(AnsibleError): """ Error for bad Galaxy server responses. """ def __init__(self, http_error, message): super(GalaxyError, self).__init__(message) self.http_code = http_error.code self.url = http_error.geturl() try: http_msg = to_text(http_error.read()) err_info = json.loads(http_msg) except (AttributeError, ValueError): err_info = {} url_split = self.url.split('/') if 'v2' in url_split: galaxy_msg = err_info.get('message', http_error.reason) code = err_info.get('code', 'Unknown') full_error_msg = u"%s (HTTP Code: %d, Message: %s Code: %s)" % (message, self.http_code, galaxy_msg, code) elif 'v3' in url_split: errors = err_info.get('errors', []) if not errors: errors = [{}] # Defaults are set below, we just need to make sure 1 error is present. 
message_lines = [] for error in errors: error_msg = error.get('detail') or error.get('title') or http_error.reason error_code = error.get('code') or 'Unknown' message_line = u"(HTTP Code: %d, Message: %s Code: %s)" % (self.http_code, error_msg, error_code) message_lines.append(message_line) full_error_msg = "%s %s" % (message, ', '.join(message_lines)) else: # v1 and unknown API endpoints galaxy_msg = err_info.get('default', http_error.reason) full_error_msg = u"%s (HTTP Code: %d, Message: %s)" % (message, self.http_code, galaxy_msg) self.message = to_native(full_error_msg) # Keep the raw string results for the date. It's too complex to parse as a datetime object and the various APIs return # them in different formats. CollectionMetadata = collections.namedtuple('CollectionMetadata', ['namespace', 'name', 'created_str', 'modified_str']) class CollectionVersionMetadata: def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies): """ Contains common information about a collection on a Galaxy server to smooth through API differences for Collection and define a standard meta info for a collection. :param namespace: The namespace name. :param name: The collection name. :param version: The version that the metadata refers to. :param download_url: The URL to download the collection. :param artifact_sha256: The SHA256 of the collection artifact for later verification. :param dependencies: A dict of dependencies of the collection. """ self.namespace = namespace self.name = name self.version = version self.download_url = download_url self.artifact_sha256 = artifact_sha256 self.dependencies = dependencies @functools.total_ordering class GalaxyAPI: """ This class is meant to be used as a API client for an Ansible Galaxy server """ def __init__( self, galaxy, name, url, username=None, password=None, token=None, validate_certs=True, available_api_versions=None, clear_response_cache=False, no_cache=True, priority=float('inf'), ): self.galaxy = galaxy self.name = name self.username = username self.password = password self.token = token self.api_server = url self.validate_certs = validate_certs self._available_api_versions = available_api_versions or {} self._priority = priority b_cache_dir = to_bytes(C.config.get_config_value('GALAXY_CACHE_DIR'), errors='surrogate_or_strict') makedirs_safe(b_cache_dir, mode=0o700) self._b_cache_path = os.path.join(b_cache_dir, b'api.json') if clear_response_cache: with _CACHE_LOCK: if os.path.exists(self._b_cache_path): display.vvvv("Clearing cache file (%s)" % to_text(self._b_cache_path)) os.remove(self._b_cache_path) self._cache = None if not no_cache: self._cache = _load_cache(self._b_cache_path) display.debug('Validate TLS certificates for %s: %s' % (self.api_server, self.validate_certs)) def __str__(self): # type: (GalaxyAPI) -> str """Render GalaxyAPI as a native string representation.""" return to_native(self.name) def __unicode__(self): # type: (GalaxyAPI) -> unicode """Render GalaxyAPI as a unicode/text string representation.""" return to_text(self.name) def __repr__(self): # type: (GalaxyAPI) -> str """Render GalaxyAPI as an inspectable string representation.""" return ( '<{instance!s} "{name!s}" @ {url!s} with priority {priority!s}>'. 
format( instance=self, name=self.name, priority=self._priority, url=self.api_server, ) ) def __lt__(self, other_galaxy_api): # type: (GalaxyAPI, GalaxyAPI) -> Union[bool, 'NotImplemented'] """Return whether the instance priority is higher than other.""" if not isinstance(other_galaxy_api, self.__class__): return NotImplemented return ( self._priority > other_galaxy_api._priority or self.name < self.name ) @property @g_connect(['v1', 'v2', 'v3']) def available_api_versions(self): # Calling g_connect will populate self._available_api_versions return self._available_api_versions def _call_galaxy(self, url, args=None, headers=None, method=None, auth_required=False, error_context_msg=None, cache=False): url_info = urlparse(url) cache_id = get_cache_id(url) if cache and self._cache: server_cache = self._cache.setdefault(cache_id, {}) iso_datetime_format = '%Y-%m-%dT%H:%M:%SZ' valid = False if url_info.path in server_cache: expires = datetime.datetime.strptime(server_cache[url_info.path]['expires'], iso_datetime_format) valid = datetime.datetime.utcnow() < expires if valid and not url_info.query: # Got a hit on the cache and we aren't getting a paginated response path_cache = server_cache[url_info.path] if path_cache.get('paginated'): if '/v3/' in url_info.path: res = {'links': {'next': None}} else: res = {'next': None} # Technically some v3 paginated APIs return in 'data' but the caller checks the keys for this so # always returning the cache under results is fine. res['results'] = [] for result in path_cache['results']: res['results'].append(result) else: res = path_cache['results'] return res elif not url_info.query: # The cache entry had expired or does not exist, start a new blank entry to be filled later. expires = datetime.datetime.utcnow() expires += datetime.timedelta(days=1) server_cache[url_info.path] = { 'expires': expires.strftime(iso_datetime_format), 'paginated': False, } headers = headers or {} self._add_auth_token(headers, url, required=auth_required) try: display.vvvv("Calling Galaxy at %s" % url) resp = open_url(to_native(url), data=args, validate_certs=self.validate_certs, headers=headers, method=method, timeout=20, http_agent=user_agent(), follow_redirects='safe') except HTTPError as e: raise GalaxyError(e, error_context_msg) except Exception as e: raise AnsibleError("Unknown error when attempting to call Galaxy at '%s': %s" % (url, to_native(e))) resp_data = to_text(resp.read(), errors='surrogate_or_strict') try: data = json.loads(resp_data) except ValueError: raise AnsibleError("Failed to parse Galaxy response from '%s' as JSON:\n%s" % (resp.url, to_native(resp_data))) if cache and self._cache: path_cache = self._cache[cache_id][url_info.path] # v3 can return data or results for paginated results. Scan the result so we can determine what to cache. paginated_key = None for key in ['data', 'results']: if key in data: paginated_key = key break if paginated_key: path_cache['paginated'] = True results = path_cache.setdefault('results', []) for result in data[paginated_key]: results.append(result) else: path_cache['results'] = data return data def _add_auth_token(self, headers, url, token_type=None, required=False): # Don't add the auth token if one is already present if 'Authorization' in headers: return if not self.token and required: raise AnsibleError("No access token or username set. 
A token can be set with --api-key " "or at {0}.".format(to_native(C.GALAXY_TOKEN_PATH))) if self.token: headers.update(self.token.headers()) @cache_lock def _set_cache(self): with open(self._b_cache_path, mode='wb') as fd: fd.write(to_bytes(json.dumps(self._cache), errors='surrogate_or_strict')) @g_connect(['v1']) def authenticate(self, github_token): """ Retrieve an authentication token """ url = _urljoin(self.api_server, self.available_api_versions['v1'], "tokens") + '/' args = urlencode({"github_token": github_token}) resp = open_url(url, data=args, validate_certs=self.validate_certs, method="POST", http_agent=user_agent()) data = json.loads(to_text(resp.read(), errors='surrogate_or_strict')) return data @g_connect(['v1']) def create_import_task(self, github_user, github_repo, reference=None, role_name=None): """ Post an import request """ url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") + '/' args = { "github_user": github_user, "github_repo": github_repo, "github_reference": reference if reference else "" } if role_name: args['alternate_role_name'] = role_name elif github_repo.startswith('ansible-role'): args['alternate_role_name'] = github_repo[len('ansible-role') + 1:] data = self._call_galaxy(url, args=urlencode(args), method="POST") if data.get('results', None): return data['results'] return data @g_connect(['v1']) def get_import_task(self, task_id=None, github_user=None, github_repo=None): """ Check the status of an import task. """ url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") if task_id is not None: url = "%s?id=%d" % (url, task_id) elif github_user is not None and github_repo is not None: url = "%s?github_user=%s&github_repo=%s" % (url, github_user, github_repo) else: raise AnsibleError("Expected task_id or github_user and github_repo") data = self._call_galaxy(url) return data['results'] @g_connect(['v1']) def lookup_role_by_name(self, role_name, notify=True): """ Find a role by name. """ role_name = to_text(urlquote(to_bytes(role_name))) try: parts = role_name.split(".") user_name = ".".join(parts[0:-1]) role_name = parts[-1] if notify: display.display("- downloading role '%s', owned by %s" % (role_name, user_name)) except Exception: raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name) url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", "?owner__username=%s&name=%s" % (user_name, role_name)) data = self._call_galaxy(url) if len(data["results"]) != 0: return data["results"][0] return None @g_connect(['v1']) def fetch_role_related(self, related, role_id): """ Fetch the list of related items for the given role. The url comes from the 'related' field of the role. """ results = [] try: url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", role_id, related, "?page_size=50") data = self._call_galaxy(url) results = data['results'] done = (data.get('next_link', None) is None) # https://github.com/ansible/ansible/issues/64355 # api_server contains part of the API path but next_link includes the /api part so strip it out. 
url_info = urlparse(self.api_server) base_url = "%s://%s/" % (url_info.scheme, url_info.netloc) while not done: url = _urljoin(base_url, data['next_link']) data = self._call_galaxy(url) results += data['results'] done = (data.get('next_link', None) is None) except Exception as e: display.warning("Unable to retrieve role (id=%s) data (%s), but this is not fatal so we continue: %s" % (role_id, related, to_text(e))) return results @g_connect(['v1']) def get_list(self, what): """ Fetch the list of items specified. """ try: url = _urljoin(self.api_server, self.available_api_versions['v1'], what, "?page_size") data = self._call_galaxy(url) if "results" in data: results = data['results'] else: results = data done = True if "next" in data: done = (data.get('next_link', None) is None) while not done: url = _urljoin(self.api_server, data['next_link']) data = self._call_galaxy(url) results += data['results'] done = (data.get('next_link', None) is None) return results except Exception as error: raise AnsibleError("Failed to download the %s list: %s" % (what, to_native(error))) @g_connect(['v1']) def search_roles(self, search, **kwargs): search_url = _urljoin(self.api_server, self.available_api_versions['v1'], "search", "roles", "?") if search: search_url += '&autocomplete=' + to_text(urlquote(to_bytes(search))) tags = kwargs.get('tags', None) platforms = kwargs.get('platforms', None) page_size = kwargs.get('page_size', None) author = kwargs.get('author', None) if tags and isinstance(tags, string_types): tags = tags.split(',') search_url += '&tags_autocomplete=' + '+'.join(tags) if platforms and isinstance(platforms, string_types): platforms = platforms.split(',') search_url += '&platforms_autocomplete=' + '+'.join(platforms) if page_size: search_url += '&page_size=%s' % page_size if author: search_url += '&username_autocomplete=%s' % author data = self._call_galaxy(search_url) return data @g_connect(['v1']) def add_secret(self, source, github_user, github_repo, secret): url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") + '/' args = urlencode({ "source": source, "github_user": github_user, "github_repo": github_repo, "secret": secret }) data = self._call_galaxy(url, args=args, method="POST") return data @g_connect(['v1']) def list_secrets(self): url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") data = self._call_galaxy(url, auth_required=True) return data @g_connect(['v1']) def remove_secret(self, secret_id): url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets", secret_id) + '/' data = self._call_galaxy(url, auth_required=True, method='DELETE') return data @g_connect(['v1']) def delete_role(self, github_user, github_repo): url = _urljoin(self.api_server, self.available_api_versions['v1'], "removerole", "?github_user=%s&github_repo=%s" % (github_user, github_repo)) data = self._call_galaxy(url, auth_required=True, method='DELETE') return data # Collection APIs # @g_connect(['v2', 'v3']) def publish_collection(self, collection_path): """ Publishes a collection to a Galaxy server and returns the import task URI. :param collection_path: The path to the collection tarball to publish. :return: The import task URI that contains the import results. 
""" display.display("Publishing collection artifact '%s' to %s %s" % (collection_path, self.name, self.api_server)) b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict') if not os.path.exists(b_collection_path): raise AnsibleError("The collection path specified '%s' does not exist." % to_native(collection_path)) elif not tarfile.is_tarfile(b_collection_path): raise AnsibleError("The collection path specified '%s' is not a tarball, use 'ansible-galaxy collection " "build' to create a proper release artifact." % to_native(collection_path)) with open(b_collection_path, 'rb') as collection_tar: sha256 = secure_hash_s(collection_tar.read(), hash_func=hashlib.sha256) content_type, b_form_data = prepare_multipart( { 'sha256': sha256, 'file': { 'filename': b_collection_path, 'mime_type': 'application/octet-stream', }, } ) headers = { 'Content-type': content_type, 'Content-length': len(b_form_data), } if 'v3' in self.available_api_versions: n_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'artifacts', 'collections') + '/' else: n_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collections') + '/' resp = self._call_galaxy(n_url, args=b_form_data, headers=headers, method='POST', auth_required=True, error_context_msg='Error when publishing collection to %s (%s)' % (self.name, self.api_server)) return resp['task'] @g_connect(['v2', 'v3']) def wait_import_task(self, task_id, timeout=0): """ Waits until the import process on the Galaxy server has completed or the timeout is reached. :param task_id: The id of the import task to wait for. This can be parsed out of the return value for GalaxyAPI.publish_collection. :param timeout: The timeout in seconds, 0 is no timeout. """ state = 'waiting' data = None # Construct the appropriate URL per version if 'v3' in self.available_api_versions: full_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'imports/collections', task_id, '/') else: full_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collection-imports', task_id, '/') display.display("Waiting until Galaxy import task %s has completed" % full_url) start = time.time() wait = 2 while timeout == 0 or (time.time() - start) < timeout: try: data = self._call_galaxy(full_url, method='GET', auth_required=True, error_context_msg='Error when getting import task results at %s' % full_url) except GalaxyError as e: if e.http_code != 404: raise # The import job may not have started, and as such, the task url may not yet exist display.vvv('Galaxy import process has not started, wait %s seconds before trying again' % wait) time.sleep(wait) continue state = data.get('state', 'waiting') if data.get('finished_at', None): break display.vvv('Galaxy import process has a status of %s, wait %d seconds before trying again' % (state, wait)) time.sleep(wait) # poor man's exponential backoff algo so we don't flood the Galaxy API, cap at 30 seconds. 
wait = min(30, wait * 1.5) if state == 'waiting': raise AnsibleError("Timeout while waiting for the Galaxy import process to finish, check progress at '%s'" % to_native(full_url)) for message in data.get('messages', []): level = message['level'] if level == 'error': display.error("Galaxy import error message: %s" % message['message']) elif level == 'warning': display.warning("Galaxy import warning message: %s" % message['message']) else: display.vvv("Galaxy import message: %s - %s" % (level, message['message'])) if state == 'failed': code = to_native(data['error'].get('code', 'UNKNOWN')) description = to_native( data['error'].get('description', "Unknown error, see %s for more details" % full_url)) raise AnsibleError("Galaxy import process failed: %s (Code: %s)" % (description, code)) @g_connect(['v2', 'v3']) def get_collection_metadata(self, namespace, name): """ Gets the collection information from the Galaxy server about a specific Collection. :param namespace: The collection namespace. :param name: The collection name. :return: CollectionMetadata about the collection. """ if 'v3' in self.available_api_versions: api_path = self.available_api_versions['v3'] field_map = [ ('created_str', 'created_at'), ('modified_str', 'updated_at'), ] else: api_path = self.available_api_versions['v2'] field_map = [ ('created_str', 'created'), ('modified_str', 'modified'), ] info_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, '/') error_context_msg = 'Error when getting the collection info for %s.%s from %s (%s)' \ % (namespace, name, self.name, self.api_server) data = self._call_galaxy(info_url, error_context_msg=error_context_msg) metadata = {} for name, api_field in field_map: metadata[name] = data.get(api_field, None) return CollectionMetadata(namespace, name, **metadata) @g_connect(['v2', 'v3']) def get_collection_version_metadata(self, namespace, name, version): """ Gets the collection information from the Galaxy server about a specific Collection version. :param namespace: The collection namespace. :param name: The collection name. :param version: Version of the collection to get the information for. :return: CollectionVersionMetadata about the collection at the version requested. """ api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2')) url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version, '/'] n_collection_url = _urljoin(*url_paths) error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \ % (namespace, name, version, self.name, self.api_server) data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg, cache=True) self._set_cache() return CollectionVersionMetadata(data['namespace']['name'], data['collection']['name'], data['version'], data['download_url'], data['artifact']['sha256'], data['metadata']['dependencies']) @g_connect(['v2', 'v3']) def get_collection_versions(self, namespace, name): """ Gets a list of available versions for a collection on a Galaxy server. :param namespace: The collection namespace. :param name: The collection name. :return: A list of versions that are available. """ relative_link = False if 'v3' in self.available_api_versions: api_path = self.available_api_versions['v3'] pagination_path = ['links', 'next'] relative_link = True # AH pagination results are relative and not an absolute URI. 
else: api_path = self.available_api_versions['v2'] pagination_path = ['next'] versions_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, 'versions', '/') versions_url_info = urlparse(versions_url) # We should only rely on the cache if the collection has not changed. This may slow things down but it ensures # we are not waiting a day before finding any new collections that have been published. if self._cache: server_cache = self._cache.setdefault(get_cache_id(versions_url), {}) modified_cache = server_cache.setdefault('modified', {}) try: modified_date = self.get_collection_metadata(namespace, name).modified_str except GalaxyError as err: if err.http_code != 404: raise # No collection found, return an empty list to keep things consistent with the various APIs return [] cached_modified_date = modified_cache.get('%s.%s' % (namespace, name), None) if cached_modified_date != modified_date: modified_cache['%s.%s' % (namespace, name)] = modified_date if versions_url_info.path in server_cache: del server_cache[versions_url_info.path] self._set_cache() error_context_msg = 'Error when getting available collection versions for %s.%s from %s (%s)' \ % (namespace, name, self.name, self.api_server) try: data = self._call_galaxy(versions_url, error_context_msg=error_context_msg, cache=True) except GalaxyError as err: if err.http_code != 404: raise # v3 doesn't raise a 404 so we need to mimic the empty response from APIs that do. return [] if 'data' in data: # v3 automation-hub is the only known API that uses `data` # since v3 pulp_ansible does not, we cannot rely on version # to indicate which key to use results_key = 'data' else: results_key = 'results' versions = [] while True: versions += [v['version'] for v in data[results_key]] next_link = data for path in pagination_path: next_link = next_link.get(path, {}) if not next_link: break elif relative_link: # TODO: This assumes the pagination result is relative to the root server. Will need to be verified # with someone who knows the AH API. next_link = versions_url.replace(versions_url_info.path, next_link) data = self._call_galaxy(to_native(next_link, errors='surrogate_or_strict'), error_context_msg=error_context_msg, cache=True) self._set_cache() return versions
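`wait_import_task` above polls the import-task endpoint with the "poor man's exponential backoff" noted in its comment: `wait = min(30, wait * 1.5)`, so the delay between polls starts at 2 seconds, grows by half each round, and caps at 30 seconds. A tiny standalone sketch (the helper name is hypothetical) that reproduces the schedule:

```python
def backoff_schedule(start=2.0, factor=1.5, cap=30.0, polls=10):
    """Reproduce the poll-delay sequence used by wait_import_task."""
    wait = start
    delays = []
    for _ in range(polls):
        delays.append(round(wait, 2))
        wait = min(cap, wait * factor)  # grow geometrically, cap at 30s
    return delays


# First ten delays: 2.0, 3.0, 4.5, 6.75, 10.12, 15.19, 22.78, 30.0, 30.0, 30.0
print(backoff_schedule())
```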
closed
ansible/ansible
https://github.com/ansible/ansible
74,191
ansible-galaxy doesn't handle rate limiting correctly
### Summary `ansible-galaxy collection install` makes a lot of requests to galaxy.ansible.com and can occasionally hit the rate limit. When that happens, galaxy.ansible.com will return a 520 http code and `ansible-galaxy` fails to download the collection and exits. Ideally when the client encounters a rate limiting http code (either 520 or 429), it should wait, slow down the request rate and try again rather than exiting. More information is available here: https://github.com/ansible/galaxy/issues/2429 ### Issue Type Bug Report ### Component Name ansible-galaxy ### Ansible Version ```console $ ansible --version root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ansible --version ansible 2.10.7 config file = None configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0] ``` ### Configuration ```console $ ansible-config dump --only-changed root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ansible-config dump --only-changed root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ``` ### OS / Environment All ### Steps to Reproduce Run `ansible-galaxy collection install amazon.aws`. If your internet is fast enough, this will occasionally fail when galaxy.ansible.com returns a 520 error code. ### Expected Results Collection should be installed. ### Actual Results ```console `ansible-galaxy` encounters a 429 or 520 http code and exits. ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74191
https://github.com/ansible/ansible/pull/74240
51fd05e76b378f0ab463c71fa03bcf1b16eddc78
ee725846f070fc6b0dd79b5e8c5199ec652faf87
2021-04-08T15:20:23Z
python
2021-05-10T17:26:41Z
lib/ansible/module_utils/api.py
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright: (c) 2015, Brian Coca, <[email protected]>
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
"""
This module adds shared support for generic api modules

In order to use this module, include it as part of a custom module as shown below.

The 'api' module provides the following common argument specs:

    * rate limit spec
        - rate: number of requests per time unit (int)
        - rate_limit: time window in which the limit is applied in seconds

    * retry spec
        - retries: number of attempts
        - retry_pause: delay between attempts in seconds
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import sys
import time


def rate_limit_argument_spec(spec=None):
    """Creates an argument spec for working with rate limiting"""
    arg_spec = (dict(
        rate=dict(type='int'),
        rate_limit=dict(type='int'),
    ))
    if spec:
        arg_spec.update(spec)
    return arg_spec


def retry_argument_spec(spec=None):
    """Creates an argument spec for working with retrying"""
    arg_spec = (dict(
        retries=dict(type='int'),
        retry_pause=dict(type='float', default=1),
    ))
    if spec:
        arg_spec.update(spec)
    return arg_spec


def basic_auth_argument_spec(spec=None):
    arg_spec = (dict(
        api_username=dict(type='str'),
        api_password=dict(type='str', no_log=True),
        api_url=dict(type='str'),
        validate_certs=dict(type='bool', default=True)
    ))
    if spec:
        arg_spec.update(spec)
    return arg_spec


def rate_limit(rate=None, rate_limit=None):
    """rate limiting decorator"""
    minrate = None
    if rate is not None and rate_limit is not None:
        minrate = float(rate_limit) / float(rate)

    def wrapper(f):
        last = [0.0]

        def ratelimited(*args, **kwargs):
            if sys.version_info >= (3, 8):
                real_time = time.process_time
            else:
                real_time = time.clock
            if minrate is not None:
                elapsed = real_time() - last[0]
                left = minrate - elapsed
                if left > 0:
                    time.sleep(left)
                last[0] = real_time()
            ret = f(*args, **kwargs)
            return ret

        return ratelimited
    return wrapper


def retry(retries=None, retry_pause=1):
    """Retry decorator"""
    def wrapper(f):

        def retried(*args, **kwargs):
            retry_count = 0
            if retries is not None:
                ret = None
                while True:
                    retry_count += 1
                    if retry_count >= retries:
                        raise Exception("Retry limit exceeded: %d" % retries)
                    try:
                        ret = f(*args, **kwargs)
                    except Exception:
                        pass
                    if ret:
                        break
                    time.sleep(retry_pause)
                return ret

        return retried
    return wrapper
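As a usage illustration of the two decorators above (the endpoint and the `client` object are made-up placeholders), they compose onto any callable to give both throttling and retries. Note the module's own semantics: `retry(retries=n)` raises once the attempt counter reaches `n`, so `retries=4` allows at most three calls, and any falsey return value triggers another attempt.

```python
from ansible.module_utils.api import rate_limit, retry


@rate_limit(rate=10, rate_limit=60)   # at most 10 calls per 60 seconds
@retry(retries=4, retry_pause=2.0)    # up to 3 actual calls, 2s apart
def fetch_status(client):
    # hypothetical client object; a truthy return value stops the retries
    return client.get('/api/v1/status')
```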
closed
ansible/ansible
https://github.com/ansible/ansible
74,191
ansible-galaxy doesn't handle rate limiting correctly
### Summary `ansible-galaxy collection install` makes a lot of requests to galaxy.ansible.com and can occasionally hit the rate limit. When that happens, galaxy.ansible.com will return a 520 http code and `ansible-galaxy` fails to download the collection and exits. Ideally when the client encounters a rate limiting http code (either 520 or 429), it should wait, slow down the request rate and try again rather than exiting. More information is available here: https://github.com/ansible/galaxy/issues/2429 ### Issue Type Bug Report ### Component Name ansible-galaxy ### Ansible Version ```console $ ansible --version root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ansible --version ansible 2.10.7 config file = None configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0] ``` ### Configuration ```console $ ansible-config dump --only-changed root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ansible-config dump --only-changed root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ``` ### OS / Environment All ### Steps to Reproduce Run `ansible-galaxy collection install amazon.aws`. If your internet is fast enough, this will occasionally fail when galaxy.ansible.com returns a 520 error code. ### Expected Results Collection should be installed. ### Actual Results ```console `ansible-galaxy` encounters a 429 or 520 http code and exits. ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74191
https://github.com/ansible/ansible/pull/74240
51fd05e76b378f0ab463c71fa03bcf1b16eddc78
ee725846f070fc6b0dd79b5e8c5199ec652faf87
2021-04-08T15:20:23Z
python
2021-05-10T17:26:41Z
test/units/galaxy/test_api.py
# -*- coding: utf-8 -*- # Copyright: (c) 2019, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import json import os import re import pytest import stat import tarfile import tempfile import time from io import BytesIO, StringIO from units.compat.mock import MagicMock import ansible.constants as C from ansible import context from ansible.errors import AnsibleError from ansible.galaxy import api as galaxy_api from ansible.galaxy.api import CollectionVersionMetadata, GalaxyAPI, GalaxyError from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken from ansible.module_utils._text import to_native, to_text from ansible.module_utils.six.moves.urllib import error as urllib_error from ansible.utils import context_objects as co from ansible.utils.display import Display @pytest.fixture(autouse='function') def reset_cli_args(): co.GlobalCLIArgs._Singleton__instance = None # Required to initialise the GalaxyAPI object context.CLIARGS._store = {'ignore_certs': False} yield co.GlobalCLIArgs._Singleton__instance = None @pytest.fixture() def collection_artifact(tmp_path_factory): ''' Creates a collection artifact tarball that is ready to be published ''' output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Output')) tar_path = os.path.join(output_dir, 'namespace-collection-v1.0.0.tar.gz') with tarfile.open(tar_path, 'w:gz') as tfile: b_io = BytesIO(b"\x00\x01\x02\x03") tar_info = tarfile.TarInfo('test') tar_info.size = 4 tar_info.mode = 0o0644 tfile.addfile(tarinfo=tar_info, fileobj=b_io) yield tar_path @pytest.fixture() def cache_dir(tmp_path_factory, monkeypatch): cache_dir = to_text(tmp_path_factory.mktemp('Test ÅÑŚÌβŁÈ Galaxy Cache')) monkeypatch.setitem(C.config._base_defs, 'GALAXY_CACHE_DIR', {'default': cache_dir}) yield cache_dir def get_test_galaxy_api(url, version, token_ins=None, token_value=None, no_cache=True): token_value = token_value or "my token" token_ins = token_ins or GalaxyToken(token_value) api = GalaxyAPI(None, "test", url, no_cache=no_cache) # Warning, this doesn't test g_connect() because _availabe_api_versions is set here. That means # that urls for v2 servers have to append '/api/' themselves in the input data. 
api._available_api_versions = {version: '%s' % version} api.token = token_ins return api def get_collection_versions(namespace='namespace', name='collection'): base_url = 'https://galaxy.server.com/api/v2/collections/{0}/{1}/'.format(namespace, name) versions_url = base_url + 'versions/' # Response for collection info responses = [ { "id": 1000, "href": base_url, "name": name, "namespace": { "id": 30000, "href": "https://galaxy.ansible.com/api/v1/namespaces/30000/", "name": namespace, }, "versions_url": versions_url, "latest_version": { "version": "1.0.5", "href": versions_url + "1.0.5/" }, "deprecated": False, "created": "2021-02-09T16:55:42.749915-05:00", "modified": "2021-02-09T16:55:42.749915-05:00", } ] # Paginated responses for versions page_versions = (('1.0.0', '1.0.1',), ('1.0.2', '1.0.3',), ('1.0.4', '1.0.5'),) last_page = None for page in range(1, len(page_versions) + 1): if page < len(page_versions): next_page = versions_url + '?page={0}'.format(page + 1) else: next_page = None version_results = [] for version in page_versions[int(page - 1)]: version_results.append( {'version': version, 'href': versions_url + '{0}/'.format(version)} ) responses.append( { 'count': 6, 'next': next_page, 'previous': last_page, 'results': version_results, } ) last_page = page return responses def test_api_no_auth(): api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/") actual = {} api._add_auth_token(actual, "") assert actual == {} def test_api_no_auth_but_required(): expected = "No access token or username set. A token can be set with --api-key or at " with pytest.raises(AnsibleError, match=expected): GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")._add_auth_token({}, "", required=True) def test_api_token_auth(): token = GalaxyToken(token=u"my_token") api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "", required=True) assert actual == {'Authorization': 'Token my_token'} def test_api_token_auth_with_token_type(monkeypatch): token = KeycloakToken(auth_url='https://api.test/') mock_token_get = MagicMock() mock_token_get.return_value = 'my_token' monkeypatch.setattr(token, 'get', mock_token_get) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "", token_type="Bearer", required=True) assert actual == {'Authorization': 'Bearer my_token'} def test_api_token_auth_with_v3_url(monkeypatch): token = KeycloakToken(auth_url='https://api.test/') mock_token_get = MagicMock() mock_token_get.return_value = 'my_token' monkeypatch.setattr(token, 'get', mock_token_get) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "https://galaxy.ansible.com/api/v3/resource/name", required=True) assert actual == {'Authorization': 'Bearer my_token'} def test_api_token_auth_with_v2_url(): token = GalaxyToken(token=u"my_token") api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} # Add v3 to random part of URL but response should only see the v2 as the full URI path segment. 
api._add_auth_token(actual, "https://galaxy.ansible.com/api/v2/resourcev3/name", required=True) assert actual == {'Authorization': 'Token my_token'} def test_api_basic_auth_password(): token = BasicAuthToken(username=u"user", password=u"pass") api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "", required=True) assert actual == {'Authorization': 'Basic dXNlcjpwYXNz'} def test_api_basic_auth_no_password(): token = BasicAuthToken(username=u"user") api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "", required=True) assert actual == {'Authorization': 'Basic dXNlcjo='} def test_api_dont_override_auth_header(): api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/") actual = {'Authorization': 'Custom token'} api._add_auth_token(actual, "", required=True) assert actual == {'Authorization': 'Custom token'} def test_initialise_galaxy(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"available_versions":{"v1":"v1/"}}'), StringIO(u'{"token":"my token"}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/") actual = api.authenticate("github_token") assert len(api.available_api_versions) == 2 assert api.available_api_versions['v1'] == u'v1/' assert api.available_api_versions['v2'] == u'v2/' assert actual == {u'token': u'my token'} assert mock_open.call_count == 2 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/' assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent'] assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/' assert 'ansible-galaxy' in mock_open.mock_calls[1][2]['http_agent'] assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token' def test_initialise_galaxy_with_auth(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"available_versions":{"v1":"v1/"}}'), StringIO(u'{"token":"my token"}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token')) actual = api.authenticate("github_token") assert len(api.available_api_versions) == 2 assert api.available_api_versions['v1'] == u'v1/' assert api.available_api_versions['v2'] == u'v2/' assert actual == {u'token': u'my token'} assert mock_open.call_count == 2 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/' assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent'] assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/' assert 'ansible-galaxy' in mock_open.mock_calls[1][2]['http_agent'] assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token' def test_initialise_automation_hub(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"available_versions":{"v2": "v2/", "v3":"v3/"}}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) token = KeycloakToken(auth_url='https://api.test/') mock_token_get = MagicMock() mock_token_get.return_value = 'my_token' monkeypatch.setattr(token, 'get', mock_token_get) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) assert len(api.available_api_versions) == 2 assert api.available_api_versions['v2'] == u'v2/' assert api.available_api_versions['v3'] == u'v3/' assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/' assert 
'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent'] assert mock_open.mock_calls[0][2]['headers'] == {'Authorization': 'Bearer my_token'} def test_initialise_unknown(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ urllib_error.HTTPError('https://galaxy.ansible.com/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')), urllib_error.HTTPError('https://galaxy.ansible.com/api/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token')) expected = "Error when finding available api versions from test (%s) (HTTP Code: 500, Message: msg)" \ % api.api_server with pytest.raises(AnsibleError, match=re.escape(expected)): api.authenticate("github_token") def test_get_available_api_versions(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"available_versions":{"v1":"v1/","v2":"v2/"}}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/") actual = api.available_api_versions assert len(actual) == 2 assert actual['v1'] == u'v1/' assert actual['v2'] == u'v2/' assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/' assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent'] def test_publish_collection_missing_file(): fake_path = u'/fake/ÅÑŚÌβŁÈ/path' expected = to_native("The collection path specified '%s' does not exist." % fake_path) api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2") with pytest.raises(AnsibleError, match=expected): api.publish_collection(fake_path) def test_publish_collection_not_a_tarball(): expected = "The collection path specified '{0}' is not a tarball, use 'ansible-galaxy collection build' to " \ "create a proper release artifact." 
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2") with tempfile.NamedTemporaryFile(prefix=u'ÅÑŚÌβŁÈ') as temp_file: temp_file.write(b"\x00") temp_file.flush() with pytest.raises(AnsibleError, match=expected.format(to_native(temp_file.name))): api.publish_collection(temp_file.name) def test_publish_collection_unsupported_version(): expected = "Galaxy action publish_collection requires API versions 'v2, v3' but only 'v1' are available on test " \ "https://galaxy.ansible.com/api/" api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v1") with pytest.raises(AnsibleError, match=expected): api.publish_collection("path") @pytest.mark.parametrize('api_version, collection_url', [ ('v2', 'collections'), ('v3', 'artifacts/collections'), ]) def test_publish_collection(api_version, collection_url, collection_artifact, monkeypatch): api = get_test_galaxy_api("https://galaxy.ansible.com/api/", api_version) mock_call = MagicMock() mock_call.return_value = {'task': 'http://task.url/'} monkeypatch.setattr(api, '_call_galaxy', mock_call) actual = api.publish_collection(collection_artifact) assert actual == 'http://task.url/' assert mock_call.call_count == 1 assert mock_call.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/%s/%s/' % (api_version, collection_url) assert mock_call.mock_calls[0][2]['headers']['Content-length'] == len(mock_call.mock_calls[0][2]['args']) assert mock_call.mock_calls[0][2]['headers']['Content-type'].startswith( 'multipart/form-data; boundary=') assert mock_call.mock_calls[0][2]['args'].startswith(b'--') assert mock_call.mock_calls[0][2]['method'] == 'POST' assert mock_call.mock_calls[0][2]['auth_required'] is True @pytest.mark.parametrize('api_version, collection_url, response, expected', [ ('v2', 'collections', {}, 'Error when publishing collection to test (%s) (HTTP Code: 500, Message: msg Code: Unknown)'), ('v2', 'collections', { 'message': u'Galaxy error messäge', 'code': 'GWE002', }, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Galaxy error messäge Code: GWE002)'), ('v3', 'artifact/collections', {}, 'Error when publishing collection to test (%s) (HTTP Code: 500, Message: msg Code: Unknown)'), ('v3', 'artifact/collections', { 'errors': [ { 'code': 'conflict.collection_exists', 'detail': 'Collection "mynamespace-mycollection-4.1.1" already exists.', 'title': 'Conflict.', 'status': '400', }, { 'code': 'quantum_improbability', 'title': u'Rändom(?) quantum improbability.', 'source': {'parameter': 'the_arrow_of_time'}, 'meta': {'remediation': 'Try again before'}, }, ], }, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Collection ' u'"mynamespace-mycollection-4.1.1" already exists. Code: conflict.collection_exists), (HTTP Code: 500, ' u'Message: Rändom(?) quantum improbability. 
Code: quantum_improbability)') ]) def test_publish_failure(api_version, collection_url, response, expected, collection_artifact, monkeypatch): api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version) expected_url = '%s/api/%s/%s' % (api.api_server, api_version, collection_url) mock_open = MagicMock() mock_open.side_effect = urllib_error.HTTPError(expected_url, 500, 'msg', {}, StringIO(to_text(json.dumps(response)))) monkeypatch.setattr(galaxy_api, 'open_url', mock_open) with pytest.raises(GalaxyError, match=re.escape(to_native(expected % api.api_server))): api.publish_collection(collection_artifact) @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [ ('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234/'), ('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.return_value = StringIO(u'{"state":"success","finished_at":"time"}') monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) api.wait_import_task(import_uri) assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [ ('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234/'), ('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task_multiple_requests(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"state":"test"}'), StringIO(u'{"state":"success","finished_at":"time"}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) mock_vvv = MagicMock() monkeypatch.setattr(Display, 'vvv', mock_vvv) monkeypatch.setattr(time, 'sleep', MagicMock()) api.wait_import_task(import_uri) assert mock_open.call_count == 2 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_open.mock_calls[1][1][0] == full_import_uri assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type 
assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri assert mock_vvv.call_count == 1 assert mock_vvv.mock_calls[0][1][0] == \ 'Galaxy import process has a status of test, wait 2 seconds before trying again' @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri,', [ ('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234/'), ('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task_with_failure(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(to_text(json.dumps({ 'finished_at': 'some_time', 'state': 'failed', 'error': { 'code': 'GW001', 'description': u'Becäuse I said so!', }, 'messages': [ { 'level': 'error', 'message': u'Somé error', }, { 'level': 'warning', 'message': u'Some wärning', }, { 'level': 'info', 'message': u'Somé info', }, ], }))), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) mock_vvv = MagicMock() monkeypatch.setattr(Display, 'vvv', mock_vvv) mock_warn = MagicMock() monkeypatch.setattr(Display, 'warning', mock_warn) mock_err = MagicMock() monkeypatch.setattr(Display, 'error', mock_err) expected = to_native(u'Galaxy import process failed: Becäuse I said so! 
(Code: GW001)') with pytest.raises(AnsibleError, match=re.escape(expected)): api.wait_import_task(import_uri) assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri assert mock_vvv.call_count == 1 assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - Somé info' assert mock_warn.call_count == 1 assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wärning' assert mock_err.call_count == 1 assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: Somé error' @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [ ('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my_token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234/'), ('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task_with_failure_no_error(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(to_text(json.dumps({ 'finished_at': 'some_time', 'state': 'failed', 'error': {}, 'messages': [ { 'level': 'error', 'message': u'Somé error', }, { 'level': 'warning', 'message': u'Some wärning', }, { 'level': 'info', 'message': u'Somé info', }, ], }))), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) mock_vvv = MagicMock() monkeypatch.setattr(Display, 'vvv', mock_vvv) mock_warn = MagicMock() monkeypatch.setattr(Display, 'warning', mock_warn) mock_err = MagicMock() monkeypatch.setattr(Display, 'error', mock_err) expected = 'Galaxy import process failed: Unknown error, see %s for more details \\(Code: UNKNOWN\\)' % full_import_uri with pytest.raises(AnsibleError, match=expected): api.wait_import_task(import_uri) assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri assert mock_vvv.call_count == 1 assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - Somé info' assert mock_warn.call_count == 1 assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wärning' assert mock_err.call_count == 1 assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: Somé error' @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [ ('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234/'), ('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task_timeout(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) def return_response(*args, **kwargs): return StringIO(u'{"state":"waiting"}') mock_open = MagicMock() mock_open.side_effect = return_response monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) mock_vvv = MagicMock() monkeypatch.setattr(Display, 'vvv', mock_vvv) monkeypatch.setattr(time, 'sleep', MagicMock()) expected = "Timeout while waiting for the Galaxy import process to finish, check progress at '%s'" % full_import_uri with pytest.raises(AnsibleError, match=expected): api.wait_import_task(import_uri, 1) assert mock_open.call_count > 1 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_open.mock_calls[1][1][0] == full_import_uri assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri # expected_wait_msg = 'Galaxy import process has a status of waiting, wait {0} seconds before trying again' assert mock_vvv.call_count > 9 # 1st is opening Galaxy token file. # FIXME: # assert mock_vvv.mock_calls[1][1][0] == expected_wait_msg.format(2) # assert mock_vvv.mock_calls[2][1][0] == expected_wait_msg.format(3) # assert mock_vvv.mock_calls[3][1][0] == expected_wait_msg.format(4) # assert mock_vvv.mock_calls[4][1][0] == expected_wait_msg.format(6) # assert mock_vvv.mock_calls[5][1][0] == expected_wait_msg.format(10) # assert mock_vvv.mock_calls[6][1][0] == expected_wait_msg.format(15) # assert mock_vvv.mock_calls[7][1][0] == expected_wait_msg.format(22) # assert mock_vvv.mock_calls[8][1][0] == expected_wait_msg.format(30) @pytest.mark.parametrize('api_version, token_type, version, token_ins', [ ('v2', None, 'v2.1.13', None), ('v3', 'Bearer', 'v1.0.0', KeycloakToken(auth_url='https://api.test/api/automation-hub/')), ]) def test_get_collection_version_metadata_no_version(api_version, token_type, version, token_ins, monkeypatch): api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(to_text(json.dumps({ 'download_url': 'https://downloadme.com', 'artifact': { 'sha256': 'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f', }, 'namespace': { 'name': 'namespace', }, 'collection': { 'name': 'collection', }, 'version': version, 'metadata': { 'dependencies': {}, } }))), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual = api.get_collection_version_metadata('namespace', 'collection', version) assert isinstance(actual, CollectionVersionMetadata) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.download_url == u'https://downloadme.com' assert actual.artifact_sha256 == 
u'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f' assert actual.version == version assert actual.dependencies == {} assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == '%s%s/collections/namespace/collection/versions/%s/' \ % (api.api_server, api_version, version) # v2 calls dont need auth, so no authz header or token_type if token_type: assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type @pytest.mark.parametrize('api_version, token_type, token_ins, response', [ ('v2', None, None, { 'count': 2, 'next': None, 'previous': None, 'results': [ { 'version': '1.0.0', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0', }, { 'version': '1.0.1', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1', }, ], }), # TODO: Verify this once Automation Hub is actually out ('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), { 'count': 2, 'next': None, 'previous': None, 'data': [ { 'version': '1.0.0', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0', }, { 'version': '1.0.1', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1', }, ], }), ]) def test_get_collection_versions(api_version, token_type, token_ins, response, monkeypatch): api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(to_text(json.dumps(response))), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual = api.get_collection_versions('namespace', 'collection') assert actual == [u'1.0.0', u'1.0.1'] assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \ 'versions/' % api_version if token_ins: assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type @pytest.mark.parametrize('api_version, token_type, token_ins, responses', [ ('v2', None, None, [ { 'count': 6, 'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2', 'previous': None, 'results': [ { 'version': '1.0.0', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0', }, { 'version': '1.0.1', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1', }, ], }, { 'count': 6, 'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=3', 'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions', 'results': [ { 'version': '1.0.2', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.2', }, { 'version': '1.0.3', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.3', }, ], }, { 'count': 6, 'next': None, 'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2', 'results': [ { 'version': '1.0.4', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.4', }, { 'version': '1.0.5', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.5', }, ], }, ]), ('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), [ { 'count': 6, 'links': { 'next': 
'/api/v3/collections/namespace/collection/versions/?page=2', 'previous': None, }, 'data': [ { 'version': '1.0.0', 'href': '/api/v3/collections/namespace/collection/versions/1.0.0', }, { 'version': '1.0.1', 'href': '/api/v3/collections/namespace/collection/versions/1.0.1', }, ], }, { 'count': 6, 'links': { 'next': '/api/v3/collections/namespace/collection/versions/?page=3', 'previous': '/api/v3/collections/namespace/collection/versions', }, 'data': [ { 'version': '1.0.2', 'href': '/api/v3/collections/namespace/collection/versions/1.0.2', }, { 'version': '1.0.3', 'href': '/api/v3/collections/namespace/collection/versions/1.0.3', }, ], }, { 'count': 6, 'links': { 'next': None, 'previous': '/api/v3/collections/namespace/collection/versions/?page=2', }, 'data': [ { 'version': '1.0.4', 'href': '/api/v3/collections/namespace/collection/versions/1.0.4', }, { 'version': '1.0.5', 'href': '/api/v3/collections/namespace/collection/versions/1.0.5', }, ], }, ]), ]) def test_get_collection_versions_pagination(api_version, token_type, token_ins, responses, monkeypatch): api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual = api.get_collection_versions('namespace', 'collection') assert actual == [u'1.0.0', u'1.0.1', u'1.0.2', u'1.0.3', u'1.0.4', u'1.0.5'] assert mock_open.call_count == 3 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \ 'versions/' % api_version assert mock_open.mock_calls[1][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \ 'versions/?page=2' % api_version assert mock_open.mock_calls[2][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \ 'versions/?page=3' % api_version if token_type: assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_open.mock_calls[2][2]['headers']['Authorization'] == '%s my token' % token_type @pytest.mark.parametrize('responses', [ [ { 'count': 2, 'results': [{'name': '3.5.1', }, {'name': '3.5.2'}], 'next_link': None, 'next': None, 'previous_link': None, 'previous': None }, ], [ { 'count': 2, 'results': [{'name': '3.5.1'}], 'next_link': '/api/v1/roles/432/versions/?page=2&page_size=50', 'next': '/roles/432/versions/?page=2&page_size=50', 'previous_link': None, 'previous': None }, { 'count': 2, 'results': [{'name': '3.5.2'}], 'next_link': None, 'next': None, 'previous_link': '/api/v1/roles/432/versions/?&page_size=50', 'previous': '/roles/432/versions/?page_size=50', }, ] ]) def test_get_role_versions_pagination(monkeypatch, responses): api = get_test_galaxy_api('https://galaxy.com/api/', 'v1') mock_open = MagicMock() mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual = api.fetch_role_related('versions', 432) assert actual == [{'name': '3.5.1'}, {'name': '3.5.2'}] assert mock_open.call_count == len(responses) assert mock_open.mock_calls[0][1][0] == 'https://galaxy.com/api/v1/roles/432/versions/?page_size=50' if len(responses) == 2: assert mock_open.mock_calls[1][1][0] == 
'https://galaxy.com/api/v1/roles/432/versions/?page=2&page_size=50' def test_missing_cache_dir(cache_dir): os.rmdir(cache_dir) GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', no_cache=False) assert os.path.isdir(cache_dir) assert stat.S_IMODE(os.stat(cache_dir).st_mode) == 0o700 cache_file = os.path.join(cache_dir, 'api.json') with open(cache_file) as fd: actual_cache = fd.read() assert actual_cache == '{"version": 1}' assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o600 def test_existing_cache(cache_dir): cache_file = os.path.join(cache_dir, 'api.json') cache_file_contents = '{"version": 1, "test": "json"}' with open(cache_file, mode='w') as fd: fd.write(cache_file_contents) os.chmod(cache_file, 0o655) GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', no_cache=False) assert os.path.isdir(cache_dir) with open(cache_file) as fd: actual_cache = fd.read() assert actual_cache == cache_file_contents assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o655 @pytest.mark.parametrize('content', [ '', 'value', '{"de" "finit" "ely" [\'invalid"]}', '[]', '{"version": 2, "test": "json"}', '{"version": 2, "key": "ÅÑŚÌβŁÈ"}', ]) def test_cache_invalid_cache_content(content, cache_dir): cache_file = os.path.join(cache_dir, 'api.json') with open(cache_file, mode='w') as fd: fd.write(content) os.chmod(cache_file, 0o664) GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', no_cache=False) with open(cache_file) as fd: actual_cache = fd.read() assert actual_cache == '{"version": 1}' assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o664 def test_cache_complete_pagination(cache_dir, monkeypatch): responses = get_collection_versions() cache_file = os.path.join(cache_dir, 'api.json') api = get_test_galaxy_api('https://galaxy.server.com/api/', 'v2', no_cache=False) mock_open = MagicMock( side_effect=[ StringIO(to_text(json.dumps(r))) for r in responses ] ) monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual_versions = api.get_collection_versions('namespace', 'collection') assert actual_versions == [u'1.0.0', u'1.0.1', u'1.0.2', u'1.0.3', u'1.0.4', u'1.0.5'] with open(cache_file) as fd: final_cache = json.loads(fd.read()) cached_server = final_cache['galaxy.server.com:'] cached_collection = cached_server['/api/v2/collections/namespace/collection/versions/'] cached_versions = [r['version'] for r in cached_collection['results']] assert final_cache == api._cache assert cached_versions == actual_versions def test_cache_flaky_pagination(cache_dir, monkeypatch): responses = get_collection_versions() cache_file = os.path.join(cache_dir, 'api.json') api = get_test_galaxy_api('https://galaxy.server.com/api/', 'v2', no_cache=False) # First attempt, fail midway through mock_open = MagicMock( side_effect=[ StringIO(to_text(json.dumps(responses[0]))), StringIO(to_text(json.dumps(responses[1]))), urllib_error.HTTPError(responses[1]['next'], 500, 'Error', {}, StringIO()), StringIO(to_text(json.dumps(responses[3]))), ] ) monkeypatch.setattr(galaxy_api, 'open_url', mock_open) expected = ( r'Error when getting available collection versions for namespace\.collection ' r'from test \(https://galaxy\.server\.com/api/\) ' r'\(HTTP Code: 500, Message: Error Code: Unknown\)' ) with pytest.raises(GalaxyError, match=expected): api.get_collection_versions('namespace', 'collection') with open(cache_file) as fd: final_cache = json.loads(fd.read()) assert final_cache == { 'version': 1, 'galaxy.server.com:': { 'modified': { 'namespace.collection': responses[0]['modified'] } } } # Reset API api = 
get_test_galaxy_api('https://galaxy.server.com/api/', 'v2', no_cache=False) # Second attempt is successful so cache should be populated mock_open = MagicMock( side_effect=[ StringIO(to_text(json.dumps(r))) for r in responses ] ) monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual_versions = api.get_collection_versions('namespace', 'collection') assert actual_versions == [u'1.0.0', u'1.0.1', u'1.0.2', u'1.0.3', u'1.0.4', u'1.0.5'] with open(cache_file) as fd: final_cache = json.loads(fd.read()) cached_server = final_cache['galaxy.server.com:'] cached_collection = cached_server['/api/v2/collections/namespace/collection/versions/'] cached_versions = [r['version'] for r in cached_collection['results']] assert cached_versions == actual_versions def test_world_writable_cache(cache_dir, monkeypatch): mock_warning = MagicMock() monkeypatch.setattr(Display, 'warning', mock_warning) cache_file = os.path.join(cache_dir, 'api.json') with open(cache_file, mode='w') as fd: fd.write('{"version": 2}') os.chmod(cache_file, 0o666) api = GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', no_cache=False) assert api._cache is None with open(cache_file) as fd: actual_cache = fd.read() assert actual_cache == '{"version": 2}' assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o666 assert mock_warning.call_count == 1 assert mock_warning.call_args[0][0] == \ 'Galaxy cache has world writable access (%s), ignoring it as a cache source.' % cache_file def test_no_cache(cache_dir): cache_file = os.path.join(cache_dir, 'api.json') with open(cache_file, mode='w') as fd: fd.write('random') api = GalaxyAPI(None, "test", 'https://galaxy.ansible.com/') assert api._cache is None with open(cache_file) as fd: actual_cache = fd.read() assert actual_cache == 'random' def test_clear_cache_with_no_cache(cache_dir): cache_file = os.path.join(cache_dir, 'api.json') with open(cache_file, mode='w') as fd: fd.write('{"version": 1, "key": "value"}') GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', clear_response_cache=True) assert not os.path.exists(cache_file) def test_clear_cache(cache_dir): cache_file = os.path.join(cache_dir, 'api.json') with open(cache_file, mode='w') as fd: fd.write('{"version": 1, "key": "value"}') GalaxyAPI(None, "test", 'https://galaxy.ansible.com/', clear_response_cache=True, no_cache=False) with open(cache_file) as fd: actual_cache = fd.read() assert actual_cache == '{"version": 1}' assert stat.S_IMODE(os.stat(cache_file).st_mode) == 0o600 @pytest.mark.parametrize(['url', 'expected'], [ ('http://hostname/path', 'hostname:'), ('http://hostname:80/path', 'hostname:80'), ('https://testing.com:invalid', 'testing.com:'), ('https://testing.com:1234', 'testing.com:1234'), ('https://username:[email protected]/path', 'testing.com:'), ('https://username:[email protected]:443/path', 'testing.com:443'), ]) def test_cache_id(url, expected): actual = galaxy_api.get_cache_id(url) assert actual == expected
closed
ansible/ansible
https://github.com/ansible/ansible
74,191
ansible-galaxy doesn't handle rate limiting correctly
### Summary `ansible-galaxy collection install` makes a lot of requests to galaxy.ansible.com and can occasionally hit the rate limit. When that happens, galaxy.ansible.com will return a 520 http code and `ansible-galaxy` fails to download the collection and exits. Ideally when the client encounters a rate limiting http code (either 520, or 429), it should wait, slow down the request rate and try again rather than exiting. More information is available here: https://github.com/ansible/galaxy/issues/2429 ### Issue Type Bug Report ### Component Name ansible-galaxy ### Ansible Version ```console $ ansible --version root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ansible --version ansible 2.10.7 config file = None configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0] ``` ### Configuration ```console $ ansible-config dump --only-changed root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ansible-config dump --only-changed root@ubuntu-s-1vcpu-1gb-nyc3-01:~# ``` ### OS / Environment All ### Steps to Reproduce Run `ansible-galaxy collection install `amazon.aws`. If your internet is fast enough, this will occasionally fail when galaxy.ansible.com returns a 520 error code. ### Expected Results Collection should be installed. ### Actual Results ```console `ansible-galaxy` encounters a 429 or 520 http code and exits. ``` ### Code of Conduct I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74191
https://github.com/ansible/ansible/pull/74240
51fd05e76b378f0ab463c71fa03bcf1b16eddc78
ee725846f070fc6b0dd79b5e8c5199ec652faf87
2021-04-08T15:20:23Z
python
2021-05-10T17:26:41Z
test/units/module_utils/test_api.py
# -*- coding: utf-8 -*- # Copyright: (c) 2020, Abhijeet Kasurde <[email protected]> # Copyright: (c) 2020, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type from ansible.module_utils.api import rate_limit, retry import pytest class TestRateLimit: def test_ratelimit(self): @rate_limit(rate=1, rate_limit=1) def login_database(): return "success" r = login_database() assert r == 'success' class TestRetry: def test_no_retry_required(self): self.counter = 0 @retry(retries=4, retry_pause=2) def login_database(): self.counter += 1 return 'success' r = login_database() assert r == 'success' assert self.counter == 1 def test_catch_exception(self): @retry(retries=1) def login_database(): return 'success' with pytest.raises(Exception): login_database()
closed
ansible/ansible
https://github.com/ansible/ansible
74,575
ad-hoc ansible does not document format of MODULE_ARGS
### Summary Page https://docs.ansible.com/ansible/latest/cli/ansible.html and also `ansible --help` says this: ``` -a <MODULE_ARGS>, --args <MODULE_ARGS> module arguments ``` It does not say anything about format <MODULE_ARGS> and thus is not useful to a causal user. I've learned how to use this parametr only from https://gist.github.com/YumaInaura/06e080e3f807338c122837bdb2d34571 ### Issue Type Documentation Report ### Component Name https://docs.ansible.com/ansible/latest/cli/ansible.html ### Ansible Version ```console $ ansible --version ansible 2.10.8 ``` ### Configuration ```console Not applicable. ``` ### OS / Environment Not applicable. ### Additional Information BTW I could not find where CLI docs are stored and/or generated, which might potentially help me or other people to submit simple MRs with fixes for similarly simple doc issues. I've looked into https://docs.ansible.com/ansible/latest/dev_guide/index.html and searched for all instances of "doc" but none of occurrences refer to CLI tools. ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74575
https://github.com/ansible/ansible/pull/74616
ee725846f070fc6b0dd79b5e8c5199ec652faf87
7f7d3067e3bf69be35d056d44e06981ff1a55a4d
2021-05-05T14:36:15Z
python
2021-05-10T17:28:59Z
changelogs/fragments/adhoc_help_clarify.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74,575
ad-hoc ansible does not document format of MODULE_ARGS
### Summary Page https://docs.ansible.com/ansible/latest/cli/ansible.html and also `ansible --help` says this: ``` -a <MODULE_ARGS>, --args <MODULE_ARGS> module arguments ``` It does not say anything about format <MODULE_ARGS> and thus is not useful to a causal user. I've learned how to use this parametr only from https://gist.github.com/YumaInaura/06e080e3f807338c122837bdb2d34571 ### Issue Type Documentation Report ### Component Name https://docs.ansible.com/ansible/latest/cli/ansible.html ### Ansible Version ```console $ ansible --version ansible 2.10.8 ``` ### Configuration ```console Not applicable. ``` ### OS / Environment Not applicable. ### Additional Information BTW I could not find where CLI docs are stored and/or generated, which might potentially help me or other people to submit simple MRs with fixes for similarly simple doc issues. I've looked into https://docs.ansible.com/ansible/latest/dev_guide/index.html and searched for all instances of "doc" but none of occurrences refer to CLI tools. ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74575
https://github.com/ansible/ansible/pull/74616
ee725846f070fc6b0dd79b5e8c5199ec652faf87
7f7d3067e3bf69be35d056d44e06981ff1a55a4d
2021-05-05T14:36:15Z
python
2021-05-10T17:28:59Z
lib/ansible/cli/adhoc.py
# Copyright: (c) 2012, Michael DeHaan <[email protected]> # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type from ansible import constants as C from ansible import context from ansible.cli import CLI from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError, AnsibleOptionsError from ansible.executor.task_queue_manager import TaskQueueManager from ansible.module_utils._text import to_text from ansible.parsing.splitter import parse_kv from ansible.playbook import Playbook from ansible.playbook.play import Play from ansible.utils.display import Display display = Display() class AdHocCLI(CLI): ''' is an extra-simple tool/framework/API for doing 'remote things'. this command allows you to define and run a single task 'playbook' against a set of hosts ''' def init_parser(self): ''' create an options parser for bin/ansible ''' super(AdHocCLI, self).init_parser(usage='%prog <host-pattern> [options]', desc="Define and run a single task 'playbook' against" " a set of hosts", epilog="Some modules do not make sense in Ad-Hoc (include," " meta, etc)") opt_help.add_runas_options(self.parser) opt_help.add_inventory_options(self.parser) opt_help.add_async_options(self.parser) opt_help.add_output_options(self.parser) opt_help.add_connect_options(self.parser) opt_help.add_check_options(self.parser) opt_help.add_runtask_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_fork_options(self.parser) opt_help.add_module_options(self.parser) opt_help.add_basedir_options(self.parser) opt_help.add_tasknoplay_options(self.parser) # options unique to ansible ad-hoc self.parser.add_argument('-a', '--args', dest='module_args', help="module arguments", default=C.DEFAULT_MODULE_ARGS) self.parser.add_argument('-m', '--module-name', dest='module_name', help="module name to execute (default=%s)" % C.DEFAULT_MODULE_NAME, default=C.DEFAULT_MODULE_NAME) self.parser.add_argument('args', metavar='pattern', help='host pattern') def post_process_args(self, options): '''Post process and validate options for bin/ansible ''' options = super(AdHocCLI, self).post_process_args(options) display.verbosity = options.verbosity self.validate_conflicts(options, runas_opts=True, fork_opts=True) return options def _play_ds(self, pattern, async_val, poll): check_raw = context.CLIARGS['module_name'] in C.MODULE_REQUIRE_ARGS mytask = {'action': {'module': context.CLIARGS['module_name'], 'args': parse_kv(context.CLIARGS['module_args'], check_raw=check_raw)}, 'timeout': context.CLIARGS['task_timeout']} # avoid adding to tasks that don't support it, unless set, then give user an error if context.CLIARGS['module_name'] not in C._ACTION_ALL_INCLUDE_ROLE_TASKS and any(frozenset((async_val, poll))): mytask['async_val'] = async_val mytask['poll'] = poll return dict( name="Ansible Ad-Hoc", hosts=pattern, gather_facts='no', tasks=[mytask]) def run(self): ''' create and execute the single task playbook ''' super(AdHocCLI, self).run() # only thing left should be host pattern pattern = to_text(context.CLIARGS['args'], errors='surrogate_or_strict') sshpass = None becomepass = None (sshpass, becomepass) = self.ask_passwords() passwords = {'conn_pass': sshpass, 'become_pass': becomepass} # get basic objects loader, inventory, variable_manager = self._play_prereqs() try: hosts = self.get_host_list(inventory, context.CLIARGS['subset'], 
pattern) except AnsibleError: if context.CLIARGS['subset']: raise else: hosts = [] display.warning("No hosts matched, nothing to do") if context.CLIARGS['listhosts']: display.display(' hosts (%d):' % len(hosts)) for host in hosts: display.display(' %s' % host) return 0 if context.CLIARGS['module_name'] in C.MODULE_REQUIRE_ARGS and not context.CLIARGS['module_args']: err = "No argument passed to %s module" % context.CLIARGS['module_name'] if pattern.endswith(".yml"): err = err + ' (did you mean to run ansible-playbook?)' raise AnsibleOptionsError(err) # Avoid modules that don't work with ad-hoc if context.CLIARGS['module_name'] in C._ACTION_IMPORT_PLAYBOOK: raise AnsibleOptionsError("'%s' is not a valid action for ad-hoc commands" % context.CLIARGS['module_name']) play_ds = self._play_ds(pattern, context.CLIARGS['seconds'], context.CLIARGS['poll_interval']) play = Play().load(play_ds, variable_manager=variable_manager, loader=loader) # used in start callback playbook = Playbook(loader) playbook._entries.append(play) playbook._file_name = '__adhoc_playbook__' if self.callback: cb = self.callback elif context.CLIARGS['one_line']: cb = 'oneline' # Respect custom 'stdout_callback' only with enabled 'bin_ansible_callbacks' elif C.DEFAULT_LOAD_CALLBACK_PLUGINS and C.DEFAULT_STDOUT_CALLBACK != 'default': cb = C.DEFAULT_STDOUT_CALLBACK else: cb = 'minimal' run_tree = False if context.CLIARGS['tree']: C.CALLBACKS_ENABLED.append('tree') C.TREE_DIR = context.CLIARGS['tree'] run_tree = True # now create a task queue manager to execute the play self._tqm = None try: self._tqm = TaskQueueManager( inventory=inventory, variable_manager=variable_manager, loader=loader, passwords=passwords, stdout_callback=cb, run_additional_callbacks=C.DEFAULT_LOAD_CALLBACK_PLUGINS, run_tree=run_tree, forks=context.CLIARGS['forks'], ) self._tqm.load_callbacks() self._tqm.send_callback('v2_playbook_on_start', playbook) result = self._tqm.run(play) self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats) finally: if self._tqm: self._tqm.cleanup() if loader: loader.cleanup_all_tmp_files() return result
closed
ansible/ansible
https://github.com/ansible/ansible
66,945
ansible_play_batch variable is including unreachable hosts
##### SUMMARY ansible_play_batch variable is including unreachable hosts ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_play_batch ##### ANSIBLE VERSION ```paste below ansible 2.9.2 config file = /etc/ansible/ansible.cfg configured module search path = ['/[removed]/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /[removed]/.local/lib/python3.5/site-packages/ansible executable location = /[removed]/.local/bin/ansible python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ```paste below DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 25 DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory'] DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = ignore ``` ##### OS / ENVIRONMENT ubuntu 16.04 ##### STEPS TO REPRODUCE When evaluating ansible_play_batch using jinja in a playbook, ansible_play_batch returns an unreachable hosts, causing the entire play example below to fail (due to run_once). To reproduce, target two machines, and reboot one machine mid-play (causing an unreachable on that machine). <!--- Paste example playbooks or commands between quotes below --> ```yaml - set_fact: emailBody: | {% for item in ansible_play_batch -%} {% if hostvars[item].kernelVersion != hostvars[item].currentKernel.stdout %} {{ hostvars[item].inventory_hostname + ' - ' + hostvars[item].currentKernel.stdout + ' - ' + hostvars[item].kernelVersion }} {% endif %} {%- endfor %} delegate_to: 127.0.0.1 run_once: True ``` ##### EXPECTED RESULTS ansible_play_batch only includes all non-failed/non-unreachable hosts per documentation found at https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html ##### ACTUAL RESULTS ansible_play_batch is including the rebooted (unreachable) machine. <!--- Paste verbatim command output between quotes --> ```paste below "The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'kernelVersion' ```
https://github.com/ansible/ansible/issues/66945
https://github.com/ansible/ansible/pull/74625
df6554c4ec8b1256067bc2510134ac49cfc3003c
cf11c38cafc88ec301f48f9673ec1f554e82a589
2020-01-30T13:55:13Z
python
2021-05-11T15:12:34Z
changelogs/fragments/74625-fix-ansible_play_batch-between-plays.yml
closed
ansible/ansible
https://github.com/ansible/ansible
66,945
ansible_play_batch variable is including unreachable hosts
##### SUMMARY ansible_play_batch variable is including unreachable hosts ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_play_batch ##### ANSIBLE VERSION ```paste below ansible 2.9.2 config file = /etc/ansible/ansible.cfg configured module search path = ['/[removed]/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /[removed]/.local/lib/python3.5/site-packages/ansible executable location = /[removed]/.local/bin/ansible python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ```paste below DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 25 DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory'] DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = ignore ``` ##### OS / ENVIRONMENT ubuntu 16.04 ##### STEPS TO REPRODUCE When evaluating ansible_play_batch using jinja in a playbook, ansible_play_batch returns an unreachable hosts, causing the entire play example below to fail (due to run_once). To reproduce, target two machines, and reboot one machine mid-play (causing an unreachable on that machine). <!--- Paste example playbooks or commands between quotes below --> ```yaml - set_fact: emailBody: | {% for item in ansible_play_batch -%} {% if hostvars[item].kernelVersion != hostvars[item].currentKernel.stdout %} {{ hostvars[item].inventory_hostname + ' - ' + hostvars[item].currentKernel.stdout + ' - ' + hostvars[item].kernelVersion }} {% endif %} {%- endfor %} delegate_to: 127.0.0.1 run_once: True ``` ##### EXPECTED RESULTS ansible_play_batch only includes all non-failed/non-unreachable hosts per documentation found at https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html ##### ACTUAL RESULTS ansible_play_batch is including the rebooted (unreachable) machine. <!--- Paste verbatim command output between quotes --> ```paste below "The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'kernelVersion' ```
https://github.com/ansible/ansible/issues/66945
https://github.com/ansible/ansible/pull/74625
df6554c4ec8b1256067bc2510134ac49cfc3003c
cf11c38cafc88ec301f48f9673ec1f554e82a589
2020-01-30T13:55:13Z
python
2021-05-11T15:12:34Z
lib/ansible/executor/task_queue_manager.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import sys import tempfile import threading import time import multiprocessing.queues from ansible import constants as C from ansible import context from ansible.errors import AnsibleError from ansible.executor.play_iterator import PlayIterator from ansible.executor.stats import AggregateStats from ansible.executor.task_result import TaskResult from ansible.module_utils.six import PY3, string_types from ansible.module_utils._text import to_text, to_native from ansible.playbook.play_context import PlayContext from ansible.playbook.task import Task from ansible.plugins.loader import callback_loader, strategy_loader, module_loader from ansible.plugins.callback import CallbackBase from ansible.template import Templar from ansible.vars.hostvars import HostVars from ansible.vars.reserved import warn_if_reserved from ansible.utils.display import Display from ansible.utils.lock import lock_decorator from ansible.utils.multiprocessing import context as multiprocessing_context __all__ = ['TaskQueueManager'] display = Display() class CallbackSend: def __init__(self, method_name, *args, **kwargs): self.method_name = method_name self.args = args self.kwargs = kwargs class FinalQueue(multiprocessing.queues.Queue): def __init__(self, *args, **kwargs): if PY3: kwargs['ctx'] = multiprocessing_context super(FinalQueue, self).__init__(*args, **kwargs) def send_callback(self, method_name, *args, **kwargs): self.put( CallbackSend(method_name, *args, **kwargs), block=False ) def send_task_result(self, *args, **kwargs): if isinstance(args[0], TaskResult): tr = args[0] else: tr = TaskResult(*args, **kwargs) self.put( tr, block=False ) class TaskQueueManager: ''' This class handles the multiprocessing requirements of Ansible by creating a pool of worker forks, a result handler fork, and a manager object with shared datastructures/queues for coordinating work between all processes. The queue manager is responsible for loading the play strategy plugin, which dispatches the Play's tasks to hosts. 
''' RUN_OK = 0 RUN_ERROR = 1 RUN_FAILED_HOSTS = 2 RUN_UNREACHABLE_HOSTS = 4 RUN_FAILED_BREAK_PLAY = 8 RUN_UNKNOWN_ERROR = 255 def __init__(self, inventory, variable_manager, loader, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False, forks=None): self._inventory = inventory self._variable_manager = variable_manager self._loader = loader self._stats = AggregateStats() self.passwords = passwords self._stdout_callback = stdout_callback self._run_additional_callbacks = run_additional_callbacks self._run_tree = run_tree self._forks = forks or 5 self._callbacks_loaded = False self._callback_plugins = [] self._start_at_done = False # make sure any module paths (if specified) are added to the module_loader if context.CLIARGS.get('module_path', False): for path in context.CLIARGS['module_path']: if path: module_loader.add_directory(path) # a special flag to help us exit cleanly self._terminated = False # dictionaries to keep track of failed/unreachable hosts self._failed_hosts = dict() self._unreachable_hosts = dict() try: self._final_q = FinalQueue() except OSError as e: raise AnsibleError("Unable to use multiprocessing, this is normally caused by lack of access to /dev/shm: %s" % to_native(e)) self._callback_lock = threading.Lock() # A temporary file (opened pre-fork) used by connection # plugins for inter-process locking. self._connection_lockfile = tempfile.TemporaryFile() def _initialize_processes(self, num): self._workers = [] for i in range(num): self._workers.append(None) def load_callbacks(self): ''' Loads all available callbacks, with the exception of those which utilize the CALLBACK_TYPE option. When CALLBACK_TYPE is set to 'stdout', only one such callback plugin will be loaded. ''' if self._callbacks_loaded: return stdout_callback_loaded = False if self._stdout_callback is None: self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK if isinstance(self._stdout_callback, CallbackBase): stdout_callback_loaded = True elif isinstance(self._stdout_callback, string_types): if self._stdout_callback not in callback_loader: raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback) else: self._stdout_callback = callback_loader.get(self._stdout_callback) self._stdout_callback.set_options() stdout_callback_loaded = True else: raise AnsibleError("callback must be an instance of CallbackBase or the name of a callback plugin") # get all configured loadable callbacks (adjacent, builtin) callback_list = list(callback_loader.all(class_only=True)) # add enabled callbacks that refer to collections, which might not appear in normal listing for c in C.CALLBACKS_ENABLED: # load all, as collection ones might be using short/redirected names and not a fqcn plugin = callback_loader.get(c, class_only=True) # TODO: check if this skip is redundant, loader should handle bad file/plugin cases already if plugin: # avoids incorrect and dupes possible due to collections if plugin not in callback_list: callback_list.append(plugin) else: display.warning("Skipping callback plugin '%s', unable to load" % c) # for each callback in the list see if we should add it to 'active callbacks' used in the play for callback_plugin in callback_list: callback_type = getattr(callback_plugin, 'CALLBACK_TYPE', '') callback_needs_enabled = getattr(callback_plugin, 'CALLBACK_NEEDS_ENABLED', getattr(callback_plugin, 'CALLBACK_NEEDS_WHITELIST', False)) # try to get colleciotn world name first cnames = getattr(callback_plugin, '_redirected_names', []) if cnames: # store the name the plugin was 
loaded as, as that's what we'll need to compare to the configured callback list later callback_name = cnames[0] else: # fallback to 'old loader name' (callback_name, _) = os.path.splitext(os.path.basename(callback_plugin._original_path)) display.vvvvv("Attempting to use '%s' callback." % (callback_name)) if callback_type == 'stdout': # we only allow one callback of type 'stdout' to be loaded, if callback_name != self._stdout_callback or stdout_callback_loaded: display.vv("Skipping callback '%s', as we already have a stdout callback." % (callback_name)) continue stdout_callback_loaded = True elif callback_name == 'tree' and self._run_tree: # TODO: remove special case for tree, which is an adhoc cli option --tree pass elif not self._run_additional_callbacks or (callback_needs_enabled and ( # only run if not adhoc, or adhoc was specifically configured to run + check enabled list C.CALLBACKS_ENABLED is None or callback_name not in C.CALLBACKS_ENABLED)): # 2.x plugins shipped with ansible should require enabling, older or non shipped should load automatically continue try: callback_obj = callback_plugin() # avoid bad plugin not returning an object, only needed cause we do class_only load and bypass loader checks, # really a bug in the plugin itself which we ignore as callback errors are not supposed to be fatal. if callback_obj: # skip initializing if we already did the work for the same plugin (even with diff names) if callback_obj not in self._callback_plugins: callback_obj.set_options() self._callback_plugins.append(callback_obj) else: display.vv("Skipping callback '%s', already loaded as '%s'." % (callback_plugin, callback_name)) else: display.warning("Skipping callback '%s', as it does not create a valid plugin instance." % callback_name) continue except Exception as e: display.warning("Skipping callback '%s', unable to load due to: %s" % (callback_name, to_native(e))) continue self._callbacks_loaded = True def run(self, play): ''' Iterates over the roles/tasks in a play, using the given (or default) strategy for queueing tasks. The default is the linear strategy, which operates like classic Ansible by keeping all hosts in lock-step with a given task (meaning no hosts move on to the next task until all hosts are done with the current task). 
''' if not self._callbacks_loaded: self.load_callbacks() all_vars = self._variable_manager.get_vars(play=play) templar = Templar(loader=self._loader, variables=all_vars) warn_if_reserved(all_vars, templar.environment.globals.keys()) new_play = play.copy() new_play.post_validate(templar) new_play.handlers = new_play.compile_roles_handlers() + new_play.handlers self.hostvars = HostVars( inventory=self._inventory, variable_manager=self._variable_manager, loader=self._loader, ) play_context = PlayContext(new_play, self.passwords, self._connection_lockfile.fileno()) if (self._stdout_callback and hasattr(self._stdout_callback, 'set_play_context')): self._stdout_callback.set_play_context(play_context) for callback_plugin in self._callback_plugins: if hasattr(callback_plugin, 'set_play_context'): callback_plugin.set_play_context(play_context) self.send_callback('v2_playbook_on_play_start', new_play) # build the iterator iterator = PlayIterator( inventory=self._inventory, play=new_play, play_context=play_context, variable_manager=self._variable_manager, all_vars=all_vars, start_at_done=self._start_at_done, ) # adjust to # of workers to configured forks or size of batch, whatever is lower self._initialize_processes(min(self._forks, iterator.batch_size)) # load the specified strategy (or the default linear one) strategy = strategy_loader.get(new_play.strategy, self) if strategy is None: raise AnsibleError("Invalid play strategy specified: %s" % new_play.strategy, obj=play._ds) # Because the TQM may survive multiple play runs, we start by marking # any hosts as failed in the iterator here which may have been marked # as failed in previous runs. Then we clear the internal list of failed # hosts so we know what failed this round. for host_name in self._failed_hosts.keys(): host = self._inventory.get_host(host_name) iterator.mark_host_failed(host) self.clear_failed_hosts() # during initialization, the PlayContext will clear the start_at_task # field to signal that a matching task was found, so check that here # and remember it so we don't try to skip tasks on future plays if context.CLIARGS.get('start_at_task') is not None and play_context.start_at_task is None: self._start_at_done = True # and run the play using the strategy and cleanup on way out try: play_return = strategy.run(iterator, play_context) finally: strategy.cleanup() self._cleanup_processes() # now re-save the hosts that failed from the iterator to our internal list for host_name in iterator.get_failed_hosts(): self._failed_hosts[host_name] = True return play_return def cleanup(self): display.debug("RUNNING CLEANUP") self.terminate() self._final_q.close() self._cleanup_processes() # A bug exists in Python 2.6 that causes an exception to be raised during # interpreter shutdown. This is only an issue in our CI testing but we # hit it frequently enough to add a small sleep to avoid the issue. # This can be removed once we have split controller available in CI. 
# # Further information: # Issue: https://bugs.python.org/issue4106 # Fix: https://hg.python.org/cpython/rev/d316315a8781 # try: if (2, 6) == (sys.version_info[0:2]): time.sleep(0.0001) except (IndexError, AttributeError): # In case there is an issue getting the version info, don't raise an Exception pass def _cleanup_processes(self): if hasattr(self, '_workers'): for attempts_remaining in range(C.WORKER_SHUTDOWN_POLL_COUNT - 1, -1, -1): if not any(worker_prc and worker_prc.is_alive() for worker_prc in self._workers): break if attempts_remaining: time.sleep(C.WORKER_SHUTDOWN_POLL_DELAY) else: display.warning('One or more worker processes are still running and will be terminated.') for worker_prc in self._workers: if worker_prc and worker_prc.is_alive(): try: worker_prc.terminate() except AttributeError: pass def clear_failed_hosts(self): self._failed_hosts = dict() def get_inventory(self): return self._inventory def get_variable_manager(self): return self._variable_manager def get_loader(self): return self._loader def get_workers(self): return self._workers[:] def terminate(self): self._terminated = True def has_dead_workers(self): # [<WorkerProcess(WorkerProcess-2, stopped[SIGKILL])>, # <WorkerProcess(WorkerProcess-2, stopped[SIGTERM])> defunct = False for x in self._workers: if getattr(x, 'exitcode', None): defunct = True return defunct @lock_decorator(attr='_callback_lock') def send_callback(self, method_name, *args, **kwargs): for callback_plugin in [self._stdout_callback] + self._callback_plugins: # a plugin that set self.disabled to True will not be called # see osx_say.py example for such a plugin if getattr(callback_plugin, 'disabled', False): continue # a plugin can opt in to implicit tasks (such as meta). It does this # by declaring self.wants_implicit_tasks = True. wants_implicit_tasks = getattr(callback_plugin, 'wants_implicit_tasks', False) # try to find v2 method, fallback to v1 method, ignore callback if no method found methods = [] for possible in [method_name, 'v2_on_any']: gotit = getattr(callback_plugin, possible, None) if gotit is None: gotit = getattr(callback_plugin, possible.replace('v2_', ''), None) if gotit is not None: methods.append(gotit) # send clean copies new_args = [] # If we end up being given an implicit task, we'll set this flag in # the loop below. If the plugin doesn't care about those, then we # check and continue to the next iteration of the outer loop. is_implicit_task = False for arg in args: # FIXME: add play/task cleaners if isinstance(arg, TaskResult): new_args.append(arg.clean_copy()) # elif isinstance(arg, Play): # elif isinstance(arg, Task): else: new_args.append(arg) if isinstance(arg, Task) and arg.implicit: is_implicit_task = True if is_implicit_task and not wants_implicit_tasks: continue for method in methods: try: method(*new_args, **kwargs) except Exception as e: # TODO: add config toggle to make this fatal or not? display.warning(u"Failure using method (%s) in callback plugin (%s): %s" % (to_text(method_name), to_text(callback_plugin), to_text(e))) from traceback import format_tb from sys import exc_info display.vvv('Callback Exception: \n' + ' '.join(format_tb(exc_info()[2])))
closed
ansible/ansible
https://github.com/ansible/ansible
66,945
ansible_play_batch variable is including unreachable hosts
##### SUMMARY ansible_play_batch variable is including unreachable hosts ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_play_batch ##### ANSIBLE VERSION ```paste below ansible 2.9.2 config file = /etc/ansible/ansible.cfg configured module search path = ['/[removed]/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /[removed]/.local/lib/python3.5/site-packages/ansible executable location = /[removed]/.local/bin/ansible python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ```paste below DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 25 DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory'] DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = ignore ``` ##### OS / ENVIRONMENT ubuntu 16.04 ##### STEPS TO REPRODUCE When evaluating ansible_play_batch using jinja in a playbook, ansible_play_batch returns an unreachable hosts, causing the entire play example below to fail (due to run_once). To reproduce, target two machines, and reboot one machine mid-play (causing an unreachable on that machine). <!--- Paste example playbooks or commands between quotes below --> ```yaml - set_fact: emailBody: | {% for item in ansible_play_batch -%} {% if hostvars[item].kernelVersion != hostvars[item].currentKernel.stdout %} {{ hostvars[item].inventory_hostname + ' - ' + hostvars[item].currentKernel.stdout + ' - ' + hostvars[item].kernelVersion }} {% endif %} {%- endfor %} delegate_to: 127.0.0.1 run_once: True ``` ##### EXPECTED RESULTS ansible_play_batch only includes all non-failed/non-unreachable hosts per documentation found at https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html ##### ACTUAL RESULTS ansible_play_batch is including the rebooted (unreachable) machine. <!--- Paste verbatim command output between quotes --> ```paste below "The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'kernelVersion' ```
https://github.com/ansible/ansible/issues/66945
https://github.com/ansible/ansible/pull/74625
df6554c4ec8b1256067bc2510134ac49cfc3003c
cf11c38cafc88ec301f48f9673ec1f554e82a589
2020-01-30T13:55:13Z
python
2021-05-11T15:12:34Z
test/integration/targets/special_vars_hosts/aliases
closed
ansible/ansible
https://github.com/ansible/ansible
66,945
ansible_play_batch variable is including unreachable hosts
##### SUMMARY ansible_play_batch variable is including unreachable hosts ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_play_batch ##### ANSIBLE VERSION ```paste below ansible 2.9.2 config file = /etc/ansible/ansible.cfg configured module search path = ['/[removed]/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /[removed]/.local/lib/python3.5/site-packages/ansible executable location = /[removed]/.local/bin/ansible python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ```paste below DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 25 DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory'] DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = ignore ``` ##### OS / ENVIRONMENT ubuntu 16.04 ##### STEPS TO REPRODUCE When evaluating ansible_play_batch using jinja in a playbook, ansible_play_batch returns an unreachable hosts, causing the entire play example below to fail (due to run_once). To reproduce, target two machines, and reboot one machine mid-play (causing an unreachable on that machine). <!--- Paste example playbooks or commands between quotes below --> ```yaml - set_fact: emailBody: | {% for item in ansible_play_batch -%} {% if hostvars[item].kernelVersion != hostvars[item].currentKernel.stdout %} {{ hostvars[item].inventory_hostname + ' - ' + hostvars[item].currentKernel.stdout + ' - ' + hostvars[item].kernelVersion }} {% endif %} {%- endfor %} delegate_to: 127.0.0.1 run_once: True ``` ##### EXPECTED RESULTS ansible_play_batch only includes all non-failed/non-unreachable hosts per documentation found at https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html ##### ACTUAL RESULTS ansible_play_batch is including the rebooted (unreachable) machine. <!--- Paste verbatim command output between quotes --> ```paste below "The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'kernelVersion' ```
https://github.com/ansible/ansible/issues/66945
https://github.com/ansible/ansible/pull/74625
df6554c4ec8b1256067bc2510134ac49cfc3003c
cf11c38cafc88ec301f48f9673ec1f554e82a589
2020-01-30T13:55:13Z
python
2021-05-11T15:12:34Z
test/integration/targets/special_vars_hosts/inventory
closed
ansible/ansible
https://github.com/ansible/ansible
66,945
ansible_play_batch variable is including unreachable hosts
##### SUMMARY ansible_play_batch variable is including unreachable hosts ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_play_batch ##### ANSIBLE VERSION ```paste below ansible 2.9.2 config file = /etc/ansible/ansible.cfg configured module search path = ['/[removed]/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /[removed]/.local/lib/python3.5/site-packages/ansible executable location = /[removed]/.local/bin/ansible python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ```paste below DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 25 DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory'] DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = ignore ``` ##### OS / ENVIRONMENT ubuntu 16.04 ##### STEPS TO REPRODUCE When evaluating ansible_play_batch using Jinja in a playbook, ansible_play_batch returns unreachable hosts, causing the entire example play below to fail (due to run_once). To reproduce, target two machines and reboot one machine mid-play (causing that machine to become unreachable). <!--- Paste example playbooks or commands between quotes below --> ```yaml - set_fact: emailBody: | {% for item in ansible_play_batch -%} {% if hostvars[item].kernelVersion != hostvars[item].currentKernel.stdout %} {{ hostvars[item].inventory_hostname + ' - ' + hostvars[item].currentKernel.stdout + ' - ' + hostvars[item].kernelVersion }} {% endif %} {%- endfor %} delegate_to: 127.0.0.1 run_once: True ``` ##### EXPECTED RESULTS ansible_play_batch includes only non-failed/non-unreachable hosts, per the documentation at https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html ##### ACTUAL RESULTS ansible_play_batch is including the rebooted (unreachable) machine. <!--- Paste verbatim command output between quotes --> ```paste below "The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'kernelVersion' ```
https://github.com/ansible/ansible/issues/66945
https://github.com/ansible/ansible/pull/74625
df6554c4ec8b1256067bc2510134ac49cfc3003c
cf11c38cafc88ec301f48f9673ec1f554e82a589
2020-01-30T13:55:13Z
python
2021-05-11T15:12:34Z
test/integration/targets/special_vars_hosts/playbook.yml
closed
ansible/ansible
https://github.com/ansible/ansible
66,945
ansible_play_batch variable is including unreachable hosts
##### SUMMARY ansible_play_batch variable is including unreachable hosts ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_play_batch ##### ANSIBLE VERSION ```paste below ansible 2.9.2 config file = /etc/ansible/ansible.cfg configured module search path = ['/[removed]/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /[removed]/.local/lib/python3.5/site-packages/ansible executable location = /[removed]/.local/bin/ansible python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ```paste below DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 25 DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory'] DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = ignore ``` ##### OS / ENVIRONMENT ubuntu 16.04 ##### STEPS TO REPRODUCE When evaluating ansible_play_batch using Jinja in a playbook, ansible_play_batch returns unreachable hosts, causing the entire example play below to fail (due to run_once). To reproduce, target two machines and reboot one machine mid-play (causing that machine to become unreachable). <!--- Paste example playbooks or commands between quotes below --> ```yaml - set_fact: emailBody: | {% for item in ansible_play_batch -%} {% if hostvars[item].kernelVersion != hostvars[item].currentKernel.stdout %} {{ hostvars[item].inventory_hostname + ' - ' + hostvars[item].currentKernel.stdout + ' - ' + hostvars[item].kernelVersion }} {% endif %} {%- endfor %} delegate_to: 127.0.0.1 run_once: True ``` ##### EXPECTED RESULTS ansible_play_batch includes only non-failed/non-unreachable hosts, per the documentation at https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html ##### ACTUAL RESULTS ansible_play_batch is including the rebooted (unreachable) machine. <!--- Paste verbatim command output between quotes --> ```paste below "The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'kernelVersion' ```
https://github.com/ansible/ansible/issues/66945
https://github.com/ansible/ansible/pull/74625
df6554c4ec8b1256067bc2510134ac49cfc3003c
cf11c38cafc88ec301f48f9673ec1f554e82a589
2020-01-30T13:55:13Z
python
2021-05-11T15:12:34Z
test/integration/targets/special_vars_hosts/runme.sh
closed
ansible/ansible
https://github.com/ansible/ansible
70,740
Against the documentation password_hash doesn't depend on passlib in all use cases
##### SUMMARY The [documentation](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#hashing-filters) creates the impression that passlib is necessarily needed for using password_hash. But depending on the algorithm and OS used, it also works when only the Python crypt module is present. ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME playbooks_filters ##### ANSIBLE VERSION ``` ansible 2.9.6 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3/dist-packages/ansible executable location = /usr/bin/ansible python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0] ``` ##### CONFIGURATION ``` ``` ##### OS / ENVIRONMENT Linux (Mint 20.04) ##### ADDITIONAL INFORMATION From the documentation: > ... password_hash depends on passlib (https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html). I am using Linux Mint 20.04 and configured my host to use python3. I didn't install passlib and I am able to create user passwords with `password_hash('sha512')`. So, contrary to the documentation, password_hash doesn't depend on passlib for all algorithms. Could you please describe this in more detail in the documentation?
https://github.com/ansible/ansible/issues/70740
https://github.com/ansible/ansible/pull/74640
ddaa539ab115df10cfc34682049f40b7907b95f3
79e12ba98ef9d329bc416d1ca8a309b9194cf239
2020-07-19T18:10:55Z
python
2021-05-11T15:39:08Z
docs/docsite/rst/user_guide/playbooks_filters.rst
.. _playbooks_filters: ******************************** Using filters to manipulate data ******************************** Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of :ref:`built-in filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to transform data. You can :ref:`create custom Ansible filters as plugins <developing_filter_plugins>`, though we generally welcome new filters into the ansible-core repo so everyone can use them. Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally. .. contents:: :local: Handling undefined variables ============================ Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter. .. _defaulting_undefined_variables: Providing default values ------------------------ You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined:: {{ some_variable | default(5) }} In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role. Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use a default with a value in a nested data structure (in other words, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined. If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``:: {{ lookup('env', 'MY_USER') | default('admin', true) }} .. _omitting_undefined_variables: Making variables optional ------------------------- By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``:: - name: Touch files with an optional mode ansible.builtin.file: dest: "{{ item.path }}" state: touch mode: "{{ item.mode | default(omit) }}" loop: - path: /tmp/foo - path: /tmp/bar - path: /tmp/baz mode: "0444" In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the `mode=0444` option. .. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this: ``"{{ foo | default(None) | some_filter or omit }}"``. 
In this example, the default ``None`` (Python null) value will cause the later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to the later filters you are chaining, though, so be prepared for some trial and error if you do this. .. _forcing_variables_to_be_defined: Defining mandatory values ------------------------- By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with:: {{ variable | mandatory }} The variable value will be used as is, but the template evaluation will raise an error if it is undefined. Defining different values for true/false/null (ternary) ======================================================= You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9):: {{ (status == 'needs_restart') | ternary('restart', 'continue') }} In addition, you can define one value to use on true, one value on false and a third value on null (new in version 2.8):: {{ enabled | ternary('no shutdown', 'shutdown', omit) }} Managing data types =================== You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user :ref:`prompt <playbooks_prompts>` might return a string when your playbook needs a boolean value. Use the ``type_debug``, ``dict2items``, and ``items2dict`` filters to manage data types. You can also use the data type itself to cast a value as a specific data type. Discovering the data type ------------------------- .. versionadded:: 2.3 If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable:: {{ myvar | type_debug }} .. _dict_filter: Transforming dictionaries into lists ------------------------------------ .. versionadded:: 2.6 Use the ``dict2items`` filter to transform a dictionary into a list of items suitable for :ref:`looping <playbooks_loops>`:: {{ dict | dict2items }} Dictionary data (before applying the ``dict2items`` filter):: tags: Application: payment Environment: dev List data (after applying the ``dict2items`` filter):: - key: Application value: payment - key: Environment value: dev .. versionadded:: 2.8 The ``dict2items`` filter is the reverse of the ``items2dict`` filter. If you want to configure the names of the keys, the ``dict2items`` filter accepts two keyword arguments. Pass the ``key_name`` and ``value_name`` arguments to configure the names of the keys in the list output:: {{ files | dict2items(key_name='file', value_name='path') }} Dictionary data (before applying the ``dict2items`` filter):: files: users: /etc/passwd groups: /etc/group List data (after applying the ``dict2items`` filter):: - file: users path: /etc/passwd - file: groups path: /etc/group
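A common pattern is to feed ``dict2items`` straight into ``loop``. The following is a minimal sketch, not part of the original document; it reuses the ``tags`` dictionary from the example above:

.. code-block:: yaml

    - name: Print each tag as a key/value pair
      ansible.builtin.debug:
        msg: "{{ item.key }}={{ item.value }}"  # item.key/item.value come from dict2items
      loop: "{{ tags | dict2items }}"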
Transforming lists into dictionaries ------------------------------------ .. versionadded:: 2.7 Use the ``items2dict`` filter to transform a list into a dictionary, mapping the content into ``key: value`` pairs:: {{ tags | items2dict }} List data (before applying the ``items2dict`` filter):: tags: - key: Application value: payment - key: Environment value: dev Dictionary data (after applying the ``items2dict`` filter):: Application: payment Environment: dev The ``items2dict`` filter is the reverse of the ``dict2items`` filter. Not all lists use ``key`` to designate keys and ``value`` to designate values. For example:: fruits: - fruit: apple color: red - fruit: pear color: yellow - fruit: grapefruit color: yellow In this example, you must pass the ``key_name`` and ``value_name`` arguments to configure the transformation. For example:: {{ fruits | items2dict(key_name='fruit', value_name='color') }} If you do not pass these arguments, or do not pass the correct values for your list, you will see ``KeyError: key`` or ``KeyError: my_typo``. Forcing the data type --------------------- You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string:: - debug: msg: test when: some_string_value | bool If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string:: - shell: echo "only on Red Hat 6, derivatives, and later" when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6 .. versionadded:: 1.6 .. _filters_for_formatting_data: Formatting data: YAML and JSON ============================== You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging:: {{ some_variable | to_json }} {{ some_variable | to_yaml }} For human readable output, you can use:: {{ some_variable | to_nice_json }} {{ some_variable | to_nice_yaml }} You can change the indentation of either format:: {{ some_variable | to_nice_json(indent=2) }} {{ some_variable | to_nice_yaml(indent=8) }} The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_, which has a default 80 symbol string length limit. That causes an unexpected line break after the 80th symbol (if there is a space after the 80th symbol). To avoid such behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example:: {{ some_variable | to_yaml(indent=8, width=1337) }} {{ some_variable | to_nice_yaml(indent=8, width=1337) }} The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_.
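For instance, the ``width`` option matters when writing rendered YAML to a file. A minimal illustrative sketch, not from the original document; the ``app_config`` variable and destination path are assumptions:

.. code-block:: yaml

    - name: Write a variable out as YAML without wrapping long lines
      ansible.builtin.copy:
        content: "{{ app_config | to_nice_yaml(indent=2, width=1337) }}"  # app_config is an assumed variable
        dest: /tmp/app_config.yml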
If you are reading in some already formatted data:: {{ some_variable | from_json }} {{ some_variable | from_yaml }} for example:: tasks: - name: Register JSON output as a variable ansible.builtin.shell: cat /some/path/to/file.json register: result - name: Set a variable ansible.builtin.set_fact: myvar: "{{ result.stdout | from_json }}" Filter `to_json` and Unicode support ------------------------------------ By default `to_json` and `to_nice_json` will convert data received to ASCII, so:: {{ 'München'| to_json }} will return:: 'M\u00fcnchen' To keep Unicode characters, pass the parameter `ensure_ascii=False` to the filter:: {{ 'München'| to_json(ensure_ascii=False) }} 'München' .. versionadded:: 2.7 To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided. The ``from_yaml_all`` filter will return a generator of parsed YAML documents. for example:: tasks: - name: Register a file content as a variable ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml register: result - name: Print the transformed variable ansible.builtin.debug: msg: '{{ item }}' loop: '{{ result.stdout | from_yaml_all | list }}' Combining and selecting data ============================ You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data. .. _zip_filter: Combining items from multiple lists: zip and zip_longest -------------------------------------------------------- .. versionadded:: 2.3 To get a list combining the elements of other lists use ``zip``:: - name: Give me list combo of two lists ansible.builtin.debug: msg: "{{ [1,2,3,4,5,6] | zip(['a','b','c','d','e','f']) | list }}" # => [[1, "a"], [2, "b"], [3, "c"], [4, "d"], [5, "e"], [6, "f"]] - name: Give me shortest combo of two lists ansible.builtin.debug: msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}" # => [[1, "a"], [2, "b"], [3, "c"]] To always exhaust all lists use ``zip_longest``:: - name: Give me longest combo of three lists , fill with X ansible.builtin.debug: msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}" # => [[1, "a", 21], [2, "b", 22], [3, "c", 23], ["X", "d", "X"], ["X", "e", "X"], ["X", "f", "X"]] Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``:: {{ dict(keys_list | zip(values_list)) }} List data (before applying the ``zip`` filter):: keys_list: - one - two values_list: - apple - orange Dictionary data (after applying the ``zip`` filter):: one: apple two: orange Combining objects and subelements --------------------------------- .. versionadded:: 2.7 The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. 
For example, this expression:: {{ users | subelements('groups', skip_missing=True) }} Data before applying the ``subelements`` filter:: users: - name: alice authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub groups: - wheel - docker - name: bob authorized: - /tmp/bob/id_rsa.pub groups: - docker Data after applying the ``subelements`` filter:: - - name: alice groups: - wheel - docker authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub - wheel - - name: alice groups: - wheel - docker authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub - docker - - name: bob authorized: - /tmp/bob/id_rsa.pub groups: - docker - docker You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects:: - name: Set authorized ssh key, extracting just that data from 'users' ansible.posix.authorized_key: user: "{{ item.0.name }}" key: "{{ lookup('file', item.1) }}" loop: "{{ users | subelements('authorized') }}" .. _combine_filter: Combining hashes/dictionaries ----------------------------- .. versionadded:: 2.0 The ``combine`` filter allows hashes to be merged. For example, the following would override keys in one hash:: {{ {'a':1, 'b':2} | combine({'b':3}) }} The resulting hash would be:: {'a':1, 'b':3} The filter can also take multiple arguments to merge:: {{ a | combine(b, c, d) }} {{ [a, b, c, d] | combine }} In this case, keys in ``d`` would override those in ``c``, which would override those in ``b``, and so on. The filter also accepts two optional parameters: ``recursive`` and ``list_merge``. recursive Is a boolean, defaulting to ``False``. Whether ``combine`` should recursively merge nested hashes. Note: It does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``. list_merge Is a string; its possible values are ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp`` or ``prepend_rp``. It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists. .. code-block:: yaml default: a: x: default y: default b: default c: default patch: a: y: patch z: patch b: patch If ``recursive=False`` (the default), nested hashes aren't merged:: {{ default | combine(patch) }} This would result in:: a: y: patch z: patch b: patch c: default If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys:: {{ default | combine(patch, recursive=True) }} This would result in:: a: x: default y: patch z: patch b: patch c: default If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash:: default: a: - default patch: a: - patch .. code-block:: jinja {{ default | combine(patch) }} This would result in:: a: - patch If ``list_merge='keep'``, arrays from the left hash will be kept:: {{ default | combine(patch, list_merge='keep') }} This would result in:: a: - default If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash:: {{ default | combine(patch, list_merge='append') }} This would result in:: a: - default - patch If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash:: {{ default | combine(patch, list_merge='prepend') }} This would result in:: a: - patch - default If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present").
Duplicate elements that aren't in both hashes are kept:: default: a: - 1 - 1 - 2 - 3 patch: a: - 3 - 4 - 5 - 5 .. code-block:: jinja {{ default | combine(patch, list_merge='append_rp') }} This would result in:: a: - 1 - 1 - 2 - 3 - 4 - 5 - 5 If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended:: {{ default | combine(patch, list_merge='prepend_rp') }} This would result in:: a: - 3 - 4 - 5 - 5 - 1 - 1 - 2 ``recursive`` and ``list_merge`` can be used together:: default: a: a': x: default_value y: default_value list: - default_value b: - 1 - 1 - 2 - 3 patch: a: a': y: patch_value z: patch_value list: - patch_value b: - 3 - 4 - 4 - key: value .. code-block:: jinja {{ default | combine(patch, recursive=True, list_merge='append_rp') }} This would result in:: a: a': x: default_value y: patch_value z: patch_value list: - default_value - patch_value b: - 1 - 1 - 2 - 3 - 4 - 4 - key: value .. _extract_filter: Selecting values from arrays or hashtables ------------------------------------------- .. versionadded:: 2.1 The `extract` filter is used to map from a list of indices to a list of values from a container (hash or array):: {{ [0,2] | map('extract', ['x','y','z']) | list }} {{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }} The results of the above expressions would be:: ['x', 'z'] [42, 31] The filter can take another argument:: {{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }} This takes the list of hosts in group 'x', looks them up in `hostvars`, and then looks up the `ec2_ip_address` of the result. The final result is a list of IP addresses for the hosts in group 'x'. The third argument to the filter can also be a list, for a recursive lookup inside the container:: {{ ['a'] | map('extract', b, ['x','y']) | list }} This would return a list containing the value of `b['a']['x']['y']`. Combining lists --------------- This set of filters returns a list of combined lists. permutations ^^^^^^^^^^^^ To get permutations of a list:: - name: Give me largest permutations (order matters) ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations | list }}" - name: Give me permutations of sets of three ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations(3) | list }}" combinations ^^^^^^^^^^^^ Combinations always require a set size:: - name: Give me combinations for sets of two ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | ansible.builtin.combinations(2) | list }}" Also see the :ref:`zip_filter`. products ^^^^^^^^ The product filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables. This is roughly equivalent to nested for-loops in a generator expression. For example:: - name: Generate multiple hostnames ansible.builtin.debug: msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}" This would result in:: { "msg": "foo.com,bar.com" } .. _json_query_filter: Selecting JSON data: JSON queries --------------------------------- To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure. .. note:: This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
.. note:: You must manually install the **jmespath** dependency on the Ansible controller before using this filter. This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <http://jmespath.org/examples.html>`_. Consider this data structure:: { "domain_definition": { "domain": { "cluster": [ { "name": "cluster1" }, { "name": "cluster2" } ], "server": [ { "name": "server11", "cluster": "cluster1", "port": "8080" }, { "name": "server12", "cluster": "cluster1", "port": "8090" }, { "name": "server21", "cluster": "cluster2", "port": "9080" }, { "name": "server22", "cluster": "cluster2", "port": "9090" } ], "library": [ { "name": "lib1", "target": "cluster1" }, { "name": "lib2", "target": "cluster2" } ] } } } To extract all clusters from this structure, you can use the following query:: - name: Display all cluster names ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}" To extract all server names:: - name: Display all server names ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}" To extract ports from cluster1:: - name: Display all ports from cluster1 ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}" vars: server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port" .. note:: You can use a variable to make the query more readable. To print out the ports from cluster1 in a comma separated string:: - name: Display all ports from cluster1 as a string ansible.builtin.debug: msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}" .. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability. You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_:: - name: Display all ports from cluster1 ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}" .. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote. To get a hash map with all ports and names of a cluster:: - name: Display all server ports and names from cluster1 ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}" vars: server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}" To extract ports from all servers whose names start with 'server1':: - name: Display all ports from servers starting with 'server1' ansible.builtin.debug: msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}" vars: server_name_query: "domain.server[?starts_with(name,'server1')].port" To extract ports from all servers whose names contain 'server1':: - name: Display all ports from servers containing 'server1' ansible.builtin.debug: msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}" vars: server_name_query: "domain.server[?contains(name,'server1')].port" .. note:: When using ``starts_with`` and ``contains``, you have to use the ``to_json | from_json`` filter for correct parsing of the data structure.
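Query results can also be stored for later tasks with ``set_fact``. A minimal sketch added for illustration, reusing the ``domain_definition`` structure above:

.. code-block:: yaml

    - name: Save the cluster1 ports for later tasks
      ansible.builtin.set_fact:
        # backtick literals avoid quote escaping, as in the examples above
        cluster1_ports: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') }}"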
Randomizing data ================ When you need a randomly generated value, use one of these filters. .. _random_mac_filter: Random MAC addresses -------------------- .. versionadded:: 2.6 This filter can be used to generate a random MAC address from a string prefix. .. note:: This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection. To get a random MAC address from a string prefix starting with '52:54:00':: "{{ '52:54:00' | community.general.random_mac }}" # => '52:54:00:ef:1c:03' Note that if anything is wrong with the prefix string, the filter will issue an error. .. versionadded:: 2.9 As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:: "{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}" .. _random_filter: Random items or numbers ----------------------- The ``random`` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range. To get a random item from a list:: "{{ ['a','b','c'] | random }}" # => 'c' To get a random number between 0 (inclusive) and a specified integer (exclusive):: "{{ 60 | random }} * * * * root /script/from/cron" # => '21 * * * * root /script/from/cron' To get a random number from 0 to 100 but in steps of 10:: {{ 101 | random(step=10) }} # => 70 To get a random number from 1 to 100 but in steps of 10:: {{ 101 | random(1, 10) }} # => 31 {{ 101 | random(start=1, step=10) }} # => 51 You can initialize the random number generator from a seed to create random-but-idempotent numbers:: "{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron" Shuffling a list ---------------- The ``shuffle`` filter randomizes an existing list, giving a different order every invocation. To get a random list from an existing list:: {{ ['a','b','c'] | shuffle }} # => ['c','a','b'] {{ ['a','b','c'] | shuffle }} # => ['b','c','a'] You can initialize the shuffle generator from a seed to generate a random-but-idempotent order:: {{ ['a','b','c'] | shuffle(seed=inventory_hostname) }} # => ['b','a','c'] The shuffle filter returns a list whenever possible. If you use it with a non 'listable' item, the filter does nothing. .. _list_filters: Managing list variables ======================= You can search for the minimum or maximum value in a list, or flatten a multi-level list. To get the minimum value from a list of numbers:: {{ list1 | min }} .. versionadded:: 2.11 To get the minimum value in a list of objects:: {{ [{'val': 1}, {'val': 2}] | min(attribute='val') }} To get the maximum value from a list of numbers:: {{ [3, 4, 2] | max }} .. versionadded:: 2.11 To get the maximum value in a list of objects:: {{ [{'val': 1}, {'val': 2}] | max(attribute='val') }} .. versionadded:: 2.5 Flatten a list (same thing the `flatten` lookup does):: {{ [3, [4, 2] ] | flatten }} # => [3, 4, 2] Flatten only the first level of a list (akin to the `items` lookup):: {{ [3, [4, [2]] ] | flatten(levels=1) }} # => [3, 4, [2]] .. versionadded:: 2.11 To preserve nulls in a list (by default, ``flatten`` removes them):: {{ [3, None, [4, [2]] ] | flatten(levels=1, skip_nulls=False) }} # => [3, None, 4, [2]]
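``flatten`` is handy for looping over several lists at once. The sketch below is an illustrative addition; ``packages_common`` and ``packages_extra`` are assumed list variables:

.. code-block:: yaml

    - name: Install packages from two lists in one pass
      ansible.builtin.package:
        name: "{{ [packages_common, packages_extra] | flatten }}"  # both variables are assumptions
        state: present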
.. _set_theory_filters: Selecting from sets or lists (set theory) ========================================= You can select or combine items from sets or lists. .. versionadded:: 1.4 To get a unique set from a list:: # list1: [1, 2, 5, 1, 3, 4, 10] {{ list1 | unique }} # => [1, 2, 5, 3, 4, 10] To get a union of two lists:: # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | union(list2) }} # => [1, 2, 5, 1, 3, 4, 10, 11, 99] To get the intersection of 2 lists (unique list of all items in both):: # list1: [1, 2, 5, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | intersect(list2) }} # => [1, 2, 5, 3, 4] To get the difference of 2 lists (items in 1 that don't exist in 2):: # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | difference(list2) }} # => [10] To get the symmetric difference of 2 lists (items exclusive to each list):: # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | symmetric_difference(list2) }} # => [10, 11, 99] .. _math_stuff: Calculating numbers (math) ========================== .. versionadded:: 1.9 You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round(). Get the logarithm (default is e):: {{ 8 | log }} # => 2.0794415416798357 Get the base 10 logarithm:: {{ 8 | log(10) }} # => 0.9030899869919435 Give me the power of 2! (or 5):: {{ 8 | pow(5) }} # => 32768.0 Square root, or the 5th:: {{ 8 | root }} # => 2.8284271247461903 {{ 8 | root(5) }} # => 1.5157165665103982 Managing network interactions ============================= These filters help you with common network tasks. .. note:: These filters have migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection. .. _ipaddr_filter: IP address filters ------------------ .. versionadded:: 1.9 To test if a string is a valid IP address:: {{ myvar | ansible.netcommon.ipaddr }} You can also require a specific IP protocol version:: {{ myvar | ansible.netcommon.ipv4 }} {{ myvar | ansible.netcommon.ipv6 }} The IP address filter can also be used to extract specific information from an IP address. For example, to get the IP address itself from a CIDR, you can use:: {{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }} # => 192.0.2.1 More information about the ``ipaddr`` filter and a complete usage guide can be found in :ref:`playbooks_filters_ipaddr`. .. _network_filters: Network CLI filters ------------------- .. versionadded:: 2.4 To convert the output of a network device CLI command into structured JSON output, use the ``parse_cli`` filter:: {{ output | ansible.netcommon.parse_cli('path/to/spec') }} The ``parse_cli`` filter will load the spec file and pass the command output through it, returning JSON output. The spec file must be valid, properly formatted YAML; it defines how to parse the CLI output and return JSON data. Below is an example of a valid spec file that will parse the output from the ``show vlan`` command. .. code-block:: yaml --- vars: vlan: vlan_id: "{{ item.vlan_id }}" name: "{{ item.name }}" enabled: "{{ item.state != 'act/lshut' }}" state: "{{ item.state }}" keys: vlans: value: "{{ vlan }}" items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)" state_static: value: present The spec file above will return a JSON data structure that is a list of hashes with the parsed VLAN information. The same command could be parsed into a hash by using the key and values directives.
Here is an example of how to parse the output into a hash value using the same ``show vlan`` command. .. code-block:: yaml --- vars: vlan: key: "{{ item.vlan_id }}" values: vlan_id: "{{ item.vlan_id }}" name: "{{ item.name }}" enabled: "{{ item.state != 'act/lshut' }}" state: "{{ item.state }}" keys: vlans: value: "{{ vlan }}" items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)" state_static: value: present Another common use case for parsing CLI commands is to break the output of a large command into blocks that can be parsed individually. This can be done using the ``start_block`` and ``end_block`` directives. .. code-block:: yaml --- vars: interface: name: "{{ item[0].match[0] }}" state: "{{ item[1].state }}" mode: "{{ item[2].match[0] }}" keys: interfaces: value: "{{ interface }}" start_block: "^Ethernet.*$" end_block: "^$" items: - "^(?P<name>Ethernet\\d\\/\\d*)" - "admin state is (?P<state>.+)," - "Port mode is (.+)" The example above will parse the output of ``show interface`` into a list of hashes. The network filters also support parsing the output of a CLI command using the TextFSM library. To parse the CLI output with TextFSM, use the following filter:: {{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }} Use of the TextFSM filter requires the TextFSM library to be installed. Network XML filters ------------------- .. versionadded:: 2.5 To convert the XML output of a network device command into structured JSON output, use the ``parse_xml`` filter:: {{ output | ansible.netcommon.parse_xml('path/to/spec') }} The ``parse_xml`` filter will load the spec file and pass the command output through it, returning JSON output. The spec file must be valid, properly formatted YAML; it defines how to parse the XML output and return JSON data. Below is an example of a valid spec file that will parse the output from the ``show vlan | display xml`` command. .. code-block:: yaml --- vars: vlan: vlan_id: "{{ item.vlan_id }}" name: "{{ item.name }}" desc: "{{ item.desc }}" enabled: "{{ item.state.get('inactive') != 'inactive' }}" state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}" keys: vlans: value: "{{ vlan }}" top: configuration/vlans/vlan items: vlan_id: vlan-id name: name desc: description state: ".[@inactive='inactive']" The spec file above will return a JSON data structure that is a list of hashes with the parsed VLAN information. The same command could be parsed into a hash by using the key and values directives. Here is an example of how to parse the output into a hash value using the same ``show vlan | display xml`` command. .. code-block:: yaml --- vars: vlan: key: "{{ item.vlan_id }}" values: vlan_id: "{{ item.vlan_id }}" name: "{{ item.name }}" desc: "{{ item.desc }}" enabled: "{{ item.state.get('inactive') != 'inactive' }}" state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}" keys: vlans: value: "{{ vlan }}" top: configuration/vlans/vlan items: vlan_id: vlan-id name: name desc: description state: ".[@inactive='inactive']" The value of ``top`` is the XPath relative to the XML root node. In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``, which is an XPath expression relative to the root node (<rpc-reply>). ``configuration`` in the value of ``top`` is the outermost container node, and ``vlan`` is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions that select elements. Each XPath expression is relative to the value of ``top``. For example, ``vlan_id`` in the spec file is a user-defined name, and its value ``vlan-id`` is an XPath expression relative to the value of ``top``. Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML:: <rpc-reply> <configuration> <vlans> <vlan inactive="inactive"> <name>vlan-1</name> <vlan-id>200</vlan-id> <description>This is vlan-1</description> </vlan> </vlans> </configuration> </rpc-reply> .. note:: For more information on supported XPath expressions, see `XPath Support <https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support>`_. Network VLAN filters -------------------- .. versionadded:: 2.8 Use the ``vlan_parser`` filter to transform an unsorted list of VLAN integers into a sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties: * VLANs are listed in ascending order. * Three or more consecutive VLANs are listed with a dash. * The first line of the list can be first_line_len characters long. * Subsequent list lines can be other_line_len characters. To sort a VLAN list:: {{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }} This example renders the following sorted list:: ['100,1688,3002-3005,3999'] Another example Jinja template:: {% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %} switchport trunk allowed vlan {{ parsed_vlans[0] }} {% for i in range (1, parsed_vlans | count) %} switchport trunk allowed vlan add {{ parsed_vlans[i] }} {% endfor %} This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration. .. _hash_filters: Encrypting and checksumming strings and passwords ================================================= .. versionadded:: 1.9 To get the sha1 hash of a string:: {{ 'test1' | hash('sha1') }} # => "b444ac06613fc8d63795be9ad0beaf55011936ac" To get the md5 hash of a string:: {{ 'test1' | hash('md5') }} # => "5a105e8b9d40e1329780d62ea2265d8a" Get a string checksum:: {{ 'test2' | checksum }} # => "109f4b3c50d7b0df729d299bc6f8e9ef9066971f" Other hashes (platform dependent):: {{ 'test2' | hash('blowfish') }} To get a sha512 password hash (random salt):: {{ 'passwordsaresecret' | password_hash('sha512') }} # => "$6$UIv3676O/ilZzWEE$ktEfFF19NQPF2zyxqxGkAceTnbEgpEKuGBtk6MlU4v2ZorWaVQUMyurgmHCh2Fr4wpmQ/Y.AlXMJkRnIS4RfH/" To get a sha256 password hash with a specific salt:: {{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }} # => "$5$mysecretsalt$ReKNyDYjkKNqRVwouShhsEqZ3VOE8eoVO4exihOfvG4" An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs:: {{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }} # => "$6$43927$lQxPKz2M2X.NWO.gK.t7phLwOKQMcSq72XxDZQ0XzYV6DlL1OD72h417aj16OnHTGxNzhftXJQBcjbunLEepM0" The available hash types depend on the control system running Ansible; 'hash' depends on hashlib, while password_hash depends on passlib (https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html), although on some platforms common schemes such as sha512 also work through Python's crypt module when passlib is not installed.
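In practice, the salted hash is usually fed to the ``user`` module. A minimal sketch, added for illustration; the ``user_password`` variable and the ``deploy`` user name are assumptions (the password would normally come from Vault or a prompt):

.. code-block:: yaml

    - name: Create a user with an idempotent password hash
      ansible.builtin.user:
        name: deploy  # assumed user name
        password: "{{ user_password | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}"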
.. versionadded:: 2.7 Some hash types allow providing a rounds parameter:: {{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }} # => "$5$rounds=10000$mysecretsalt$Tkm80llAxD4YHll6AgNIztKn0vzAACsuuEfYeGP7tm7" .. _other_useful_filters: Manipulating text ================= Several filters work with text, including URLs, file names, and path names. .. _comment_filter: Adding comments to files ------------------------ The ``comment`` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default, Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example, the following:: {{ "Plain style (default)" | comment }} produces this output: .. code-block:: text # # Plain style (default) # Ansible offers styles for comments in C (``//...``), C block (``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``):: {{ "C style" | comment('c') }} {{ "C block style" | comment('cblock') }} {{ "Erlang style" | comment('erlang') }} {{ "XML style" | comment('xml') }} You can define a custom comment character. This filter:: {{ "My Special Case" | comment(decoration="! ") }} produces: .. code-block:: text ! ! My Special Case ! You can fully customize the comment style:: {{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }} That creates the following output: .. code-block:: text ####### # # Custom style # ####### ### # The filter can also be applied to any Ansible variable. For example, to make the output of the ``ansible_managed`` variable more readable, we can change the definition in the ``ansible.cfg`` file to this: .. code-block:: jinja [defaults] ansible_managed = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host} and then use the variable with the `comment` filter:: {{ ansible_managed | comment }} which produces this output: .. code-block:: sh # # This file is managed by Ansible. # # template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2 # date: 2015-09-10 11:02:58 # user: ansible # host: myhost # URLEncode Variables ------------------- The ``urlencode`` filter quotes data for use in a URL path or query using UTF-8:: {{ 'Trollhättan' | urlencode }} # => 'Trollh%C3%A4ttan' Splitting URLs -------------- .. versionadded:: 2.4 The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL.
With no arguments, returns a dictionary of all the fields:: {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }} # => 'www.acme.com' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }} # => 'user:[email protected]:9000' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }} # => 'user' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }} # => 'password' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }} # => '/dir/index.html' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }} # => '9000' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }} # => 'http' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }} # => 'query=term' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }} # => 'fragment' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }} # => # { # "fragment": "fragment", # "hostname": "www.acme.com", # "netloc": "user:[email protected]:9000", # "password": "password", # "path": "/dir/index.html", # "port": 9000, # "query": "query=term", # "scheme": "http", # "username": "user" # } Searching strings with regular expressions ------------------------------------------ To search in a string or extract parts of a string with a regular expression, use the ``regex_search`` filter:: # Extracts the database name from a string {{ 'server1/database42' | regex_search('database[0-9]+') }} # => 'database42' # Returns an empty string if it cannot find a match {{ 'ansible' | regex_search('foobar') }} # => '' # Example for a case insensitive search in multiline mode {{ 'foo\nBAR' | regex_search('^bar', multiline=True, ignorecase=True) }} # => 'BAR' # Extracts server and database id from a string {{ 'server1/database42' | regex_search('server([0-9]+)/database([0-9]+)', '\\1', '\\2') }} # => ['1', '42'] # Extracts dividend and divisor from a division {{ '21/42' | regex_search('(?P<dividend>[0-9]+)/(?P<divisor>[0-9]+)', '\\g<dividend>', '\\g<divisor>') }} # => ['21', '42'] To extract all occurrences of regex matches in a string, use the ``regex_findall`` filter:: # Returns a list of all IPv4 addresses in the string {{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }} # => ['8.8.8.8', '8.8.4.4'] # Returns all lines that end with "ar" {{ 'CAR\ntar\nfoo\nbar\n' | regex_findall('^.ar$', multiline=True, ignorecase=True) }} # => ['CAR', 'tar', 'bar'] To replace text in a string with regex, use the ``regex_replace`` filter:: # Convert "ansible" to "able" {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }} # => 'able' # Convert "foobar" to "bar" {{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }} # => 'bar' # Convert "localhost:80" to "localhost, 80" using named groups {{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }} # => 'localhost, 80' # Convert "localhost:80" to "localhost" {{ 'localhost:80' | regex_replace(':80') }} # => 'localhost' # Comment all lines that end with "ar" {{ 'CAR\ntar\nfoo\nbar\n' | regex_replace('^(.ar)$', '#\\1', multiline=True, ignorecase=True) }} # => '#CAR\n#tar\nfoo\n#bar\n' .. 
note:: If you want to match the whole string and you are using ``*``, make sure to always wrap your regular expression in the start/end anchors. For example, ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements:: # add "https://" prefix to each item in a list GOOD: {{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }} {{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }} {{ hosts | map('regex_replace', '^', 'https://') | list }} BAD: {{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }} # append ':80' to each item in a list GOOD: {{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }} {{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }} {{ hosts | map('regex_replace', '$', ':80') | list }} BAD: {{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }} .. note:: Prior to Ansible 2.0, if the ``regex_replace`` filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments), then you needed to escape backreferences (for example, ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``). .. versionadded:: 2.0 To escape special characters within a standard Python regex, use the ``regex_escape`` filter (using the default ``re_type='python'`` option):: # convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$' {{ '^f.*o(.*)$' | regex_escape() }} .. versionadded:: 2.8 To escape special characters within a POSIX basic regex, use the ``regex_escape`` filter with the ``re_type='posix_basic'`` option:: # convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$' {{ '^f.*o(.*)$' | regex_escape('posix_basic') }} Managing file names and path names ---------------------------------- To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':: {{ path | basename }} To get the last name of a Windows style file path (new in version 2.0):: {{ path | win_basename }} To separate the Windows drive letter from the rest of a file path (new in version 2.0):: {{ path | win_splitdrive }} To get only the Windows drive letter:: {{ path | win_splitdrive | first }} To get the rest of the path without the drive letter:: {{ path | win_splitdrive | last }} To get the directory from a path:: {{ path | dirname }} To get the directory from a Windows path (new in version 2.0):: {{ path | win_dirname }} To expand a path containing a tilde (`~`) character (new in version 1.5):: {{ path | expanduser }} To expand a path containing environment variables:: {{ path | expandvars }} .. note:: `expandvars` expands local variables; using it on remote paths can lead to errors. .. versionadded:: 2.6 To get the real path of a link (new in version 1.8):: {{ path | realpath }} To get the relative path of a link, from a start point (new in version 1.7):: {{ path | relpath('/etc') }} To get the root and extension of a path or file name (new in version 2.0):: # with path == 'nginx.conf' the return would be ('nginx', '.conf') {{ path | splitext }} The ``splitext`` filter always returns a pair of strings. The individual components can be accessed by using the ``first`` and ``last`` filters:: # with path == 'nginx.conf' the return would be 'nginx' {{ path | splitext | first }} # with path == 'nginx.conf' the return would be '.conf' {{ path | splitext | last }} To join one or more path components:: {{ ('/etc', path, 'subdir', file) | path_join }} .. versionadded:: 2.10
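``path_join`` composes paths without worrying about duplicate or missing separators. A minimal illustrative sketch, not from the original document; ``app_name`` is an assumed variable:

.. code-block:: yaml

    - name: Build a configuration path from components
      ansible.builtin.set_fact:
        app_conf_path: "{{ ('/etc', app_name, 'conf.d', 'main.conf') | path_join }}"  # app_name is an assumption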
Manipulating strings ==================== To add quotes for shell usage:: - name: Run a shell command ansible.builtin.shell: echo {{ string_value | quote }} To concatenate a list into a string:: {{ list | join(" ") }} .. versionadded:: 2.11 To split a string into a list:: {{ csv_string | split(",") }} To work with Base64 encoded strings:: {{ encoded | b64decode }} {{ decoded | string | b64encode }} As of version 2.6, you can define the type of encoding to use; the default is ``utf-8``:: {{ encoded | b64decode(encoding='utf-16-le') }} {{ decoded | string | b64encode(encoding='utf-16-le') }} .. note:: The ``string`` filter is only required for Python 2 and ensures that the text to encode is a Unicode string. Without that filter before ``b64encode``, the wrong value would be encoded. .. versionadded:: 2.6 Managing UUIDs ============== To create a namespaced UUIDv5:: {{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }} .. versionadded:: 2.10 To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E':: {{ string | to_uuid }} .. versionadded:: 1.9 To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:map>`:: # get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host {{ ansible_mounts | map(attribute='mount') | join(',') }} Handling dates and times ======================== To get a date object from a string, use the `to_datetime` filter:: # Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format {{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }} # Get remaining seconds after delta has been calculated. NOTE: This does NOT convert years, days, hours, and so on to seconds. For that, use total_seconds() {{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }} # This expression evaluates to "12" and not "132". Delta is 2 hours, 12 seconds # get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds {{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }} .. note:: For a full list of format codes for working with Python date format strings, see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior. .. versionadded:: 2.4 To format a date using a string (like with the shell date command), use the "strftime" filter:: # Display year-month-day {{ '%Y-%m-%d' | strftime }} # => "2021-03-19" # Display hour:min:sec {{ '%H:%M:%S' | strftime }} # => "21:51:04" # Use ansible_date_time.epoch fact {{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }} # => "2021-03-19 21:54:09" # Use arbitrary epoch value {{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01 {{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04 .. note:: To get all string possibilities, check https://docs.python.org/3/library/time.html#time.strftime
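``strftime`` is often combined with other filters when naming artifacts. A minimal sketch added for illustration; ``backup_dir`` is an assumed variable:

.. code-block:: yaml

    - name: Build a timestamped backup file name
      ansible.builtin.set_fact:
        backup_file: "{{ backup_dir }}/config-{{ '%Y%m%d%H%M%S' | strftime }}.bak"  # backup_dir is an assumption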
Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret, including its hash:: {{ configmap_resource_definition | kuberernetes.core.k8s_config_resource_name }} This can then be used to reference hashes in Pod specifications:: my_secret: kind: Secret metadata: name: my_secret_name deployment_resource: kind: Deployment spec: template: spec: containers: - envFrom: - secretRef: name: {{ my_secret | kuberernetes.core.k8s_config_resource_name }} .. versionadded:: 2.8 .. _PyYAML library: https://pyyaml.org/ .. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation .. seealso:: :ref:`about_playbooks` An introduction to playbooks :ref:`playbooks_conditionals` Conditional statements in playbooks :ref:`playbooks_variables` All about variables :ref:`playbooks_loops` Looping in playbooks :ref:`playbooks_reuse_roles` Playbook organization by roles :ref:`playbooks_best_practices` Tips and tricks for playbooks `User Mailing List <https://groups.google.com/group/ansible-devel>`_ Have a question? Stop by the google group! `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
closed
ansible/ansible
https://github.com/ansible/ansible
71,343
yum_repository logs Invoked with at emergency level
##### SUMMARY
When using the yum_repository without setting a priority param, the call to _log_invocation logs at emergency level to the journal

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
yum_repository

##### ANSIBLE VERSION
```
ansible 2.9.0
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Apr 2 2020, 13:34:55) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```

##### CONFIGURATION
```
[root@yolandi ansible]# ansible-config dump --only-changed
[root@yolandi ansible]#
```

##### OS / ENVIRONMENT
CentOS 7.8

##### STEPS TO REPRODUCE
```
- hosts: localhost
  tasks:
    - name: Create dummy yum repo
      yum_repository:
        name: "Dummy}"
        description: "Clearly a dummy"
        baseurl: "https://example.com"
        enabled: no
```

##### EXPECTED RESULTS
_log_invocation isn't logged at a high level, and thus isn't caught by alert-level filtering

##### ACTUAL RESULTS
Because log_args gets passed in as `priority='None'`, the log_invocation call gets logged at the highest level

```
PLAY [localhost] ************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************
ok: [localhost]

TASK [Create dummy yum repo] ************************************************************************************
2020 Aug 19 11:02:48 yolandi ansible-yum_repository Invoked with name=Dummy} description=Clearly a dummy baseurl=['https://example.com'] enabled=False reposdir=/etc/yum.repos.d state=present follow=False bandwidth=None cost=None deltarpm_metadata_percentage=None deltarpm_percentage=None enablegroups=None exclude=None failovermethod=None file=None gpgcakey=None gpgcheck=None gpgkey=None http_caching=None include=None includepkgs=None ip_resolve=None keepalive=None keepcache=None metadata_expire=None metadata_expire_filter=None metalink=None mirrorlist=None mirrorlist_expire=None params=None password=NOT_LOGGING_PARAMETER priority=None protect=None proxy=None proxy_password=NOT_LOGGING_PARAMETER proxy_username=None repo_gpgcheck=None retries=None s3_enabled=None skip_if_unavailable=None sslcacert=None ssl_check_cert_permissions=None sslclientcert=None sslclientkey=None sslverify=None throttle=None timeout=None ui_repoid_vars=None username=None async=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None src=None force=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None unsafe_writes=None
ok: [localhost]

PLAY RECAP ******************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
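The mechanism is visible in `AnsibleModule.log()` (quoted later in this document): every sanitized parameter name is upper-cased into a journald field. A hedged sketch of the collision, using values from the reproduction above:

```python
# log() forwards module parameters as journal fields via arg.upper(), so a
# module option named "priority" shadows journald's reserved PRIORITY field.
log_args = {'name': 'Dummy}', 'priority': None}
journal_args = [(arg.upper(), str(val)) for arg, val in log_args.items()]
print(journal_args)  # [('NAME', 'Dummy}'), ('PRIORITY', 'None')]
# journal.send(MESSAGE=..., **dict(journal_args)) then hands journald a
# PRIORITY it cannot parse as a 0-7 syslog level, which is what surfaces
# here as an emergency-level entry.
```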
https://github.com/ansible/ansible/issues/71343
https://github.com/ansible/ansible/pull/74559
2c93b220438797581dc34232f67ce08a2f8ad33b
1006363589f2c82e3feba43253532b76f6c232ba
2020-08-19T01:04:14Z
python
2021-05-11T19:05:56Z
changelogs/fragments/71343_yum_repository.yml
lib/ansible/module_utils/basic.py
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013 # Copyright (c), Toshio Kuratomi <[email protected]> 2016 # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause) from __future__ import absolute_import, division, print_function __metaclass__ = type FILE_ATTRIBUTES = { 'A': 'noatime', 'a': 'append', 'c': 'compressed', 'C': 'nocow', 'd': 'nodump', 'D': 'dirsync', 'e': 'extents', 'E': 'encrypted', 'h': 'blocksize', 'i': 'immutable', 'I': 'indexed', 'j': 'journalled', 'N': 'inline', 's': 'zero', 'S': 'synchronous', 't': 'notail', 'T': 'blockroot', 'u': 'undelete', 'X': 'compressedraw', 'Z': 'compresseddirty', } # Ansible modules can be written in any language. # The functions available here can be used to do many common tasks, # to simplify development of Python modules. import __main__ import atexit import errno import datetime import grp import fcntl import locale import os import pwd import platform import re import select import shlex import shutil import signal import stat import subprocess import sys import tempfile import time import traceback import types from itertools import chain, repeat try: import syslog HAS_SYSLOG = True except ImportError: HAS_SYSLOG = False try: from systemd import journal # Makes sure that systemd.journal has method sendv() # Double check that journal has method sendv (some packages don't) has_journal = hasattr(journal, 'sendv') except ImportError: has_journal = False HAVE_SELINUX = False try: import ansible.module_utils.compat.selinux as selinux HAVE_SELINUX = True except ImportError: pass # Python2 & 3 way to get NoneType NoneType = type(None) from ansible.module_utils.compat import selectors from ._text import to_native, to_bytes, to_text from ansible.module_utils.common.text.converters import ( jsonify, container_to_bytes as json_dict_unicode_to_bytes, container_to_text as json_dict_bytes_to_unicode, ) from ansible.module_utils.common.arg_spec import ModuleArgumentSpecValidator from ansible.module_utils.common.text.formatters import ( lenient_lowercase, bytes_to_human, human_to_bytes, SIZE_RANGES, ) try: from ansible.module_utils.common._json_compat import json except ImportError as e: print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e))) sys.exit(1) AVAILABLE_HASH_ALGORITHMS = dict() try: import hashlib # python 2.7.9+ and 2.7.0+ for attribute in ('available_algorithms', 'algorithms'): algorithms = getattr(hashlib, attribute, None) if algorithms: break if algorithms is None: # python 2.5+ algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') for algorithm in algorithms: AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm) # we may have been able to import md5 but it could still not be available try: hashlib.md5() except ValueError: AVAILABLE_HASH_ALGORITHMS.pop('md5', None) except Exception: import sha AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha} try: import md5 AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5 except Exception: pass from ansible.module_utils.common._collections_compat import ( KeysView, Mapping, MutableMapping, Sequence, MutableSequence, Set, MutableSet, ) from ansible.module_utils.common.process import get_bin_path from ansible.module_utils.common.file import ( _PERM_BITS as PERM_BITS, _EXEC_PERM_BITS as EXEC_PERM_BITS, _DEFAULT_PERM as DEFAULT_PERM, is_executable, format_attributes, get_flags_from_attributes, ) from ansible.module_utils.common.sys_info import ( get_distribution, 
get_distribution_version, get_platform_subclass, ) from ansible.module_utils.pycompat24 import get_exception, literal_eval from ansible.module_utils.common.parameters import ( env_fallback, remove_values, sanitize_keys, DEFAULT_TYPE_VALIDATORS, PASS_VARS, PASS_BOOLS, ) from ansible.module_utils.errors import AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, UnsupportedError from ansible.module_utils.six import ( PY2, PY3, b, binary_type, integer_types, iteritems, string_types, text_type, ) from ansible.module_utils.six.moves import map, reduce, shlex_quote from ansible.module_utils.common.validation import ( check_missing_parameters, safe_eval, ) from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean from ansible.module_utils.common.warnings import ( deprecate, get_deprecation_messages, get_warning_messages, warn, ) # Note: When getting Sequence from collections, it matches with strings. If # this matters, make sure to check for strings before checking for sequencetype SEQUENCETYPE = frozenset, KeysView, Sequence PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I) imap = map try: # Python 2 unicode except NameError: # Python 3 unicode = text_type try: # Python 2 basestring except NameError: # Python 3 basestring = string_types _literal_eval = literal_eval # End of deprecated names # Internal global holding passed in params. This is consulted in case # multiple AnsibleModules are created. Otherwise each AnsibleModule would # attempt to read from stdin. Other code should not use this directly as it # is an internal implementation detail _ANSIBLE_ARGS = None FILE_COMMON_ARGUMENTS = dict( # These are things we want. About setting metadata (mode, ownership, permissions in general) on # created files (these are used by set_fs_attributes_if_different and included in # load_file_common_arguments) mode=dict(type='raw'), owner=dict(type='str'), group=dict(type='str'), seuser=dict(type='str'), serole=dict(type='str'), selevel=dict(type='str'), setype=dict(type='str'), attributes=dict(type='str', aliases=['attr']), unsafe_writes=dict(type='bool', default=False, fallback=(env_fallback, ['ANSIBLE_UNSAFE_WRITES'])), # should be available to any module using atomic_move ) PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?') # Used for parsing symbolic file perms MODE_OPERATOR_RE = re.compile(r'[+=-]') USERS_RE = re.compile(r'[^ugo]') PERMS_RE = re.compile(r'[^rwxXstugo]') # Used for determining if the system is running a new enough python version # and should only restrict on our documented minimum versions _PY3_MIN = sys.version_info[:2] >= (3, 5) _PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,) _PY26 = (2, 6) == sys.version_info[:2] _PY_MIN = _PY3_MIN or _PY2_MIN if not _PY_MIN: print( '\n{"failed": true, ' '"msg": "ansible-core requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines()) ) sys.exit(1) if _PY26: deprecate( 'ansible-core 2.13 will require Python 2.7 or newer on the target. ' 'Current version: %s' % ''.join(sys.version.splitlines()), version='2.13', ) # # Deprecated functions # def get_platform(): ''' **Deprecated** Use :py:func:`platform.system` directly. :returns: Name of the platform the module is running on in a native string Returns a native string that labels the platform ("Linux", "Solaris", etc). 
Currently, this is the result of calling :py:func:`platform.system`. ''' return platform.system() # End deprecated functions # # Compat shims # def load_platform_subclass(cls, *args, **kwargs): """**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead""" platform_cls = get_platform_subclass(cls) return super(cls, platform_cls).__new__(platform_cls) def get_all_subclasses(cls): """**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead""" return list(_get_all_subclasses(cls)) # End compat shims def heuristic_log_sanitize(data, no_log_values=None): ''' Remove strings that look like passwords from log messages ''' # Currently filters: # user:pass@foo/whatever and http://username:pass@wherever/foo # This code has false positives and consumes parts of logs that are # not passwds # begin: start of a passwd containing string # end: end of a passwd containing string # sep: char between user and passwd # prev_begin: where in the overall string to start a search for # a passwd # sep_search_end: where in the string to end a search for the sep data = to_native(data) output = [] begin = len(data) prev_begin = begin sep = 1 while sep: # Find the potential end of a passwd try: end = data.rindex('@', 0, begin) except ValueError: # No passwd in the rest of the data output.insert(0, data[0:begin]) break # Search for the beginning of a passwd sep = None sep_search_end = end while not sep: # URL-style username+password try: begin = data.rindex('://', 0, sep_search_end) except ValueError: # No url style in the data, check for ssh style in the # rest of the string begin = 0 # Search for separator try: sep = data.index(':', begin + 3, end) except ValueError: # No separator; choices: if begin == 0: # Searched the whole string so there's no password # here. Return the remaining data output.insert(0, data[0:begin]) break # Search for a different beginning of the password field. sep_search_end = begin continue if sep: # Password was found; remove it. output.insert(0, data[end:prev_begin]) output.insert(0, '********') output.insert(0, data[begin:sep + 1]) prev_begin = begin output = ''.join(output) if no_log_values: output = remove_values(output, no_log_values) return output def _load_params(): ''' read the modules parameters and store them globally. This function may be needed for certain very dynamic custom modules which want to process the parameters that are being handed the module. Since this is so closely tied to the implementation of modules we cannot guarantee API stability for it (it may change between versions) however we will try not to break it gratuitously. It is certainly more future-proof to call this function and consume its outputs than to implement the logic inside it as a copy in your own code. ''' global _ANSIBLE_ARGS if _ANSIBLE_ARGS is not None: buffer = _ANSIBLE_ARGS else: # debug overrides to read args from file or cmdline # Avoid tracebacks when locale is non-utf8 # We control the args and we pass them as utf8 if len(sys.argv) > 1: if os.path.isfile(sys.argv[1]): fd = open(sys.argv[1], 'rb') buffer = fd.read() fd.close() else: buffer = sys.argv[1] if PY3: buffer = buffer.encode('utf-8', errors='surrogateescape') # default case, read from stdin else: if PY2: buffer = sys.stdin.read() else: buffer = sys.stdin.buffer.read() _ANSIBLE_ARGS = buffer try: params = json.loads(buffer.decode('utf-8')) except ValueError: # This helper used too early for fail_json to work. print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. 
Unable to figure out what parameters were passed", "failed": true}') sys.exit(1) if PY2: params = json_dict_unicode_to_bytes(params) try: return params['ANSIBLE_MODULE_ARGS'] except KeyError: # This helper does not have access to fail_json so we have to print # json output on our own. print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", ' '"failed": true}') sys.exit(1) def missing_required_lib(library, reason=None, url=None): hostname = platform.node() msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable) if reason: msg += " This is required %s." % reason if url: msg += " See %s for more info." % url msg += (" Please read the module documentation and install it in the appropriate location." " If the required library is installed, but Ansible is using the wrong Python interpreter," " please consult the documentation on ansible_python_interpreter") return msg class AnsibleModule(object): def __init__(self, argument_spec, bypass_checks=False, no_log=False, mutually_exclusive=None, required_together=None, required_one_of=None, add_file_common_args=False, supports_check_mode=False, required_if=None, required_by=None): ''' Common code for quickly building an ansible module in Python (although you can write modules with anything that can return JSON). See :ref:`developing_modules_general` for a general introduction and :ref:`developing_program_flow_modules` for more detailed explanation. ''' self._name = os.path.basename(__file__) # initialize name until we can parse from options self.argument_spec = argument_spec self.supports_check_mode = supports_check_mode self.check_mode = False self.bypass_checks = bypass_checks self.no_log = no_log self.mutually_exclusive = mutually_exclusive self.required_together = required_together self.required_one_of = required_one_of self.required_if = required_if self.required_by = required_by self.cleanup_files = [] self._debug = False self._diff = False self._socket_path = None self._shell = None self._syslog_facility = 'LOG_USER' self._verbosity = 0 # May be used to set modifications to the environment for any # run_command invocation self.run_command_environ_update = {} self._clean = {} self._string_conversion_action = '' self.aliases = {} self._legal_inputs = [] self._options_context = list() self._tmpdir = None if add_file_common_args: for k, v in FILE_COMMON_ARGUMENTS.items(): if k not in self.argument_spec: self.argument_spec[k] = v # Save parameter values that should never be logged self.no_log_values = set() # check the locale as set by the current environment, and reset to # a known valid (LANG=C) if it's an invalid/unavailable locale self._check_locale() self._load_params() self._set_internal_properties() self.validator = ModuleArgumentSpecValidator(self.argument_spec, self.mutually_exclusive, self.required_together, self.required_one_of, self.required_if, self.required_by, ) self.validation_result = self.validator.validate(self.params) self.params.update(self.validation_result.validated_parameters) self.no_log_values.update(self.validation_result._no_log_values) try: error = self.validation_result.errors[0] except IndexError: error = None # Fail for validation errors, even in check mode if error: msg = self.validation_result.errors.msg if isinstance(error, UnsupportedError): msg = "Unsupported parameters for ({name}) {kind}: {msg}".format(name=self._name, kind='module', msg=msg) self.fail_json(msg=msg) 
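# Editor's illustration (hedged; not part of the upstream file): a minimal
# consumer of the constructor above. The parameter names are hypothetical,
# but the shape follows the documented AnsibleModule pattern.
from ansible.module_utils.basic import AnsibleModule

def example_main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
            enabled=dict(type='bool', default=True),
        ),
        supports_check_mode=True,
    )
    # argument validation and no_log collection already ran inside __init__
    module.exit_json(changed=False, name=module.params['name'])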
if self.check_mode and not self.supports_check_mode: self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name) # This is for backwards compatibility only. self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS if not self.no_log: self._log_invocation() # selinux state caching self._selinux_enabled = None self._selinux_mls_enabled = None self._selinux_initial_context = None # finally, make sure we're in a sane working dir self._set_cwd() @property def tmpdir(self): # if _ansible_tmpdir was not set and we have a remote_tmp, # the module needs to create it and clean it up once finished. # otherwise we create our own module tmp dir from the system defaults if self._tmpdir is None: basedir = None if self._remote_tmp is not None: basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp)) if basedir is not None and not os.path.exists(basedir): try: os.makedirs(basedir, mode=0o700) except (OSError, IOError) as e: self.warn("Unable to use %s as temporary directory, " "failing back to system: %s" % (basedir, to_native(e))) basedir = None else: self.warn("Module remote_tmp %s did not exist and was " "created with a mode of 0700, this may cause" " issues when running as another user. To " "avoid this, create the remote_tmp dir with " "the correct permissions manually" % basedir) basefile = "ansible-moduletmp-%s-" % time.time() try: tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir) except (OSError, IOError) as e: self.fail_json( msg="Failed to create remote module tmp path at dir %s " "with prefix %s: %s" % (basedir, basefile, to_native(e)) ) if not self._keep_remote_files: atexit.register(shutil.rmtree, tmpdir) self._tmpdir = tmpdir return self._tmpdir def warn(self, warning): warn(warning) self.log('[WARNING] %s' % warning) def deprecate(self, msg, version=None, date=None, collection_name=None): if version is not None and date is not None: raise AssertionError("implementation error -- version and date must not both be set") deprecate(msg, version=version, date=date, collection_name=collection_name) # For compatibility, we accept that neither version nor date is set, # and treat that the same as if version would haven been set if date is not None: self.log('[DEPRECATION WARNING] %s %s' % (msg, date)) else: self.log('[DEPRECATION WARNING] %s %s' % (msg, version)) def load_file_common_arguments(self, params, path=None): ''' many modules deal with files, this encapsulates common options that the file module accepts such that it is directly available to all modules and they can share code. Allows to overwrite the path/dest module argument by providing path. 
''' if path is None: path = params.get('path', params.get('dest', None)) if path is None: return {} else: path = os.path.expanduser(os.path.expandvars(path)) b_path = to_bytes(path, errors='surrogate_or_strict') # if the path is a symlink, and we're following links, get # the target of the link instead for testing if params.get('follow', False) and os.path.islink(b_path): b_path = os.path.realpath(b_path) path = to_native(b_path) mode = params.get('mode', None) owner = params.get('owner', None) group = params.get('group', None) # selinux related options seuser = params.get('seuser', None) serole = params.get('serole', None) setype = params.get('setype', None) selevel = params.get('selevel', None) secontext = [seuser, serole, setype] if self.selinux_mls_enabled(): secontext.append(selevel) default_secontext = self.selinux_default_context(path) for i in range(len(default_secontext)): if i is not None and secontext[i] == '_default': secontext[i] = default_secontext[i] attributes = params.get('attributes', None) return dict( path=path, mode=mode, owner=owner, group=group, seuser=seuser, serole=serole, setype=setype, selevel=selevel, secontext=secontext, attributes=attributes, ) # Detect whether using selinux that is MLS-aware. # While this means you can set the level/range with # selinux.lsetfilecon(), it may or may not mean that you # will get the selevel as part of the context returned # by selinux.lgetfilecon(). def selinux_mls_enabled(self): if self._selinux_mls_enabled is None: self._selinux_mls_enabled = HAVE_SELINUX and selinux.is_selinux_mls_enabled() == 1 return self._selinux_mls_enabled def selinux_enabled(self): if self._selinux_enabled is None: self._selinux_enabled = HAVE_SELINUX and selinux.is_selinux_enabled() == 1 return self._selinux_enabled # Determine whether we need a placeholder for selevel/mls def selinux_initial_context(self): if self._selinux_initial_context is None: self._selinux_initial_context = [None, None, None] if self.selinux_mls_enabled(): self._selinux_initial_context.append(None) return self._selinux_initial_context # If selinux fails to find a default, return an array of None def selinux_default_context(self, path, mode=0): context = self.selinux_initial_context() if not self.selinux_enabled(): return context try: ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode) except OSError: return context if ret[0] == -1: return context # Limit split to 4 because the selevel, the last in the list, # may contain ':' characters context = ret[1].split(':', 3) return context def selinux_context(self, path): context = self.selinux_initial_context() if not self.selinux_enabled(): return context try: ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict')) except OSError as e: if e.errno == errno.ENOENT: self.fail_json(path=path, msg='path %s does not exist' % path) else: self.fail_json(path=path, msg='failed to retrieve selinux context') if ret[0] == -1: return context # Limit split to 4 because the selevel, the last in the list, # may contain ':' characters context = ret[1].split(':', 3) return context def user_and_group(self, path, expand=True): b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) st = os.lstat(b_path) uid = st.st_uid gid = st.st_gid return (uid, gid) def find_mount_point(self, path): ''' Takes a path and returns it's mount point :param path: a string type with a filesystem path :returns: the path to the mount point as a text type ''' b_path 
= os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict')) while not os.path.ismount(b_path): b_path = os.path.dirname(b_path) return to_text(b_path, errors='surrogate_or_strict') def is_special_selinux_path(self, path): """ Returns a tuple containing (True, selinux_context) if the given path is on a NFS or other 'special' fs mount point, otherwise the return will be (False, None). """ try: f = open('/proc/mounts', 'r') mount_data = f.readlines() f.close() except Exception: return (False, None) path_mount_point = self.find_mount_point(path) for line in mount_data: (device, mount_point, fstype, options, rest) = line.split(' ', 4) if to_bytes(path_mount_point) == to_bytes(mount_point): for fs in self._selinux_special_fs: if fs in fstype: special_context = self.selinux_context(path_mount_point) return (True, special_context) return (False, None) def set_default_selinux_context(self, path, changed): if not self.selinux_enabled(): return changed context = self.selinux_default_context(path) return self.set_context_if_different(path, context, False) def set_context_if_different(self, path, context, changed, diff=None): if not self.selinux_enabled(): return changed if self.check_file_absent_if_check_mode(path): return True cur_context = self.selinux_context(path) new_context = list(cur_context) # Iterate over the current context instead of the # argument context, which may have selevel. (is_special_se, sp_context) = self.is_special_selinux_path(path) if is_special_se: new_context = sp_context else: for i in range(len(cur_context)): if len(context) > i: if context[i] is not None and context[i] != cur_context[i]: new_context[i] = context[i] elif context[i] is None: new_context[i] = cur_context[i] if cur_context != new_context: if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['secontext'] = cur_context if 'after' not in diff: diff['after'] = {} diff['after']['secontext'] = new_context try: if self.check_mode: return True rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context)) except OSError as e: self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e), new_context=new_context, cur_context=cur_context, input_was=context) if rc != 0: self.fail_json(path=path, msg='set selinux context failed') changed = True return changed def set_owner_if_different(self, path, owner, changed, diff=None, expand=True): if owner is None: return changed b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) if self.check_file_absent_if_check_mode(b_path): return True orig_uid, orig_gid = self.user_and_group(b_path, expand) try: uid = int(owner) except ValueError: try: uid = pwd.getpwnam(owner).pw_uid except KeyError: path = to_text(b_path) self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner) if orig_uid != uid: if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['owner'] = orig_uid if 'after' not in diff: diff['after'] = {} diff['after']['owner'] = uid if self.check_mode: return True try: os.lchown(b_path, uid, -1) except (IOError, OSError) as e: path = to_text(b_path) self.fail_json(path=path, msg='chown failed: %s' % (to_text(e))) changed = True return changed def set_group_if_different(self, path, group, changed, diff=None, expand=True): if group is None: return changed b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) if 
self.check_file_absent_if_check_mode(b_path): return True orig_uid, orig_gid = self.user_and_group(b_path, expand) try: gid = int(group) except ValueError: try: gid = grp.getgrnam(group).gr_gid except KeyError: path = to_text(b_path) self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group) if orig_gid != gid: if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['group'] = orig_gid if 'after' not in diff: diff['after'] = {} diff['after']['group'] = gid if self.check_mode: return True try: os.lchown(b_path, -1, gid) except OSError: path = to_text(b_path) self.fail_json(path=path, msg='chgrp failed') changed = True return changed def set_mode_if_different(self, path, mode, changed, diff=None, expand=True): if mode is None: return changed b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) if self.check_file_absent_if_check_mode(b_path): return True path_stat = os.lstat(b_path) if not isinstance(mode, int): try: mode = int(mode, 8) except Exception: try: mode = self._symbolic_mode_to_octal(path_stat, mode) except Exception as e: path = to_text(b_path) self.fail_json(path=path, msg="mode must be in octal or symbolic form", details=to_native(e)) if mode != stat.S_IMODE(mode): # prevent mode from having extra info orbeing invalid long number path = to_text(b_path) self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode) prev_mode = stat.S_IMODE(path_stat.st_mode) if prev_mode != mode: if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['mode'] = '0%03o' % prev_mode if 'after' not in diff: diff['after'] = {} diff['after']['mode'] = '0%03o' % mode if self.check_mode: return True # FIXME: comparison against string above will cause this to be executed # every time try: if hasattr(os, 'lchmod'): os.lchmod(b_path, mode) else: if not os.path.islink(b_path): os.chmod(b_path, mode) else: # Attempt to set the perms of the symlink but be # careful not to change the perms of the underlying # file while trying underlying_stat = os.stat(b_path) os.chmod(b_path, mode) new_underlying_stat = os.stat(b_path) if underlying_stat.st_mode != new_underlying_stat.st_mode: os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode)) except OSError as e: if os.path.islink(b_path) and e.errno in ( errno.EACCES, # can't access symlink in sticky directory (stat) errno.EPERM, # can't set mode on symbolic links (chmod) errno.EROFS, # can't set mode on read-only filesystem ): pass elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links pass else: raise except Exception as e: path = to_text(b_path) self.fail_json(path=path, msg='chmod failed', details=to_native(e), exception=traceback.format_exc()) path_stat = os.lstat(b_path) new_mode = stat.S_IMODE(path_stat.st_mode) if new_mode != prev_mode: changed = True return changed def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True): if attributes is None: return changed b_path = to_bytes(path, errors='surrogate_or_strict') if expand: b_path = os.path.expanduser(os.path.expandvars(b_path)) if self.check_file_absent_if_check_mode(b_path): return True existing = self.get_file_attributes(b_path, include_version=False) attr_mod = '=' if attributes.startswith(('-', '+')): attr_mod = attributes[0] attributes = attributes[1:] if existing.get('attr_flags', '') != attributes or attr_mod == '-': attrcmd = self.get_bin_path('chattr') if 
attrcmd: attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path] changed = True if diff is not None: if 'before' not in diff: diff['before'] = {} diff['before']['attributes'] = existing.get('attr_flags') if 'after' not in diff: diff['after'] = {} diff['after']['attributes'] = '%s%s' % (attr_mod, attributes) if not self.check_mode: try: rc, out, err = self.run_command(attrcmd) if rc != 0 or err: raise Exception("Error while setting attributes: %s" % (out + err)) except Exception as e: self.fail_json(path=to_text(b_path), msg='chattr failed', details=to_native(e), exception=traceback.format_exc()) return changed def get_file_attributes(self, path, include_version=True): output = {} attrcmd = self.get_bin_path('lsattr', False) if attrcmd: flags = '-vd' if include_version else '-d' attrcmd = [attrcmd, flags, path] try: rc, out, err = self.run_command(attrcmd) if rc == 0: res = out.split() attr_flags_idx = 0 if include_version: attr_flags_idx = 1 output['version'] = res[0].strip() output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip() output['attributes'] = format_attributes(output['attr_flags']) except Exception: pass return output @classmethod def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode): """ This enables symbolic chmod string parsing as stated in the chmod man-page This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X" """ new_mode = stat.S_IMODE(path_stat.st_mode) # Now parse all symbolic modes for mode in symbolic_mode.split(','): # Per single mode. This always contains a '+', '-' or '=' # Split it on that permlist = MODE_OPERATOR_RE.split(mode) # And find all the operators opers = MODE_OPERATOR_RE.findall(mode) # The user(s) where it's all about is the first element in the # 'permlist' list. Take that and remove it from the list. # An empty user or 'a' means 'all'. users = permlist.pop(0) use_umask = (users == '') if users == 'a' or users == '': users = 'ugo' # Check if there are illegal characters in the user list # They can end up in 'users' because they are not split if USERS_RE.match(users): raise ValueError("bad symbolic permission for mode: %s" % mode) # Now we have two list of equal length, one contains the requested # permissions and one with the corresponding operators. 
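# Editor's illustration (hedged): the net effect of a single symbolic clause
# such as 'u+rw', mirroring the operator handling implemented below.
import stat

def example_apply_u_plus_rw(current_mode):
    mode_to_apply = stat.S_IRUSR | stat.S_IWUSR  # the 'rw' bits for user 'u'
    return current_mode | mode_to_apply          # '+' ORs the requested bits in

assert example_apply_u_plus_rw(0o400) == 0o600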
for idx, perms in enumerate(permlist): # Check if there are illegal characters in the permissions if PERMS_RE.match(perms): raise ValueError("bad symbolic permission for mode: %s" % mode) for user in users: mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask) new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode) return new_mode @staticmethod def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode): if operator == '=': if user == 'u': mask = stat.S_IRWXU | stat.S_ISUID elif user == 'g': mask = stat.S_IRWXG | stat.S_ISGID elif user == 'o': mask = stat.S_IRWXO | stat.S_ISVTX # mask out u, g, or o permissions from current_mode and apply new permissions inverse_mask = mask ^ PERM_BITS new_mode = (current_mode & inverse_mask) | mode_to_apply elif operator == '+': new_mode = current_mode | mode_to_apply elif operator == '-': new_mode = current_mode - (current_mode & mode_to_apply) return new_mode @staticmethod def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask): prev_mode = stat.S_IMODE(path_stat.st_mode) is_directory = stat.S_ISDIR(path_stat.st_mode) has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0 apply_X_permission = is_directory or has_x_permissions # Get the umask, if the 'user' part is empty, the effect is as if (a) were # given, but bits that are set in the umask are not affected. # We also need the "reversed umask" for masking umask = os.umask(0) os.umask(umask) rev_umask = umask ^ PERM_BITS # Permission bits constants documented at: # http://docs.python.org/2/library/stat.html#stat.S_ISUID if apply_X_permission: X_perms = { 'u': {'X': stat.S_IXUSR}, 'g': {'X': stat.S_IXGRP}, 'o': {'X': stat.S_IXOTH}, } else: X_perms = { 'u': {'X': 0}, 'g': {'X': 0}, 'o': {'X': 0}, } user_perms_to_modes = { 'u': { 'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR, 'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR, 'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR, 's': stat.S_ISUID, 't': 0, 'u': prev_mode & stat.S_IRWXU, 'g': (prev_mode & stat.S_IRWXG) << 3, 'o': (prev_mode & stat.S_IRWXO) << 6}, 'g': { 'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP, 'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP, 'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP, 's': stat.S_ISGID, 't': 0, 'u': (prev_mode & stat.S_IRWXU) >> 3, 'g': prev_mode & stat.S_IRWXG, 'o': (prev_mode & stat.S_IRWXO) << 3}, 'o': { 'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH, 'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH, 'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH, 's': 0, 't': stat.S_ISVTX, 'u': (prev_mode & stat.S_IRWXU) >> 6, 'g': (prev_mode & stat.S_IRWXG) >> 3, 'o': prev_mode & stat.S_IRWXO}, } # Insert X_perms into user_perms_to_modes for key, value in X_perms.items(): user_perms_to_modes[key].update(value) def or_reduce(mode, perm): return mode | user_perms_to_modes[user][perm] return reduce(or_reduce, perms, 0) def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True): # set modes owners and context as needed changed = self.set_context_if_different( file_args['path'], file_args['secontext'], changed, diff ) changed = self.set_owner_if_different( file_args['path'], file_args['owner'], changed, diff, expand ) changed = self.set_group_if_different( file_args['path'], file_args['group'], changed, diff, expand ) changed = self.set_mode_if_different( file_args['path'], file_args['mode'], 
changed, diff, expand ) changed = self.set_attributes_if_different( file_args['path'], file_args['attributes'], changed, diff, expand ) return changed def check_file_absent_if_check_mode(self, file_path): return self.check_mode and not os.path.exists(file_path) def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True): return self.set_fs_attributes_if_different(file_args, changed, diff, expand) def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True): return self.set_fs_attributes_if_different(file_args, changed, diff, expand) def add_path_info(self, kwargs): ''' for results that are files, supplement the info about the file in the return path with stats about the file path. ''' path = kwargs.get('path', kwargs.get('dest', None)) if path is None: return kwargs b_path = to_bytes(path, errors='surrogate_or_strict') if os.path.exists(b_path): (uid, gid) = self.user_and_group(path) kwargs['uid'] = uid kwargs['gid'] = gid try: user = pwd.getpwuid(uid)[0] except KeyError: user = str(uid) try: group = grp.getgrgid(gid)[0] except KeyError: group = str(gid) kwargs['owner'] = user kwargs['group'] = group st = os.lstat(b_path) kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE]) # secontext not yet supported if os.path.islink(b_path): kwargs['state'] = 'link' elif os.path.isdir(b_path): kwargs['state'] = 'directory' elif os.stat(b_path).st_nlink > 1: kwargs['state'] = 'hard' else: kwargs['state'] = 'file' if self.selinux_enabled(): kwargs['secontext'] = ':'.join(self.selinux_context(path)) kwargs['size'] = st[stat.ST_SIZE] return kwargs def _check_locale(self): ''' Uses the locale module to test the currently set locale (per the LANG and LC_CTYPE environment settings) ''' try: # setting the locale to '' uses the default locale # as it would be returned by locale.getdefaultlocale() locale.setlocale(locale.LC_ALL, '') except locale.Error: # fallback to the 'C' locale, which may cause unicode # issues but is preferable to simply failing because # of an unknown locale locale.setlocale(locale.LC_ALL, 'C') os.environ['LANG'] = 'C' os.environ['LC_ALL'] = 'C' os.environ['LC_MESSAGES'] = 'C' except Exception as e: self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" % to_native(e), exception=traceback.format_exc()) def _set_internal_properties(self, argument_spec=None, module_parameters=None): if argument_spec is None: argument_spec = self.argument_spec if module_parameters is None: module_parameters = self.params for k in PASS_VARS: # handle setting internal properties from internal ansible vars param_key = '_ansible_%s' % k if param_key in module_parameters: if k in PASS_BOOLS: setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key])) else: setattr(self, PASS_VARS[k][0], module_parameters[param_key]) # clean up internal top level params: if param_key in self.params: del self.params[param_key] else: # use defaults if not already set if not hasattr(self, PASS_VARS[k][0]): setattr(self, PASS_VARS[k][0], PASS_VARS[k][1]) def safe_eval(self, value, locals=None, include_exceptions=False): return safe_eval(value, locals, include_exceptions) def _load_params(self): ''' read the input and set the params attribute. This method is for backwards compatibility. The guts of the function were moved out in 2.1 so that custom modules could read the parameters. 
''' # debug overrides to read args from file or cmdline self.params = _load_params() def _log_to_syslog(self, msg): if HAS_SYSLOG: try: module = 'ansible-%s' % self._name facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER) syslog.openlog(str(module), 0, facility) syslog.syslog(syslog.LOG_INFO, msg) except TypeError as e: self.fail_json( msg='Failed to log to syslog (%s). To proceed anyway, ' 'disable syslog logging by setting no_target_syslog ' 'to True in your Ansible config.' % to_native(e), exception=traceback.format_exc(), msg_to_log=msg, ) def debug(self, msg): if self._debug: self.log('[debug] %s' % msg) def log(self, msg, log_args=None): if not self.no_log: if log_args is None: log_args = dict() module = 'ansible-%s' % self._name if isinstance(module, binary_type): module = module.decode('utf-8', 'replace') # 6655 - allow for accented characters if not isinstance(msg, (binary_type, text_type)): raise TypeError("msg should be a string (got %s)" % type(msg)) # We want journal to always take text type # syslog takes bytes on py2, text type on py3 if isinstance(msg, binary_type): journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values) else: # TODO: surrogateescape is a danger here on Py3 journal_msg = remove_values(msg, self.no_log_values) if PY3: syslog_msg = journal_msg else: syslog_msg = journal_msg.encode('utf-8', 'replace') if has_journal: journal_args = [("MODULE", os.path.basename(__file__))] for arg in log_args: journal_args.append((arg.upper(), str(log_args[arg]))) try: if HAS_SYSLOG: # If syslog_facility specified, it needs to convert # from the facility name to the facility code, and # set it as SYSLOG_FACILITY argument of journal.send() facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER) >> 3 journal.send(MESSAGE=u"%s %s" % (module, journal_msg), SYSLOG_FACILITY=facility, **dict(journal_args)) else: journal.send(MESSAGE=u"%s %s" % (module, journal_msg), **dict(journal_args)) except IOError: # fall back to syslog since logging to journal failed self._log_to_syslog(syslog_msg) else: self._log_to_syslog(syslog_msg) def _log_invocation(self): ''' log that ansible ran the module ''' # TODO: generalize a separate log function and make log_invocation use it # Sanitize possible password argument when logging. log_args = dict() for param in self.params: canon = self.aliases.get(param, param) arg_opts = self.argument_spec.get(canon, {}) no_log = arg_opts.get('no_log', None) # try to proactively capture password/passphrase fields if no_log is None and PASSWORD_MATCH.search(param): log_args[param] = 'NOT_LOGGING_PASSWORD' self.warn('Module did not set no_log for %s' % param) elif self.boolean(no_log): log_args[param] = 'NOT_LOGGING_PARAMETER' else: param_val = self.params[param] if not isinstance(param_val, (text_type, binary_type)): param_val = str(param_val) elif isinstance(param_val, text_type): param_val = param_val.encode('utf-8') log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values) msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()] if msg: msg = 'Invoked with %s' % ' '.join(msg) else: msg = 'Invoked' self.log(msg, log_args=log_args) def _set_cwd(self): try: cwd = os.getcwd() if not os.access(cwd, os.F_OK | os.R_OK): raise Exception() return cwd except Exception: # we don't have access to the cwd, probably because of sudo. 
# Try and move to a neutral location to prevent errors for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]: try: if os.access(cwd, os.F_OK | os.R_OK): os.chdir(cwd) return cwd except Exception: pass # we won't error here, as it may *not* be a problem, # and we don't want to break modules unnecessarily return None def get_bin_path(self, arg, required=False, opt_dirs=None): ''' Find system executable in PATH. :param arg: The executable to find. :param required: if executable is not found and required is ``True``, fail_json :param opt_dirs: optional list of directories to search in addition to ``PATH`` :returns: if found return full path; otherwise return None ''' bin_path = None try: bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs) except ValueError as e: if required: self.fail_json(msg=to_text(e)) else: return bin_path return bin_path def boolean(self, arg): '''Convert the argument to a boolean''' if arg is None: return arg try: return boolean(arg) except TypeError as e: self.fail_json(msg=to_native(e)) def jsonify(self, data): try: return jsonify(data) except UnicodeError as e: self.fail_json(msg=to_text(e)) def from_json(self, data): return json.loads(data) def add_cleanup_file(self, path): if path not in self.cleanup_files: self.cleanup_files.append(path) def do_cleanup_files(self): for path in self.cleanup_files: self.cleanup(path) def _return_formatted(self, kwargs): self.add_path_info(kwargs) if 'invocation' not in kwargs: kwargs['invocation'] = {'module_args': self.params} if 'warnings' in kwargs: if isinstance(kwargs['warnings'], list): for w in kwargs['warnings']: self.warn(w) else: self.warn(kwargs['warnings']) warnings = get_warning_messages() if warnings: kwargs['warnings'] = warnings if 'deprecations' in kwargs: if isinstance(kwargs['deprecations'], list): for d in kwargs['deprecations']: if isinstance(d, SEQUENCETYPE) and len(d) == 2: self.deprecate(d[0], version=d[1]) elif isinstance(d, Mapping): self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'), collection_name=d.get('collection_name')) else: self.deprecate(d) # pylint: disable=ansible-deprecated-no-version else: self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version deprecations = get_deprecation_messages() if deprecations: kwargs['deprecations'] = deprecations kwargs = remove_values(kwargs, self.no_log_values) print('\n%s' % self.jsonify(kwargs)) def exit_json(self, **kwargs): ''' return from the module, without error ''' self.do_cleanup_files() self._return_formatted(kwargs) sys.exit(0) def fail_json(self, msg, **kwargs): ''' return from the module, with an error message ''' kwargs['failed'] = True kwargs['msg'] = msg # Add traceback if debug or high verbosity and it is missing # NOTE: Badly named as exception, it really always has been a traceback if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3): if PY2: # On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\ ''.join(traceback.format_tb(sys.exc_info()[2])) else: kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2])) self.do_cleanup_files() self._return_formatted(kwargs) sys.exit(1) def fail_on_missing_params(self, required_params=None): if not required_params: return try: check_missing_parameters(self.params, required_params) except TypeError as e: self.fail_json(msg=to_native(e)) def 
digest_from_file(self, filename, algorithm): ''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. ''' b_filename = to_bytes(filename, errors='surrogate_or_strict') if not os.path.exists(b_filename): return None if os.path.isdir(b_filename): self.fail_json(msg="attempted to take checksum of directory: %s" % filename) # preserve old behaviour where the third parameter was a hash algorithm object if hasattr(algorithm, 'hexdigest'): digest_method = algorithm else: try: digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]() except KeyError: self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" % (filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS))) blocksize = 64 * 1024 infile = open(os.path.realpath(b_filename), 'rb') block = infile.read(blocksize) while block: digest_method.update(block) block = infile.read(blocksize) infile.close() return digest_method.hexdigest() def md5(self, filename): ''' Return MD5 hex digest of local file using digest_from_file(). Do not use this function unless you have no other choice for: 1) Optional backwards compatibility 2) Compatibility with a third party protocol This function will not work on systems complying with FIPS-140-2. Most uses of this function can use the module.sha1 function instead. ''' if 'md5' not in AVAILABLE_HASH_ALGORITHMS: raise ValueError('MD5 not available. Possibly running in FIPS mode') return self.digest_from_file(filename, 'md5') def sha1(self, filename): ''' Return SHA1 hex digest of local file using digest_from_file(). ''' return self.digest_from_file(filename, 'sha1') def sha256(self, filename): ''' Return SHA-256 hex digest of local file using digest_from_file(). ''' return self.digest_from_file(filename, 'sha256') def backup_local(self, fn): '''make a date-marked backup of the specified file, return True or False on success or failure''' backupdest = '' if os.path.exists(fn): # backups named basename.PID.YYYY-MM-DD@HH:MM:SS~ ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time())) backupdest = '%s.%s.%s' % (fn, os.getpid(), ext) try: self.preserved_copy(fn, backupdest) except (shutil.Error, IOError) as e: self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e))) return backupdest def cleanup(self, tmpfile): if os.path.exists(tmpfile): try: os.unlink(tmpfile) except OSError as e: sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e))) def preserved_copy(self, src, dest): """Copy a file with preserved ownership, permissions and context""" # shutil.copy2(src, dst) # Similar to shutil.copy(), but metadata is copied as well - in fact, # this is just shutil.copy() followed by copystat(). This is similar # to the Unix command cp -p. # # shutil.copystat(src, dst) # Copy the permission bits, last access time, last modification time, # and flags from src to dst. The file contents, owner, and group are # unaffected. src and dst are path names given as strings. 
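# Editor's illustration (hedged): why the md5() helper above guards against
# FIPS mode. hashlib refuses MD5 construction on FIPS-enabled systems, which
# is exactly what the AVAILABLE_HASH_ALGORITHMS probe at the top of this file
# detects; callers should prefer a FIPS-approved algorithm.
import hashlib

def example_fips_safe_digest(path, algorithm='sha256'):
    digest = hashlib.new(algorithm)  # raises ValueError for 'md5' under FIPS
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(64 * 1024), b''):
            digest.update(block)
    return digest.hexdigest()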
shutil.copy2(src, dest) # Set the context if self.selinux_enabled(): context = self.selinux_context(src) self.set_context_if_different(dest, context, False) # chown it try: dest_stat = os.stat(src) tmp_stat = os.stat(dest) if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid): os.chown(dest, dest_stat.st_uid, dest_stat.st_gid) except OSError as e: if e.errno != errno.EPERM: raise # Set the attributes current_attribs = self.get_file_attributes(src, include_version=False) current_attribs = current_attribs.get('attr_flags', '') self.set_attributes_if_different(dest, current_attribs, True) def atomic_move(self, src, dest, unsafe_writes=False): '''atomically move src to dest, copying attributes from dest, returns true on success it uses os.rename to ensure this as it is an atomic operation, rest of the function is to work around limitations, corner cases and ensure selinux context is saved if possible''' context = None dest_stat = None b_src = to_bytes(src, errors='surrogate_or_strict') b_dest = to_bytes(dest, errors='surrogate_or_strict') if os.path.exists(b_dest): try: dest_stat = os.stat(b_dest) # copy mode and ownership os.chmod(b_src, dest_stat.st_mode & PERM_BITS) os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid) # try to copy flags if possible if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'): try: os.chflags(b_src, dest_stat.st_flags) except OSError as e: for err in 'EOPNOTSUPP', 'ENOTSUP': if hasattr(errno, err) and e.errno == getattr(errno, err): break else: raise except OSError as e: if e.errno != errno.EPERM: raise if self.selinux_enabled(): context = self.selinux_context(dest) else: if self.selinux_enabled(): context = self.selinux_default_context(dest) creating = not os.path.exists(b_dest) try: # Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic. os.rename(b_src, b_dest) except (IOError, OSError) as e: if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]: # only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied) # and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc()) else: # Use bytes here. In the shippable CI, this fails with # a UnicodeError with surrogateescape'd strings for an unknown # reason (doesn't happen in a local Ubuntu16.04 VM) b_dest_dir = os.path.dirname(b_dest) b_suffix = os.path.basename(b_dest) error_msg = None tmp_dest_name = None try: tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp', dir=b_dest_dir, suffix=b_suffix) except (OSError, IOError) as e: error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e)) except TypeError: # We expect that this is happening because python3.4.x and # below can't handle byte strings in mkstemp(). # Traceback would end in something like: # file = _os.path.join(dir, pre + name + suf) # TypeError: can't concat bytes to str error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. 
' 'Please use Python2.x or Python3.5 or greater.')
                finally:
                    if error_msg:
                        if unsafe_writes:
                            self._unsafe_writes(b_src, b_dest)
                        else:
                            self.fail_json(msg=error_msg, exception=traceback.format_exc())

                if tmp_dest_name:
                    b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')

                    try:
                        try:
                            # close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
                            os.close(tmp_dest_fd)

                            # leaves tmp file behind when sudo and not root
                            try:
                                shutil.move(b_src, b_tmp_dest_name)
                            except OSError:
                                # cleanup will happen by 'rm' of tmpdir
                                # copy2 will preserve some metadata
                                shutil.copy2(b_src, b_tmp_dest_name)

                            if self.selinux_enabled():
                                self.set_context_if_different(
                                    b_tmp_dest_name, context, False)
                            try:
                                tmp_stat = os.stat(b_tmp_dest_name)
                                if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
                                    os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
                            except OSError as e:
                                if e.errno != errno.EPERM:
                                    raise
                            try:
                                os.rename(b_tmp_dest_name, b_dest)
                            except (shutil.Error, OSError, IOError) as e:
                                if unsafe_writes and e.errno == errno.EBUSY:
                                    self._unsafe_writes(b_tmp_dest_name, b_dest)
                                else:
                                    self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
                                                       (src, dest, b_tmp_dest_name, to_native(e)),
                                                   exception=traceback.format_exc())
                        except (shutil.Error, OSError, IOError) as e:
                            if unsafe_writes:
                                self._unsafe_writes(b_src, b_dest)
                            else:
                                self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
                                               exception=traceback.format_exc())
                    finally:
                        self.cleanup(b_tmp_dest_name)

        if creating:
            # make sure the file has the correct permissions
            # based on the current value of umask
            umask = os.umask(0)
            os.umask(umask)
            os.chmod(b_dest, DEFAULT_PERM & ~umask)
            try:
                os.chown(b_dest, os.geteuid(), os.getegid())
            except OSError:
                # We're okay with trying our best here. If the user is not
                # root (or old Unices) they won't be able to chown.
pass if self.selinux_enabled(): # rename might not preserve context self.set_context_if_different(dest, context, False) def _unsafe_writes(self, src, dest): # sadly there are some situations where we cannot ensure atomicity, but only if # the user insists and we get the appropriate error we update the file unsafely try: out_dest = in_src = None try: out_dest = open(dest, 'wb') in_src = open(src, 'rb') shutil.copyfileobj(in_src, out_dest) finally: # assuring closed files in 2.4 compatible way if out_dest: out_dest.close() if in_src: in_src.close() except (shutil.Error, OSError, IOError) as e: self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)), exception=traceback.format_exc()) def _clean_args(self, args): if not self._clean: # create a printable version of the command for use in reporting later, # which strips out things like passwords from the args list to_clean_args = args if PY2: if isinstance(args, text_type): to_clean_args = to_bytes(args) else: if isinstance(args, binary_type): to_clean_args = to_text(args) if isinstance(args, (text_type, binary_type)): to_clean_args = shlex.split(to_clean_args) clean_args = [] is_passwd = False for arg in (to_native(a) for a in to_clean_args): if is_passwd: is_passwd = False clean_args.append('********') continue if PASSWD_ARG_RE.match(arg): sep_idx = arg.find('=') if sep_idx > -1: clean_args.append('%s=********' % arg[:sep_idx]) continue else: is_passwd = True arg = heuristic_log_sanitize(arg, self.no_log_values) clean_args.append(arg) self._clean = ' '.join(shlex_quote(arg) for arg in clean_args) return self._clean def _restore_signal_handlers(self): # Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses. if PY2 and sys.platform != 'win32': signal.signal(signal.SIGPIPE, signal.SIG_DFL) def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict', expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True): ''' Execute a command, returns rc, stdout, and stderr. :arg args: is the command to run * If args is a list, the command will be run with shell=False. * If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False * If args is a string and use_unsafe_shell=True it runs with shell=True. :kw check_rc: Whether to call fail_json in case of non zero RC. Default False :kw close_fds: See documentation for subprocess.Popen(). Default True :kw executable: See documentation for subprocess.Popen(). Default None :kw data: If given, information to write to the stdin of the command :kw binary_data: If False, append a newline to the data. Default False :kw path_prefix: If given, additional path to find the command in. This adds to the PATH environment variable so helper commands in the same directory can also be found :kw cwd: If given, working directory to run the command inside :kw use_unsafe_shell: See `args` parameter. Default False :kw prompt_regex: Regex string (not a compiled regex) which can be used to detect prompts in the stdout which would otherwise cause the execution to hang (especially if no input data is specified) :kw environ_update: dictionary to *update* os.environ with :kw umask: Umask to be used when running the command. 
Default None :kw encoding: Since we return native strings, on python3 we need to know the encoding to use to transform from bytes to text. If you want to always get bytes back, use encoding=None. The default is "utf-8". This does not affect transformation of strings given as args. :kw errors: Since we return native strings, on python3 we need to transform stdout and stderr from bytes to text. If the bytes are undecodable in the ``encoding`` specified, then use this error handler to deal with them. The default is ``surrogate_or_strict`` which means that the bytes will be decoded using the surrogateescape error handler if available (available on all python3 versions we support) otherwise a UnicodeError traceback will be raised. This does not affect transformations of strings given as args. :kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument dictates whether ``~`` is expanded in paths and environment variables are expanded before running the command. When ``True`` a string such as ``$SHELL`` will be expanded regardless of escaping. When ``False`` and ``use_unsafe_shell=False`` no path or variable expansion will be done. :kw pass_fds: When running on Python 3 this argument dictates which file descriptors should be passed to an underlying ``Popen`` constructor. On Python 2, this will set ``close_fds`` to False. :kw before_communicate_callback: This function will be called after ``Popen`` object will be created but before communicating to the process. (``Popen`` object will be passed to callback as a first argument) :kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd`` (non-existent or not a directory) should be ignored or should raise an exception. :returns: A 3-tuple of return code (integer), stdout (native string), and stderr (native string). On python2, stdout and stderr are both byte strings. On python3, stdout and stderr are text strings converted according to the encoding and errors parameters. If you want byte strings on python3, use encoding=None to turn decoding to text off. ''' # used by clean args later on self._clean = None if not isinstance(args, (list, binary_type, text_type)): msg = "Argument 'args' to run_command must be list or string" self.fail_json(rc=257, cmd=args, msg=msg) shell = False if use_unsafe_shell: # stringify args for unsafe/direct shell usage if isinstance(args, list): args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args]) else: args = to_bytes(args, errors='surrogate_or_strict') # not set explicitly, check if set by controller if executable: executable = to_bytes(executable, errors='surrogate_or_strict') args = [executable, b'-c', args] elif self._shell not in (None, '/bin/sh'): args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args] else: shell = True else: # ensure args are a list if isinstance(args, (binary_type, text_type)): # On python2.6 and below, shlex has problems with text type # On python3, shlex needs a text type. 
if PY2: args = to_bytes(args, errors='surrogate_or_strict') elif PY3: args = to_text(args, errors='surrogateescape') args = shlex.split(args) # expand ``~`` in paths, and all environment vars if expand_user_and_vars: args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None] else: args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None] prompt_re = None if prompt_regex: if isinstance(prompt_regex, text_type): if PY3: prompt_regex = to_bytes(prompt_regex, errors='surrogateescape') elif PY2: prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict') try: prompt_re = re.compile(prompt_regex, re.MULTILINE) except re.error: self.fail_json(msg="invalid prompt regular expression given to run_command") rc = 0 msg = None st_in = None # Manipulate the environ we'll send to the new process old_env_vals = {} # We can set this from both an attribute and per call for key, val in self.run_command_environ_update.items(): old_env_vals[key] = os.environ.get(key, None) os.environ[key] = val if environ_update: for key, val in environ_update.items(): old_env_vals[key] = os.environ.get(key, None) os.environ[key] = val if path_prefix: path = os.environ.get('PATH', '') old_env_vals['PATH'] = path if path: os.environ['PATH'] = "%s:%s" % (path_prefix, path) else: os.environ['PATH'] = path_prefix # If using test-module.py and explode, the remote lib path will resemble: # /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py # If using ansible or ansible-playbook with a remote system: # /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py # Clean out python paths set by ansiballz if 'PYTHONPATH' in os.environ: pypaths = os.environ['PYTHONPATH'].split(':') pypaths = [x for x in pypaths if not x.endswith('/ansible_modlib.zip') and not x.endswith('/debug_dir')] os.environ['PYTHONPATH'] = ':'.join(pypaths) if not os.environ['PYTHONPATH']: del os.environ['PYTHONPATH'] if data: st_in = subprocess.PIPE kwargs = dict( executable=executable, shell=shell, close_fds=close_fds, stdin=st_in, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=self._restore_signal_handlers, ) if PY3 and pass_fds: kwargs["pass_fds"] = pass_fds elif PY2 and pass_fds: kwargs['close_fds'] = False # store the pwd prev_dir = os.getcwd() # make sure we're in the right working directory if cwd: if os.path.isdir(cwd): cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict') kwargs['cwd'] = cwd try: os.chdir(cwd) except (OSError, IOError) as e: self.fail_json(rc=e.errno, msg="Could not chdir to %s, %s" % (cwd, to_native(e)), exception=traceback.format_exc()) elif not ignore_invalid_cwd: self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd) old_umask = None if umask: old_umask = os.umask(umask) try: if self._debug: self.log('Executing: ' + self._clean_args(args)) cmd = subprocess.Popen(args, **kwargs) if before_communicate_callback: before_communicate_callback(cmd) # the communication logic here is essentially taken from that # of the _communicate() function in ssh.py stdout = b'' stderr = b'' try: selector = selectors.DefaultSelector() except (IOError, OSError): # Failed to detect default selector for the given platform # Select PollSelector which is supported by major platforms selector = selectors.PollSelector() selector.register(cmd.stdout, selectors.EVENT_READ) selector.register(cmd.stderr, selectors.EVENT_READ) if os.name == 'posix': fcntl.fcntl(cmd.stdout.fileno(), 
fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK) fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK) if data: if not binary_data: data += '\n' if isinstance(data, text_type): data = to_bytes(data) cmd.stdin.write(data) cmd.stdin.close() while True: events = selector.select(1) for key, event in events: b_chunk = key.fileobj.read() if b_chunk == b(''): selector.unregister(key.fileobj) if key.fileobj == cmd.stdout: stdout += b_chunk elif key.fileobj == cmd.stderr: stderr += b_chunk # if we're checking for prompts, do it now if prompt_re: if prompt_re.search(stdout) and not data: if encoding: stdout = to_native(stdout, encoding=encoding, errors=errors) return (257, stdout, "A prompt was encountered while running a command, but no input data was specified") # only break out if no pipes are left to read or # the pipes are completely read and # the process is terminated if (not events or not selector.get_map()) and cmd.poll() is not None: break # No pipes are left to read but process is not yet terminated # Only then it is safe to wait for the process to be finished # NOTE: Actually cmd.poll() is always None here if no selectors are left elif not selector.get_map() and cmd.poll() is None: cmd.wait() # The process is terminated. Since no pipes to read from are # left, there is no need to call select() again. break cmd.stdout.close() cmd.stderr.close() selector.close() rc = cmd.returncode except (OSError, IOError) as e: self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e))) self.fail_json(rc=e.errno, stdout=b'', stderr=b'', msg=to_native(e), cmd=self._clean_args(args)) except Exception as e: self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc()))) self.fail_json(rc=257, stdout=b'', stderr=b'', msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args)) # Restore env settings for key, val in old_env_vals.items(): if val is None: del os.environ[key] else: os.environ[key] = val if old_umask: os.umask(old_umask) if rc != 0 and check_rc: msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values) self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg) # reset the pwd os.chdir(prev_dir) if encoding is not None: return (rc, to_native(stdout, encoding=encoding, errors=errors), to_native(stderr, encoding=encoding, errors=errors)) return (rc, stdout, stderr) def append_to_file(self, filename, str): filename = os.path.expandvars(os.path.expanduser(filename)) fh = open(filename, 'a') fh.write(str) fh.close() def bytes_to_human(self, size): return bytes_to_human(size) # for backwards compatibility pretty_bytes = bytes_to_human def human_to_bytes(self, number, isbits=False): return human_to_bytes(number, isbits) # # Backwards compat # # In 2.0, moved from inside the module to the toplevel is_executable = is_executable @staticmethod def get_buffer_size(fd): try: # 1032 == FZ_GETPIPE_SZ buffer_size = fcntl.fcntl(fd, 1032) except Exception: try: # not as exact as above, but should be good enough for most platforms that fail the previous call buffer_size = select.PIPE_BUF except Exception: buffer_size = 9000 # use sane default JIC return buffer_size def get_module_path(): return os.path.dirname(os.path.realpath(__file__))
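The `md5()` helper in the module above is the FIPS-relevant piece of this file: `digest_from_file()` resolves the algorithm through `AVAILABLE_HASH_ALGORITHMS` and streams the file in 64 KiB blocks, while `md5()` raises `ValueError` when the algorithm is missing, which is exactly what happens on FIPS-140-2 systems. A minimal standalone sketch of the same pattern using only `hashlib` (the `fips_safe_digest` name is illustrative, not part of the Ansible API):

```python
import hashlib

# Mirror of AVAILABLE_HASH_ALGORITHMS: probe each algorithm up front, since
# FIPS-enabled OpenSSL builds make hashlib.md5() fail at construction time.
AVAILABLE = {}
for name in ('md5', 'sha1', 'sha256', 'sha512'):
    try:
        getattr(hashlib, name)()  # constructing the digest is what fails in FIPS mode
        AVAILABLE[name] = getattr(hashlib, name)
    except (AttributeError, ValueError):
        pass

def fips_safe_digest(filename, algorithm='sha256', blocksize=64 * 1024):
    """Stream a file through the named digest, like digest_from_file()."""
    if algorithm not in AVAILABLE:
        raise ValueError('%s not available. Possibly running in FIPS mode' % algorithm)
    digest = AVAILABLE[algorithm]()
    with open(filename, 'rb') as infile:
        block = infile.read(blocksize)
        while block:
            digest.update(block)
            block = infile.read(blocksize)
    return digest.hexdigest()
```

On a FIPS host `fips_safe_digest(path, 'md5')` fails fast with the same kind of message `module.md5()` produces, while `'sha256'` keeps working.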
closed
ansible/ansible
https://github.com/ansible/ansible
74,601
Case sensitive ini lookup option
### Summary When doing a lookup on a java properties file, case sensitivity is being ignored. Since java properties can be used case sensitive this RFE is to have the option to allow a case sensitive lookup on a properties file. test.properties ``` fubar=”lower” FUBAR=”upper” ``` ini_prop_test.yml ```yaml --- - hosts: localhost connection: local gather_facts: false vars: prop_item: "{{ lookup('ini', 'fubar file=test.properties type=properties' )}}" tasks: - debug: var: prop_item - debug: msg: "{{ item }}" with_ini: - '.* file=test.properties type=properties re=yes' ... ``` Ansible 2.7.9 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* ok: [localhost] => { "prop_item": "”upper”" } TASK [debug] ******************************************************************************************* ok: [localhost] => (item=”upper”) => { "msg": "”upper”" } PLAY RECAP ********************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` Ansible 2.9.18 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ lookup('ini', 'fubar file=test.properties type=properties' )}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'ini'. Error was a <class 'configparser.DuplicateOptionError'>, original message: While reading from '<???>' [line 3]: option 'fubar' in section 'java_properties' already exists"} PLAY RECAP ********************************************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Issue Type Feature Idea ### Component Name ini.py ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
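The behaviour change between the two Ansible versions in this report is really a Python 2 vs Python 3 `configparser` difference: the old `ConfigParser` silently let the last duplicate win (hence `"upper"`), while the Python 3 parser is strict by default and also lower-cases every option name through its `optionxform` hook, so `FUBAR` collides with `fubar`. A minimal standalone reproduction, independent of the lookup plugin:

```python
import configparser
from io import StringIO

PROPS = u"[java_properties]\nfubar=lower\nFUBAR=upper\n"

strict = configparser.ConfigParser()  # strict=True is the Python 3 default
try:
    strict.read_file(StringIO(PROPS))
except configparser.DuplicateOptionError as e:
    print('strict parser: %s' % e)  # option 'fubar' ... already exists

# Disabling the lower-casing transform keeps both keys distinct, so the
# duplicate check never fires and lookups become case sensitive.
sensitive = configparser.ConfigParser()
sensitive.optionxform = str  # identity transform instead of str.lower
sensitive.read_file(StringIO(PROPS))
print(sensitive.get('java_properties', 'fubar'))  # -> lower
print(sensitive.get('java_properties', 'FUBAR'))  # -> upper
```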
https://github.com/ansible/ansible/issues/74601
https://github.com/ansible/ansible/pull/74629
829c9c3d46b98fe1135e3c5a457714807b9e5544
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
2021-05-06T19:19:53Z
python
2021-05-12T20:57:02Z
changelogs/fragments/74601-ini-lookup-handle-errors.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74,601
Case sensitive ini lookup option
### Summary When doing a lookup on a java properties file, case sensitivity is being ignored. Since java properties can be used case sensitive this RFE is to have the option to allow a case sensitive lookup on a properties file. test.properties ``` fubar=”lower” FUBAR=”upper” ``` ini_prop_test.yml ```yaml --- - hosts: localhost connection: local gather_facts: false vars: prop_item: "{{ lookup('ini', 'fubar file=test.properties type=properties' )}}" tasks: - debug: var: prop_item - debug: msg: "{{ item }}" with_ini: - '.* file=test.properties type=properties re=yes' ... ``` Ansible 2.7.9 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* ok: [localhost] => { "prop_item": "”upper”" } TASK [debug] ******************************************************************************************* ok: [localhost] => (item=”upper”) => { "msg": "”upper”" } PLAY RECAP ********************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` Ansible 2.9.18 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ lookup('ini', 'fubar file=test.properties type=properties' )}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'ini'. Error was a <class 'configparser.DuplicateOptionError'>, original message: While reading from '<???>' [line 3]: option 'fubar' in section 'java_properties' already exists"} PLAY RECAP ********************************************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Issue Type Feature Idea ### Component Name ini.py ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74601
https://github.com/ansible/ansible/pull/74629
829c9c3d46b98fe1135e3c5a457714807b9e5544
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
2021-05-06T19:19:53Z
python
2021-05-12T20:57:02Z
lib/ansible/plugins/lookup/ini.py
# (c) 2015, Yannig Perre <yannig.perre(at)gmail.com> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ name: ini author: Yannig Perre (!UNKNOWN) <yannig.perre(at)gmail.com> version_added: "2.0" short_description: read data from a ini file description: - "The ini lookup reads the contents of a file in INI format C(key1=value1). This plugin retrieves the value on the right side after the equal sign C('=') of a given section C([section])." - "You can also read a property file which - in this case - does not contain section." options: _terms: description: The key(s) to look up required: True type: description: Type of the file. 'properties' refers to the Java properties files. default: 'ini' choices: ['ini', 'properties'] file: description: Name of the file to load. default: 'ansible.ini' section: default: global description: Section where to lookup the key. re: default: False type: boolean description: Flag to indicate if the key supplied is a regexp. encoding: default: utf-8 description: Text encoding to use. default: description: Return value if the key is not in the ini file. default: '' """ EXAMPLES = """ - debug: msg="User in integration is {{ lookup('ini', 'user', section='integration', file='users.ini') }}" - debug: msg="User in production is {{ lookup('ini', 'user', section='production', file='users.ini') }}" - debug: msg="user.name is {{ lookup('ini', 'user.name', type='properties', file='user.properties') }}" - debug: msg: "{{ item }}" loop: "{{q('ini', '.*', section='section1', file='test.ini', re=True)}}" """ RETURN = """ _raw: description: - value(s) of the key(s) in the ini file type: list elements: str """ import os import re from io import StringIO from collections import defaultdict from ansible.errors import AnsibleLookupError, AnsibleOptionsError from ansible.module_utils.six.moves import configparser from ansible.module_utils._text import to_bytes, to_text, to_native from ansible.module_utils.common._collections_compat import MutableSequence from ansible.plugins.lookup import LookupBase def _parse_params(term, paramvals): '''Safely split parameter term to preserve spaces''' # TODO: deprecate this method valid_keys = paramvals.keys() params = defaultdict(lambda: '') # TODO: check kv_parser to see if it can handle spaces this same way keys = [] thiskey = 'key' # initialize for 'lookup item' for idp, phrase in enumerate(term.split()): # update current key if used if '=' in phrase: for k in valid_keys: if ('%s=' % k) in phrase: thiskey = k # if first term or key does not exist if idp == 0 or not params[thiskey]: params[thiskey] = phrase keys.append(thiskey) else: # append to existing key params[thiskey] += ' ' + phrase # return list of values return [params[x] for x in keys] class LookupModule(LookupBase): def get_value(self, key, section, dflt, is_regexp): # Retrieve all values from a section using a regexp if is_regexp: return [v for k, v in self.cp.items(section) if re.match(key, k)] value = None # Retrieve a single value try: value = self.cp.get(section, key) except configparser.NoOptionError: return dflt return value def run(self, terms, variables=None, **kwargs): self.set_options(var_options=variables, direct=kwargs) paramvals = self.get_options() self.cp = configparser.ConfigParser() ret = [] for term in terms: key = term # parameters specified? 
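            # Inline options ('key file=... section=...') predate proper keyword
            # arguments; _parse_params() above re-joins values that contain
            # spaces, and the first bare token (no '=') becomes the lookup key.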
if '=' in term or ' ' in term.strip(): self._deprecate_inline_kv() params = _parse_params(term, paramvals) try: updated_key = False for param in params: if '=' in param: name, value = param.split('=') if name not in paramvals: raise AnsibleLookupError('%s is not a valid option.' % name) paramvals[name] = value elif key == term: # only take first, this format never supported multiple keys inline key = param updated_key = True except ValueError as e: # bad params passed raise AnsibleLookupError("Could not use '%s' from '%s': %s" % (param, params, to_native(e)), orig_exc=e) if not updated_key: raise AnsibleOptionsError("No key to lookup was provided as first term with in string inline options: %s" % term) # only passed options in inline string # TODO: look to use cache to avoid redoing this for every term if they use same file # Retrieve file path path = self.find_file_in_search_path(variables, 'files', paramvals['file']) # Create StringIO later used to parse ini config = StringIO() # Special case for java properties if paramvals['type'] == "properties": config.write(u'[java_properties]\n') paramvals['section'] = 'java_properties' # Open file using encoding contents, show_data = self._loader._get_file_contents(path) contents = to_text(contents, errors='surrogate_or_strict', encoding=paramvals['encoding']) config.write(contents) config.seek(0, os.SEEK_SET) self.cp.readfp(config) var = self.get_value(key, paramvals['section'], paramvals['default'], paramvals['re']) if var is not None: if isinstance(var, MutableSequence): for v in var: ret.append(v) else: ret.append(var) return ret
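`run()` above instantiates a plain `configparser.ConfigParser()`, so it inherits both the case folding and the strict duplicate check the issue complains about. A hedged sketch of how the parser construction could be made configurable; the `case_sensitive` and `allow_duplicates` option names are illustrative, not necessarily what the eventual fix shipped:

```python
import configparser

def build_parser(case_sensitive=False, allow_duplicates=False):
    # strict=False makes configparser tolerate duplicate sections/options
    # (the last occurrence wins) instead of raising DuplicateOptionError.
    cp = configparser.ConfigParser(strict=not allow_duplicates)
    if case_sensitive:
        # optionxform lower-cases option names by default; the identity
        # transform preserves whatever case the file uses.
        cp.optionxform = str
    return cp
```

With `case_sensitive=True`, the `fubar`/`FUBAR` pair from the issue's `test.properties` resolve independently instead of colliding.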
closed
ansible/ansible
https://github.com/ansible/ansible
74,601
Case sensitive ini lookup option
### Summary When doing a lookup on a java properties file, case sensitivity is being ignored. Since java properties can be used case sensitive this RFE is to have the option to allow a case sensitive lookup on a properties file. test.properties ``` fubar=”lower” FUBAR=”upper” ``` ini_prop_test.yml ```yaml --- - hosts: localhost connection: local gather_facts: false vars: prop_item: "{{ lookup('ini', 'fubar file=test.properties type=properties' )}}" tasks: - debug: var: prop_item - debug: msg: "{{ item }}" with_ini: - '.* file=test.properties type=properties re=yes' ... ``` Ansible 2.7.9 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* ok: [localhost] => { "prop_item": "”upper”" } TASK [debug] ******************************************************************************************* ok: [localhost] => (item=”upper”) => { "msg": "”upper”" } PLAY RECAP ********************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` Ansible 2.9.18 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ lookup('ini', 'fubar file=test.properties type=properties' )}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'ini'. Error was a <class 'configparser.DuplicateOptionError'>, original message: While reading from '<???>' [line 3]: option 'fubar' in section 'java_properties' already exists"} PLAY RECAP ********************************************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Issue Type Feature Idea ### Component Name ini.py ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74601
https://github.com/ansible/ansible/pull/74629
829c9c3d46b98fe1135e3c5a457714807b9e5544
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
2021-05-06T19:19:53Z
python
2021-05-12T20:57:02Z
test/integration/targets/lookup_ini/duplicate.ini
closed
ansible/ansible
https://github.com/ansible/ansible
74,601
Case sensitive ini lookup option
### Summary When doing a lookup on a java properties file, case sensitivity is being ignored. Since java properties can be used case sensitive this RFE is to have the option to allow a case sensitive lookup on a properties file. test.properties ``` fubar=”lower” FUBAR=”upper” ``` ini_prop_test.yml ```yaml --- - hosts: localhost connection: local gather_facts: false vars: prop_item: "{{ lookup('ini', 'fubar file=test.properties type=properties' )}}" tasks: - debug: var: prop_item - debug: msg: "{{ item }}" with_ini: - '.* file=test.properties type=properties re=yes' ... ``` Ansible 2.7.9 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* ok: [localhost] => { "prop_item": "”upper”" } TASK [debug] ******************************************************************************************* ok: [localhost] => (item=”upper”) => { "msg": "”upper”" } PLAY RECAP ********************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` Ansible 2.9.18 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ lookup('ini', 'fubar file=test.properties type=properties' )}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'ini'. Error was a <class 'configparser.DuplicateOptionError'>, original message: While reading from '<???>' [line 3]: option 'fubar' in section 'java_properties' already exists"} PLAY RECAP ********************************************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Issue Type Feature Idea ### Component Name ini.py ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74601
https://github.com/ansible/ansible/pull/74629
829c9c3d46b98fe1135e3c5a457714807b9e5544
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
2021-05-06T19:19:53Z
python
2021-05-12T20:57:02Z
test/integration/targets/lookup_ini/duplicate_case_check.ini
closed
ansible/ansible
https://github.com/ansible/ansible
74,601
Case sensitive ini lookup option
### Summary When doing a lookup on a java properties file, case sensitivity is being ignored. Since java properties can be used case sensitive this RFE is to have the option to allow a case sensitive lookup on a properties file. test.properties ``` fubar=”lower” FUBAR=”upper” ``` ini_prop_test.yml ```yaml --- - hosts: localhost connection: local gather_facts: false vars: prop_item: "{{ lookup('ini', 'fubar file=test.properties type=properties' )}}" tasks: - debug: var: prop_item - debug: msg: "{{ item }}" with_ini: - '.* file=test.properties type=properties re=yes' ... ``` Ansible 2.7.9 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* ok: [localhost] => { "prop_item": "”upper”" } TASK [debug] ******************************************************************************************* ok: [localhost] => (item=”upper”) => { "msg": "”upper”" } PLAY RECAP ********************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` Ansible 2.9.18 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ lookup('ini', 'fubar file=test.properties type=properties' )}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'ini'. Error was a <class 'configparser.DuplicateOptionError'>, original message: While reading from '<???>' [line 3]: option 'fubar' in section 'java_properties' already exists"} PLAY RECAP ********************************************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Issue Type Feature Idea ### Component Name ini.py ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74601
https://github.com/ansible/ansible/pull/74629
829c9c3d46b98fe1135e3c5a457714807b9e5544
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
2021-05-06T19:19:53Z
python
2021-05-12T20:57:02Z
test/integration/targets/lookup_ini/inventory
closed
ansible/ansible
https://github.com/ansible/ansible
74,601
Case sensitive ini lookup option
### Summary When doing a lookup on a java properties file, case sensitivity is being ignored. Since java properties can be used case sensitive this RFE is to have the option to allow a case sensitive lookup on a properties file. test.properties ``` fubar=”lower” FUBAR=”upper” ``` ini_prop_test.yml ```yaml --- - hosts: localhost connection: local gather_facts: false vars: prop_item: "{{ lookup('ini', 'fubar file=test.properties type=properties' )}}" tasks: - debug: var: prop_item - debug: msg: "{{ item }}" with_ini: - '.* file=test.properties type=properties re=yes' ... ``` Ansible 2.7.9 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* ok: [localhost] => { "prop_item": "”upper”" } TASK [debug] ******************************************************************************************* ok: [localhost] => (item=”upper”) => { "msg": "”upper”" } PLAY RECAP ********************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` Ansible 2.9.18 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ lookup('ini', 'fubar file=test.properties type=properties' )}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'ini'. Error was a <class 'configparser.DuplicateOptionError'>, original message: While reading from '<???>' [line 3]: option 'fubar' in section 'java_properties' already exists"} PLAY RECAP ********************************************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Issue Type Feature Idea ### Component Name ini.py ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74601
https://github.com/ansible/ansible/pull/74629
829c9c3d46b98fe1135e3c5a457714807b9e5544
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
2021-05-06T19:19:53Z
python
2021-05-12T20:57:02Z
test/integration/targets/lookup_ini/runme.sh
#!/usr/bin/env bash set -eux ansible-playbook test_lookup_properties.yml -i ../../inventory -v "$@"
closed
ansible/ansible
https://github.com/ansible/ansible
74,601
Case sensitive ini lookup option
### Summary When doing a lookup on a java properties file, case sensitivity is being ignored. Since java properties can be used case sensitive this RFE is to have the option to allow a case sensitive lookup on a properties file. test.properties ``` fubar=”lower” FUBAR=”upper” ``` ini_prop_test.yml ```yaml --- - hosts: localhost connection: local gather_facts: false vars: prop_item: "{{ lookup('ini', 'fubar file=test.properties type=properties' )}}" tasks: - debug: var: prop_item - debug: msg: "{{ item }}" with_ini: - '.* file=test.properties type=properties re=yes' ... ``` Ansible 2.7.9 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* ok: [localhost] => { "prop_item": "”upper”" } TASK [debug] ******************************************************************************************* ok: [localhost] => (item=”upper”) => { "msg": "”upper”" } PLAY RECAP ********************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` Ansible 2.9.18 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ lookup('ini', 'fubar file=test.properties type=properties' )}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'ini'. Error was a <class 'configparser.DuplicateOptionError'>, original message: While reading from '<???>' [line 3]: option 'fubar' in section 'java_properties' already exists"} PLAY RECAP ********************************************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Issue Type Feature Idea ### Component Name ini.py ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74601
https://github.com/ansible/ansible/pull/74629
829c9c3d46b98fe1135e3c5a457714807b9e5544
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
2021-05-06T19:19:53Z
python
2021-05-12T20:57:02Z
test/integration/targets/lookup_ini/test_errors.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74,601
Case sensitive ini lookup option
### Summary When doing a lookup on a java properties file, case sensitivity is being ignored. Since java properties can be used case sensitive this RFE is to have the option to allow a case sensitive lookup on a properties file. test.properties ``` fubar=”lower” FUBAR=”upper” ``` ini_prop_test.yml ```yaml --- - hosts: localhost connection: local gather_facts: false vars: prop_item: "{{ lookup('ini', 'fubar file=test.properties type=properties' )}}" tasks: - debug: var: prop_item - debug: msg: "{{ item }}" with_ini: - '.* file=test.properties type=properties re=yes' ... ``` Ansible 2.7.9 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* ok: [localhost] => { "prop_item": "”upper”" } TASK [debug] ******************************************************************************************* ok: [localhost] => (item=”upper”) => { "msg": "”upper”" } PLAY RECAP ********************************************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 ``` Ansible 2.9.18 Result: ``` PLAY [localhost] *************************************************************************************** TASK [debug] ******************************************************************************************* fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ lookup('ini', 'fubar file=test.properties type=properties' )}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'ini'. Error was a <class 'configparser.DuplicateOptionError'>, original message: While reading from '<???>' [line 3]: option 'fubar' in section 'java_properties' already exists"} PLAY RECAP ********************************************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` ### Issue Type Feature Idea ### Component Name ini.py ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ```yaml (paste below) ``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74601
https://github.com/ansible/ansible/pull/74629
829c9c3d46b98fe1135e3c5a457714807b9e5544
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
2021-05-06T19:19:53Z
python
2021-05-12T20:57:02Z
test/integration/targets/lookup_ini/test_lookup_properties.yml
--- - name: "Lookup test" hosts: "localhost" # connection: local tasks: - name: "read properties value" set_fact: test1: "{{lookup('ini', 'value1 type=properties file=lookup.properties')}}" test2: "{{lookup('ini', 'value2', type='properties', file='lookup.properties')}}" test_dot: "{{lookup('ini', 'value.dot', type='properties', file='lookup.properties')}}" field_with_space: "{{lookup('ini', 'field.with.space type=properties file=lookup.properties')}}" - assert: that: "{{item}} is defined" with_items: [ 'test1', 'test2', 'test_dot', 'field_with_space' ] - name: "read ini value" set_fact: value1_global: "{{lookup('ini', 'value1', section='global', file='lookup.ini')}}" value2_global: "{{lookup('ini', 'value2', section='global', file='lookup.ini')}}" value1_section1: "{{lookup('ini', 'value1', section='section1', file='lookup.ini')}}" field_with_unicode: "{{lookup('ini', 'unicode', section='global', file='lookup.ini')}}" - debug: var={{item}} with_items: [ 'value1_global', 'value2_global', 'value1_section1', 'field_with_unicode' ] - assert: that: - "field_with_unicode == 'été indien où à château français ïîôû'" - name: "read ini value from iso8859-15 file" set_fact: field_with_unicode: "{{lookup('ini', 'field_with_unicode section=global encoding=iso8859-1 file=lookup-8859-15.ini')}}" - assert: that: - "field_with_unicode == 'été indien où à château français ïîôû'" - name: "read ini value with section and regexp" set_fact: value_section: "{{lookup('ini', 'value[1-2] section=value_section file=lookup.ini re=true')}}" other_section: "{{lookup('ini', 'other[1-2] section=other_section file=lookup.ini re=true')}}" - debug: var={{item}} with_items: [ 'value_section', 'other_section' ] - assert: that: - "value_section == '1,2'" - "other_section == '4,5'" - name: "Reading unknown value" set_fact: unknown: "{{lookup('ini', 'unknown default=unknown section=section1 file=lookup.ini')}}" - debug: var=unknown - assert: that: - 'unknown == "unknown"' - name: "Looping over section section1" debug: msg="{{item}}" with_ini: value[1-2] section=section1 file=lookup.ini re=true register: _ - assert: that: - '_.results.0.item == "section1/value1"' - '_.results.1.item == "section1/value2"' - name: "Looping over section value_section" debug: msg="{{item}}" with_ini: value[1-2] section=value_section file=lookup.ini re=true register: _ - assert: that: - '_.results.0.item == "1"' - '_.results.1.item == "2"' - debug: msg="{{item}}" with_ini: value[1-2] section=section1 file=lookup.ini re=true register: _ - assert: that: - '_.results.0.item == "section1/value1"' - '_.results.1.item == "section1/value2"' - name: capture bad behaviour block: - name: mix options type and push key out of order debug: msg="{{ lookup('ini', 'file=lookup.ini', 'value1', section='value_section') }}" register: bad_mojo ignore_errors: true - name: verify assert: that: - bad_mojo is failed - '"No key to lookup was provided as first term with in string inline option" in bad_mojo.msg'
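The `re=true` cases in this playbook exercise the regexp branch of `get_value()`, which simply filters a section's items by matching each option *name* against the pattern. A small standalone illustration of that branch:

```python
import re
import configparser
from io import StringIO

cp = configparser.ConfigParser()
cp.read_file(StringIO(u"[value_section]\nvalue1=1\nvalue2=2\nother=9\n"))

# Same logic as LookupModule.get_value() with is_regexp=True:
matches = [v for k, v in cp.items('value_section') if re.match('value[1-2]', k)]
print(matches)  # ['1', '2'] -- the lookup joins these into the '1,2' the playbook asserts
```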
closed
ansible/ansible
https://github.com/ansible/ansible
74,659
Update vendored `six` for Python 3.10
##### SUMMARY There are several options for resolving this: * Vendoring a second copy of `six` that implements `find_spec` and `exec_module`, and using it for newer Pythons. * Vendoring the latest version of `six` and updating it as needed to work back through Python 2.6. See also: https://github.com/ansible/ansible/issues/72952 ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME lib/ansible/module_utils/six/__init__.py
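For context on the summary above: starting with Python 3.10 the import system warns when a meta path finder only offers the legacy `find_module()`/`load_module()` pair, which is all the vendored `_SixMetaPathImporter` implements. A hedged sketch of the PEP 451 methods the issue asks for, layered on the existing loader (it assumes the importer keeps its `known_modules` mapping and `load_module()` method):

```python
from importlib.util import spec_from_loader

class _SixMetaPathImporter(object):
    # ... existing known_modules bookkeeping and load_module() elided ...

    def find_spec(self, fullname, path=None, target=None):
        # PEP 451 counterpart of find_module(): return a ModuleSpec naming
        # this object as the loader, or None if we don't know the module.
        if fullname in self.known_modules:
            return spec_from_loader(fullname, self)
        return None

    def create_module(self, spec):
        # A loader may hand back a ready-made module; reusing the legacy
        # load_module() keeps the MovedModule resolution logic in one place.
        return self.load_module(spec.name)

    def exec_module(self, module):
        # load_module() already populated the module, so nothing left to run.
        pass
```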
https://github.com/ansible/ansible/issues/74659
https://github.com/ansible/ansible/pull/74680
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
d6e28e68599e703c153914610152cf4492851eb3
2021-05-11T16:14:18Z
python
2021-05-12T21:26:48Z
changelogs/fragments/74659-update-six.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74,659
Update vendored `six` for Python 3.10
##### SUMMARY There are several options for resolving this: * Vendoring a second copy of `six` that implements `find_spec` and `exec_module`, and using it for newer Pythons. * Vendoring the latest version of `six` and updating it as needed to work back through Python 2.6. See also: https://github.com/ansible/ansible/issues/72952 ##### ISSUE TYPE Feature Idea ##### COMPONENT NAME lib/ansible/module_utils/six/__init__.py
https://github.com/ansible/ansible/issues/74659
https://github.com/ansible/ansible/pull/74680
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
d6e28e68599e703c153914610152cf4492851eb3
2021-05-11T16:14:18Z
python
2021-05-12T21:26:48Z
lib/ansible/module_utils/six/__init__.py
# This code is strewn with things that are not defined on Python3 (unicode, # long, etc) but they are all shielded by version checks. This is also an # upstream vendored file that we're not going to modify on our own # pylint: disable=undefined-variable # Copyright (c) 2010-2019 Benjamin Peterson # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. """Utilities for writing code that runs on Python 2 and 3""" from __future__ import absolute_import import functools import itertools import operator import sys import types # The following makes it easier for us to script updates of the bundled code. It is not part of # upstream six # CANT_UPDATE due to py2.6 drop: https://github.com/benjaminp/six/pull/314 _BUNDLED_METADATA = {"pypi_name": "six", "version": "1.13.0"} __author__ = "Benjamin Peterson <[email protected]>" __version__ = "1.13.0" # Useful for very coarse version differentiation. PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 PY34 = sys.version_info[0:2] >= (3, 4) if PY3: string_types = str, integer_types = int, class_types = type, text_type = str binary_type = bytes MAXSIZE = sys.maxsize else: string_types = basestring, integer_types = (int, long) class_types = (type, types.ClassType) text_type = unicode binary_type = str if sys.platform.startswith("java"): # Jython always uses 32 bits. MAXSIZE = int((1 << 31) - 1) else: # It's possible to have sizeof(long) != sizeof(Py_ssize_t). class X(object): def __len__(self): return 1 << 31 try: len(X()) except OverflowError: # 32-bit MAXSIZE = int((1 << 31) - 1) else: # 64-bit MAXSIZE = int((1 << 63) - 1) del X def _add_doc(func, doc): """Add documentation to a function.""" func.__doc__ = doc def _import_module(name): """Import module, returning the module after the last dot.""" __import__(name) return sys.modules[name] class _LazyDescr(object): def __init__(self, name): self.name = name def __get__(self, obj, tp): result = self._resolve() setattr(obj, self.name, result) # Invokes __set__. try: # This is a bit ugly, but it avoids running this again by # removing this descriptor. 
delattr(obj.__class__, self.name) except AttributeError: pass return result class MovedModule(_LazyDescr): def __init__(self, name, old, new=None): super(MovedModule, self).__init__(name) if PY3: if new is None: new = name self.mod = new else: self.mod = old def _resolve(self): return _import_module(self.mod) def __getattr__(self, attr): _module = self._resolve() value = getattr(_module, attr) setattr(self, attr, value) return value class _LazyModule(types.ModuleType): def __init__(self, name): super(_LazyModule, self).__init__(name) self.__doc__ = self.__class__.__doc__ def __dir__(self): attrs = ["__doc__", "__name__"] attrs += [attr.name for attr in self._moved_attributes] return attrs # Subclasses should override this _moved_attributes = [] class MovedAttribute(_LazyDescr): def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): super(MovedAttribute, self).__init__(name) if PY3: if new_mod is None: new_mod = name self.mod = new_mod if new_attr is None: if old_attr is None: new_attr = name else: new_attr = old_attr self.attr = new_attr else: self.mod = old_mod if old_attr is None: old_attr = name self.attr = old_attr def _resolve(self): module = _import_module(self.mod) return getattr(module, self.attr) class _SixMetaPathImporter(object): """ A meta path importer to import six.moves and its submodules. This class implements a PEP302 finder and loader. It should be compatible with Python 2.5 and all existing versions of Python3 """ def __init__(self, six_module_name): self.name = six_module_name self.known_modules = {} def _add_module(self, mod, *fullnames): for fullname in fullnames: self.known_modules[self.name + "." + fullname] = mod def _get_module(self, fullname): return self.known_modules[self.name + "." + fullname] def find_module(self, fullname, path=None): if fullname in self.known_modules: return self return None def __get_module(self, fullname): try: return self.known_modules[fullname] except KeyError: raise ImportError("This loader does not know module " + fullname) def load_module(self, fullname): try: # in case of a reload return sys.modules[fullname] except KeyError: pass mod = self.__get_module(fullname) if isinstance(mod, MovedModule): mod = mod._resolve() else: mod.__loader__ = self sys.modules[fullname] = mod return mod def is_package(self, fullname): """ Return true, if the named module is a package. 
We need this method to get correct spec objects with Python 3.4 (see PEP451) """ return hasattr(self.__get_module(fullname), "__path__") def get_code(self, fullname): """Return None Required, if is_package is implemented""" self.__get_module(fullname) # eventually raises ImportError return None get_source = get_code # same as get_code _importer = _SixMetaPathImporter(__name__) class _MovedItems(_LazyModule): """Lazy loading of moved objects""" __path__ = [] # mark as package _moved_attributes = [ MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"), MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), MovedAttribute("intern", "__builtin__", "sys"), MovedAttribute("map", "itertools", "builtins", "imap", "map"), MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), MovedAttribute("getoutput", "commands", "subprocess"), MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"), MovedAttribute("reduce", "__builtin__", "functools"), MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), MovedAttribute("StringIO", "StringIO", "io"), MovedAttribute("UserDict", "UserDict", "collections"), MovedAttribute("UserList", "UserList", "collections"), MovedAttribute("UserString", "UserString", "collections"), MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"), MovedModule("builtins", "__builtin__"), MovedModule("configparser", "ConfigParser"), MovedModule("collections_abc", "collections", "collections.abc" if sys.version_info >= (3, 3) else "collections"), MovedModule("copyreg", "copy_reg"), MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"), MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), MovedModule("http_cookies", "Cookie", "http.cookies"), MovedModule("html_entities", "htmlentitydefs", "html.entities"), MovedModule("html_parser", "HTMLParser", "html.parser"), MovedModule("http_client", "httplib", "http.client"), MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"), MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"), MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), MovedModule("cPickle", "cPickle", "pickle"), MovedModule("queue", "Queue"), MovedModule("reprlib", "repr"), MovedModule("socketserver", "SocketServer"), MovedModule("_thread", "thread", "_thread"), MovedModule("tkinter", "Tkinter"), MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), MovedModule("tkinter_simpledialog", "SimpleDialog", 
"tkinter.simpledialog"), MovedModule("tkinter_tix", "Tix", "tkinter.tix"), MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), MovedModule("tkinter_font", "tkFont", "tkinter.font"), MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), ] # Add windows specific modules. if sys.platform == "win32": _moved_attributes += [ MovedModule("winreg", "_winreg"), ] for attr in _moved_attributes: setattr(_MovedItems, attr.name, attr) if isinstance(attr, MovedModule): _importer._add_module(attr, "moves." + attr.name) del attr _MovedItems._moved_attributes = _moved_attributes moves = _MovedItems(__name__ + ".moves") _importer._add_module(moves, "moves") class Module_six_moves_urllib_parse(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_parse""" _urllib_parse_moved_attributes = [ MovedAttribute("ParseResult", "urlparse", "urllib.parse"), MovedAttribute("SplitResult", "urlparse", "urllib.parse"), MovedAttribute("parse_qs", "urlparse", "urllib.parse"), MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), MovedAttribute("urldefrag", "urlparse", "urllib.parse"), MovedAttribute("urljoin", "urlparse", "urllib.parse"), MovedAttribute("urlparse", "urlparse", "urllib.parse"), MovedAttribute("urlsplit", "urlparse", "urllib.parse"), MovedAttribute("urlunparse", "urlparse", "urllib.parse"), MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), MovedAttribute("quote", "urllib", "urllib.parse"), MovedAttribute("quote_plus", "urllib", "urllib.parse"), MovedAttribute("unquote", "urllib", "urllib.parse"), MovedAttribute("unquote_plus", "urllib", "urllib.parse"), MovedAttribute("unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes"), MovedAttribute("urlencode", "urllib", "urllib.parse"), MovedAttribute("splitquery", "urllib", "urllib.parse"), MovedAttribute("splittag", "urllib", "urllib.parse"), MovedAttribute("splituser", "urllib", "urllib.parse"), MovedAttribute("splitvalue", "urllib", "urllib.parse"), MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), MovedAttribute("uses_params", "urlparse", "urllib.parse"), MovedAttribute("uses_query", "urlparse", "urllib.parse"), MovedAttribute("uses_relative", "urlparse", "urllib.parse"), ] for attr in _urllib_parse_moved_attributes: setattr(Module_six_moves_urllib_parse, attr.name, attr) del attr Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes _importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), "moves.urllib_parse", "moves.urllib.parse") class Module_six_moves_urllib_error(_LazyModule): """Lazy loading of moved objects in 
six.moves.urllib_error""" _urllib_error_moved_attributes = [ MovedAttribute("URLError", "urllib2", "urllib.error"), MovedAttribute("HTTPError", "urllib2", "urllib.error"), MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), ] for attr in _urllib_error_moved_attributes: setattr(Module_six_moves_urllib_error, attr.name, attr) del attr Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes _importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), "moves.urllib_error", "moves.urllib.error") class Module_six_moves_urllib_request(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_request""" _urllib_request_moved_attributes = [ MovedAttribute("urlopen", "urllib2", "urllib.request"), MovedAttribute("install_opener", "urllib2", "urllib.request"), MovedAttribute("build_opener", "urllib2", "urllib.request"), MovedAttribute("pathname2url", "urllib", "urllib.request"), MovedAttribute("url2pathname", "urllib", "urllib.request"), MovedAttribute("getproxies", "urllib", "urllib.request"), MovedAttribute("Request", "urllib2", "urllib.request"), MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), MovedAttribute("BaseHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), MovedAttribute("FileHandler", "urllib2", "urllib.request"), MovedAttribute("FTPHandler", "urllib2", "urllib.request"), MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), MovedAttribute("urlretrieve", "urllib", "urllib.request"), MovedAttribute("urlcleanup", "urllib", "urllib.request"), MovedAttribute("URLopener", "urllib", "urllib.request"), MovedAttribute("FancyURLopener", "urllib", "urllib.request"), MovedAttribute("proxy_bypass", "urllib", "urllib.request"), MovedAttribute("parse_http_list", "urllib2", "urllib.request"), MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"), ] for attr in _urllib_request_moved_attributes: setattr(Module_six_moves_urllib_request, attr.name, attr) del attr Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes _importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), "moves.urllib_request", "moves.urllib.request") class Module_six_moves_urllib_response(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_response""" _urllib_response_moved_attributes = [ MovedAttribute("addbase", "urllib", "urllib.response"), MovedAttribute("addclosehook", "urllib", 
"urllib.response"), MovedAttribute("addinfo", "urllib", "urllib.response"), MovedAttribute("addinfourl", "urllib", "urllib.response"), ] for attr in _urllib_response_moved_attributes: setattr(Module_six_moves_urllib_response, attr.name, attr) del attr Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes _importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), "moves.urllib_response", "moves.urllib.response") class Module_six_moves_urllib_robotparser(_LazyModule): """Lazy loading of moved objects in six.moves.urllib_robotparser""" _urllib_robotparser_moved_attributes = [ MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), ] for attr in _urllib_robotparser_moved_attributes: setattr(Module_six_moves_urllib_robotparser, attr.name, attr) del attr Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes _importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), "moves.urllib_robotparser", "moves.urllib.robotparser") class Module_six_moves_urllib(types.ModuleType): """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" __path__ = [] # mark as package parse = _importer._get_module("moves.urllib_parse") error = _importer._get_module("moves.urllib_error") request = _importer._get_module("moves.urllib_request") response = _importer._get_module("moves.urllib_response") robotparser = _importer._get_module("moves.urllib_robotparser") def __dir__(self): return ['parse', 'error', 'request', 'response', 'robotparser'] _importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib") def add_move(move): """Add an item to six.moves.""" setattr(_MovedItems, move.name, move) def remove_move(name): """Remove item from six.moves.""" try: delattr(_MovedItems, name) except AttributeError: try: del moves.__dict__[name] except KeyError: raise AttributeError("no such move, %r" % (name,)) if PY3: _meth_func = "__func__" _meth_self = "__self__" _func_closure = "__closure__" _func_code = "__code__" _func_defaults = "__defaults__" _func_globals = "__globals__" else: _meth_func = "im_func" _meth_self = "im_self" _func_closure = "func_closure" _func_code = "func_code" _func_defaults = "func_defaults" _func_globals = "func_globals" try: advance_iterator = next except NameError: def advance_iterator(it): return it.next() next = advance_iterator try: callable = callable except NameError: def callable(obj): return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) if PY3: def get_unbound_function(unbound): return unbound create_bound_method = types.MethodType def create_unbound_method(func, cls): return func Iterator = object else: def get_unbound_function(unbound): return unbound.im_func def create_bound_method(func, obj): return types.MethodType(func, obj, obj.__class__) def create_unbound_method(func, cls): return types.MethodType(func, None, cls) class Iterator(object): def next(self): return type(self).__next__(self) callable = callable _add_doc(get_unbound_function, """Get the function out of a possibly unbound function""") get_method_function = operator.attrgetter(_meth_func) get_method_self = operator.attrgetter(_meth_self) get_function_closure = operator.attrgetter(_func_closure) get_function_code = operator.attrgetter(_func_code) get_function_defaults = operator.attrgetter(_func_defaults) get_function_globals = operator.attrgetter(_func_globals) if PY3: def iterkeys(d, **kw): return 
iter(d.keys(**kw)) def itervalues(d, **kw): return iter(d.values(**kw)) def iteritems(d, **kw): return iter(d.items(**kw)) def iterlists(d, **kw): return iter(d.lists(**kw)) viewkeys = operator.methodcaller("keys") viewvalues = operator.methodcaller("values") viewitems = operator.methodcaller("items") else: def iterkeys(d, **kw): return d.iterkeys(**kw) def itervalues(d, **kw): return d.itervalues(**kw) def iteritems(d, **kw): return d.iteritems(**kw) def iterlists(d, **kw): return d.iterlists(**kw) viewkeys = operator.methodcaller("viewkeys") viewvalues = operator.methodcaller("viewvalues") viewitems = operator.methodcaller("viewitems") _add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") _add_doc(itervalues, "Return an iterator over the values of a dictionary.") _add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") _add_doc(iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary.") if PY3: def b(s): return s.encode("latin-1") def u(s): return s unichr = chr import struct int2byte = struct.Struct(">B").pack del struct byte2int = operator.itemgetter(0) indexbytes = operator.getitem iterbytes = iter import io StringIO = io.StringIO BytesIO = io.BytesIO del io _assertCountEqual = "assertCountEqual" if sys.version_info[1] <= 1: _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" else: _assertRaisesRegex = "assertRaisesRegex" _assertRegex = "assertRegex" else: def b(s): return s # Workaround for standalone backslash def u(s): return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape") unichr = unichr int2byte = chr def byte2int(bs): return ord(bs[0]) def indexbytes(buf, i): return ord(buf[i]) iterbytes = functools.partial(itertools.imap, ord) import StringIO StringIO = BytesIO = StringIO.StringIO _assertCountEqual = "assertItemsEqual" _assertRaisesRegex = "assertRaisesRegexp" _assertRegex = "assertRegexpMatches" _add_doc(b, """Byte literal""") _add_doc(u, """Text literal""") def assertCountEqual(self, *args, **kwargs): return getattr(self, _assertCountEqual)(*args, **kwargs) def assertRaisesRegex(self, *args, **kwargs): return getattr(self, _assertRaisesRegex)(*args, **kwargs) def assertRegex(self, *args, **kwargs): return getattr(self, _assertRegex)(*args, **kwargs) if PY3: exec_ = getattr(moves.builtins, "exec") def reraise(tp, value, tb=None): try: if value is None: value = tp() if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value finally: value = None tb = None else: def exec_(_code_, _globs_=None, _locs_=None): """Execute code in a namespace.""" if _globs_ is None: frame = sys._getframe(1) _globs_ = frame.f_globals if _locs_ is None: _locs_ = frame.f_locals del frame elif _locs_ is None: _locs_ = _globs_ exec("""exec _code_ in _globs_, _locs_""") exec_("""def reraise(tp, value, tb=None): try: raise tp, value, tb finally: tb = None """) if sys.version_info[:2] == (3, 2): exec_("""def raise_from(value, from_value): try: if from_value is None: raise value raise value from from_value finally: value = None """) elif sys.version_info[:2] > (3, 2): exec_("""def raise_from(value, from_value): try: raise value from from_value finally: value = None """) else: def raise_from(value, from_value): raise value print_ = getattr(moves.builtins, "print", None) if print_ is None: def print_(*args, **kwargs): """The new-style print function for Python 2.4 and 2.5.""" fp = kwargs.pop("file", sys.stdout) if fp is None: return def write(data): if not isinstance(data, basestring): data = 
str(data) # If the file has an encoding, encode unicode with it. if (isinstance(fp, file) and isinstance(data, unicode) and fp.encoding is not None): errors = getattr(fp, "errors", None) if errors is None: errors = "strict" data = data.encode(fp.encoding, errors) fp.write(data) want_unicode = False sep = kwargs.pop("sep", None) if sep is not None: if isinstance(sep, unicode): want_unicode = True elif not isinstance(sep, str): raise TypeError("sep must be None or a string") end = kwargs.pop("end", None) if end is not None: if isinstance(end, unicode): want_unicode = True elif not isinstance(end, str): raise TypeError("end must be None or a string") if kwargs: raise TypeError("invalid keyword arguments to print()") if not want_unicode: for arg in args: if isinstance(arg, unicode): want_unicode = True break if want_unicode: newline = unicode("\n") space = unicode(" ") else: newline = "\n" space = " " if sep is None: sep = space if end is None: end = newline for i, arg in enumerate(args): if i: write(sep) write(arg) write(end) if sys.version_info[:2] < (3, 3): _print = print_ def print_(*args, **kwargs): fp = kwargs.get("file", sys.stdout) flush = kwargs.pop("flush", False) _print(*args, **kwargs) if flush and fp is not None: fp.flush() _add_doc(reraise, """Reraise an exception.""") if sys.version_info[0:2] < (3, 4): def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS, updated=functools.WRAPPER_UPDATES): def wrapper(f): f = functools.wraps(wrapped, assigned, updated)(f) f.__wrapped__ = wrapped return f return wrapper else: wraps = functools.wraps def with_metaclass(meta, *bases): """Create a base class with a metaclass.""" # This requires a bit of explanation: the basic idea is to make a dummy # metaclass for one level of class instantiation that replaces itself with # the actual metaclass. class metaclass(type): def __new__(cls, name, this_bases, d): if sys.version_info[:2] >= (3, 7): # This version introduced PEP 560 that requires a bit # of extra care (we mimic what is done by __build_class__). resolved_bases = types.resolve_bases(bases) if resolved_bases is not bases: d['__orig_bases__'] = bases else: resolved_bases = bases return meta(name, resolved_bases, d) @classmethod def __prepare__(cls, name, this_bases): return meta.__prepare__(name, bases) return type.__new__(metaclass, 'temporary_class', (), {}) def add_metaclass(metaclass): """Class decorator for creating a class with a metaclass.""" def wrapper(cls): orig_vars = cls.__dict__.copy() slots = orig_vars.get('__slots__') if slots is not None: if isinstance(slots, str): slots = [slots] for slots_var in slots: orig_vars.pop(slots_var) orig_vars.pop('__dict__', None) orig_vars.pop('__weakref__', None) if hasattr(cls, '__qualname__'): orig_vars['__qualname__'] = cls.__qualname__ return metaclass(cls.__name__, cls.__bases__, orig_vars) return wrapper def ensure_binary(s, encoding='utf-8', errors='strict'): """Coerce **s** to six.binary_type. For Python 2: - `unicode` -> encoded to `str` - `str` -> `str` For Python 3: - `str` -> encoded to `bytes` - `bytes` -> `bytes` """ if isinstance(s, text_type): return s.encode(encoding, errors) elif isinstance(s, binary_type): return s else: raise TypeError("not expecting type '%s'" % type(s)) def ensure_str(s, encoding='utf-8', errors='strict'): """Coerce *s* to `str`. 
For Python 2: - `unicode` -> encoded to `str` - `str` -> `str` For Python 3: - `str` -> `str` - `bytes` -> decoded to `str` """ if not isinstance(s, (text_type, binary_type)): raise TypeError("not expecting type '%s'" % type(s)) if PY2 and isinstance(s, text_type): s = s.encode(encoding, errors) elif PY3 and isinstance(s, binary_type): s = s.decode(encoding, errors) return s def ensure_text(s, encoding='utf-8', errors='strict'): """Coerce *s* to six.text_type. For Python 2: - `unicode` -> `unicode` - `str` -> `unicode` For Python 3: - `str` -> `str` - `bytes` -> decoded to `str` """ if isinstance(s, binary_type): return s.decode(encoding, errors) elif isinstance(s, text_type): return s else: raise TypeError("not expecting type '%s'" % type(s)) def python_2_unicode_compatible(klass): """ A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class. """ if PY2: if '__str__' not in klass.__dict__: raise ValueError("@python_2_unicode_compatible cannot be applied " "to %s because it doesn't define __str__()." % klass.__name__) klass.__unicode__ = klass.__str__ klass.__str__ = lambda self: self.__unicode__().encode('utf-8') return klass # Complete the moves implementation. # This code is at the end of this module to speed up module loading. # Turn this module into a package. __path__ = [] # required for PEP 302 and PEP 451 __package__ = __name__ # see PEP 366 @ReservedAssignment if globals().get("__spec__") is not None: __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable # Remove other six meta path importers, since they cause problems. This can # happen if six is removed from sys.modules and then reloaded. (Setuptools does # this for some reason.) if sys.meta_path: for i, importer in enumerate(sys.meta_path): # Here's some real nastiness: Another "instance" of the six module might # be floating around. Therefore, we can't use isinstance() to check for # the six meta path importer, since the other six instance will have # inserted an importer with different class. if (type(importer).__name__ == "_SixMetaPathImporter" and importer.name == __name__): del sys.meta_path[i] break del i, importer # Finally, add the importer to the meta path import hook. sys.meta_path.append(_importer)
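The lazy-module machinery above still implements only the legacy PEP 302 hooks (`find_module`/`load_module`), which is what Python 3.10 warns about. A minimal, hedged sketch of the dual-protocol pattern an updated vendored six could use — the module name and `known_modules` table here are illustrative, not six's real data:

```python
# Sketch only: a finder/loader exposing both the PEP 451 hooks that
# Python 3.4+ prefers and the PEP 302 hooks kept for old interpreters.
import importlib.util
import sys
import types


class DualProtocolImporter:
    def __init__(self):
        # Illustrative stand-in for six's known_modules mapping.
        self.known_modules = {"demo_alias": types.ModuleType("demo_alias")}

    # --- PEP 451 hooks: defining these avoids the 3.10 fallback warnings ---
    def find_spec(self, fullname, path=None, target=None):
        if fullname in self.known_modules:
            return importlib.util.spec_from_loader(fullname, self)
        return None

    def create_module(self, spec):
        # Hand back the pre-built module instead of a fresh empty one,
        # mirroring how six serves its lazy modules.
        return self.known_modules[spec.name]

    def exec_module(self, module):
        pass  # pre-built modules need no execution step

    # --- legacy PEP 302 hooks, retained for older Pythons ---
    def find_module(self, fullname, path=None):
        return self if fullname in self.known_modules else None

    def load_module(self, fullname):
        module = self.known_modules[fullname]
        sys.modules[fullname] = module
        return module


sys.meta_path.append(DualProtocolImporter())
import demo_alias  # noqa: E402,F401  (served by the finder above)
```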
closed
ansible/ansible
https://github.com/ansible/ansible
74,659
Update vendored `six` for Python 3.10
##### SUMMARY
On Python 3.10, the vendored `six` still implements only the legacy `find_module()`/`load_module()` import hooks, so imports routed through `_SixMetaPathImporter` trigger fallback warnings. There are several options for resolving this:
* Vendoring a second copy of `six` that implements `find_spec` and `exec_module`, and using it for newer Pythons.
* Vendoring the latest version of `six` and updating it as needed to work back through Python 2.6.

See also: https://github.com/ansible/ansible/issues/72952

##### ISSUE TYPE
Feature Idea

##### COMPONENT NAME
lib/ansible/module_utils/six/__init__.py
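A hedged reproduction of the symptom driving this request (names are illustrative): on Python 3.10, importing through a finder that defines only the legacy hooks surfaces the fallback warnings that the sanity-test importer below has to filter. On older Pythons this prints nothing.

```python
# Sketch: capture the ImportWarning/DeprecationWarning fallbacks emitted
# on Python 3.10 for a PEP 302-only finder, like the old vendored six.
import sys
import types
import warnings


class LegacyOnlyFinder:
    """A finder with no find_spec(), mimicking _SixMetaPathImporter."""

    def find_module(self, fullname, path=None):
        return self if fullname == "legacy_demo" else None

    def load_module(self, fullname):
        module = sys.modules.setdefault(fullname, types.ModuleType(fullname))
        module.__loader__ = self
        return module


sys.meta_path.insert(0, LegacyOnlyFinder())
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # surface warnings ignored by default
    import legacy_demo  # noqa: F401

for warning in caught:
    print(warning.category.__name__, warning.message)
```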
https://github.com/ansible/ansible/issues/74659
https://github.com/ansible/ansible/pull/74680
0affe4d027ef4ca5517c06da44dcd1b5b8e2544c
d6e28e68599e703c153914610152cf4492851eb3
2021-05-11T16:14:18Z
python
2021-05-12T21:26:48Z
test/lib/ansible_test/_data/sanity/import/importer.py
#!/usr/bin/env python """Import the given python module(s) and report error(s) encountered.""" from __future__ import (absolute_import, division, print_function) __metaclass__ = type def main(): """ Main program function used to isolate globals from imported code. Changes to globals in imported modules on Python 2.x will overwrite our own globals. """ import ansible import contextlib import datetime import json import os import re import runpy import subprocess import sys import traceback import types import warnings ansible_path = os.path.dirname(os.path.dirname(ansible.__file__)) temp_path = os.environ['SANITY_TEMP_PATH'] + os.path.sep external_python = os.environ.get('SANITY_EXTERNAL_PYTHON') or sys.executable collection_full_name = os.environ.get('SANITY_COLLECTION_FULL_NAME') collection_root = os.environ.get('ANSIBLE_COLLECTIONS_PATH') import_type = os.environ.get('SANITY_IMPORTER_TYPE') try: # noinspection PyCompatibility from importlib import import_module except ImportError: def import_module(name): __import__(name) return sys.modules[name] try: # noinspection PyCompatibility from StringIO import StringIO except ImportError: from io import StringIO if collection_full_name: # allow importing code from collections when testing a collection from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native, text_type from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder from ansible.utils.collection_loader import _collection_finder yaml_to_json_path = os.path.join(os.path.dirname(__file__), 'yaml_to_json.py') yaml_to_dict_cache = {} # unique ISO date marker matching the one present in yaml_to_json.py iso_date_marker = 'isodate:f23983df-f3df-453c-9904-bcd08af468cc:' iso_date_re = re.compile('^%s([0-9]{4})-([0-9]{2})-([0-9]{2})$' % iso_date_marker) def parse_value(value): """Custom value parser for JSON deserialization that recognizes our internal ISO date format.""" if isinstance(value, text_type): match = iso_date_re.search(value) if match: value = datetime.date(int(match.group(1)), int(match.group(2)), int(match.group(3))) return value def object_hook(data): """Object hook for custom ISO date deserialization from JSON.""" return dict((key, parse_value(value)) for key, value in data.items()) def yaml_to_dict(yaml, content_id): """ Return a Python dict version of the provided YAML. Conversion is done in a subprocess since the current Python interpreter does not have access to PyYAML. 
""" if content_id in yaml_to_dict_cache: return yaml_to_dict_cache[content_id] try: cmd = [external_python, yaml_to_json_path] proc = subprocess.Popen([to_bytes(c) for c in cmd], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout_bytes, stderr_bytes = proc.communicate(to_bytes(yaml)) if proc.returncode != 0: raise Exception('command %s failed with return code %d: %s' % ([to_native(c) for c in cmd], proc.returncode, to_native(stderr_bytes))) data = yaml_to_dict_cache[content_id] = json.loads(to_text(stdout_bytes), object_hook=object_hook) return data except Exception as ex: raise Exception('internal importer error - failed to parse yaml: %s' % to_native(ex)) _collection_finder._meta_yml_to_dict = yaml_to_dict # pylint: disable=protected-access collection_loader = _AnsibleCollectionFinder(paths=[collection_root]) # noinspection PyProtectedMember collection_loader._install() # pylint: disable=protected-access else: # do not support collection loading when not testing a collection collection_loader = None # remove all modules under the ansible package list(map(sys.modules.pop, [m for m in sys.modules if m.partition('.')[0] == ansible.__name__])) if import_type == 'module': # pre-load an empty ansible package to prevent unwanted code in __init__.py from loading # this more accurately reflects the environment that AnsiballZ runs modules under # it also avoids issues with imports in the ansible package that are not allowed ansible_module = types.ModuleType(ansible.__name__) ansible_module.__file__ = ansible.__file__ ansible_module.__path__ = ansible.__path__ ansible_module.__package__ = ansible.__package__ sys.modules[ansible.__name__] = ansible_module class ImporterAnsibleModuleException(Exception): """Exception thrown during initialization of ImporterAnsibleModule.""" class ImporterAnsibleModule: """Replacement for AnsibleModule to support import testing.""" def __init__(self, *args, **kwargs): raise ImporterAnsibleModuleException() class RestrictedModuleLoader: """Python module loader that restricts inappropriate imports.""" def __init__(self, path, name, restrict_to_module_paths): self.path = path self.name = name self.loaded_modules = set() self.restrict_to_module_paths = restrict_to_module_paths def find_module(self, fullname, path=None): """Return self if the given fullname is restricted, otherwise return None. 
:param fullname: str :param path: str :return: RestrictedModuleLoader | None """ if fullname in self.loaded_modules: return None # ignore modules that are already being loaded if is_name_in_namepace(fullname, ['ansible']): if not self.restrict_to_module_paths: return None # for non-modules, everything in the ansible namespace is allowed if fullname in ('ansible.module_utils.basic',): return self # intercept loading so we can modify the result if is_name_in_namepace(fullname, ['ansible.module_utils', self.name]): return None # module_utils and module under test are always allowed if any(os.path.exists(candidate_path) for candidate_path in convert_ansible_name_to_absolute_paths(fullname)): return self # restrict access to ansible files that exist return None # ansible file does not exist, do not restrict access if is_name_in_namepace(fullname, ['ansible_collections']): if not collection_loader: return self # restrict access to collections when we are not testing a collection if not self.restrict_to_module_paths: return None # for non-modules, everything in the ansible namespace is allowed if is_name_in_namepace(fullname, ['ansible_collections...plugins.module_utils', self.name]): return None # module_utils and module under test are always allowed if collection_loader.find_module(fullname, path): return self # restrict access to collection files that exist return None # collection file does not exist, do not restrict access # not a namespace we care about return None def load_module(self, fullname): """Raise an ImportError. :type fullname: str """ if fullname == 'ansible.module_utils.basic': module = self.__load_module(fullname) # stop Ansible module execution during AnsibleModule instantiation module.AnsibleModule = ImporterAnsibleModule # no-op for _load_params since it may be called before instantiating AnsibleModule module._load_params = lambda *args, **kwargs: {} # pylint: disable=protected-access return module raise ImportError('import of "%s" is not allowed in this context' % fullname) def __load_module(self, fullname): """Load the requested module while avoiding infinite recursion. :type fullname: str :rtype: module """ self.loaded_modules.add(fullname) return import_module(fullname) def run(restrict_to_module_paths): """Main program function.""" base_dir = os.getcwd() messages = set() for path in sys.argv[1:] or sys.stdin.read().splitlines(): name = convert_relative_path_to_name(path) test_python_module(path, name, base_dir, messages, restrict_to_module_paths) if messages: sys.exit(10) def test_python_module(path, name, base_dir, messages, restrict_to_module_paths): """Test the given python module by importing it. 
:type path: str :type name: str :type base_dir: str :type messages: set[str] :type restrict_to_module_paths: bool """ if name in sys.modules: return # cannot be tested because it has already been loaded is_ansible_module = (path.startswith('lib/ansible/modules/') or path.startswith('plugins/modules/')) and os.path.basename(path) != '__init__.py' run_main = is_ansible_module if path == 'lib/ansible/modules/async_wrapper.py': # async_wrapper is a non-standard Ansible module (does not use AnsibleModule) so we cannot test the main function run_main = False capture_normal = Capture() capture_main = Capture() run_module_ok = False try: with monitor_sys_modules(path, messages): with restrict_imports(path, name, messages, restrict_to_module_paths): with capture_output(capture_normal): import_module(name) if run_main: run_module_ok = is_ansible_module with monitor_sys_modules(path, messages): with restrict_imports(path, name, messages, restrict_to_module_paths): with capture_output(capture_main): runpy.run_module(name, run_name='__main__', alter_sys=True) except ImporterAnsibleModuleException: # module instantiated AnsibleModule without raising an exception if not run_module_ok: if is_ansible_module: report_message(path, 0, 0, 'module-guard', "AnsibleModule instantiation not guarded by `if __name__ == '__main__'`", messages) else: report_message(path, 0, 0, 'non-module', "AnsibleModule instantiated by import of non-module", messages) except BaseException as ex: # pylint: disable=locally-disabled, broad-except # intentionally catch all exceptions, including calls to sys.exit exc_type, _exc, exc_tb = sys.exc_info() message = str(ex) results = list(reversed(traceback.extract_tb(exc_tb))) line = 0 offset = 0 full_path = os.path.join(base_dir, path) base_path = base_dir + os.path.sep source = None # avoid line wraps in messages message = re.sub(r'\n *', ': ', message) for result in results: if result[0] == full_path: # save the line number for the file under test line = result[1] or 0 if not source and result[0].startswith(base_path) and not result[0].startswith(temp_path): # save the first path and line number in the traceback which is in our source tree source = (os.path.relpath(result[0], base_path), result[1] or 0, 0) if isinstance(ex, SyntaxError): # SyntaxError has better information than the traceback if ex.filename == full_path: # pylint: disable=locally-disabled, no-member # syntax error was reported in the file under test line = ex.lineno or 0 # pylint: disable=locally-disabled, no-member offset = ex.offset or 0 # pylint: disable=locally-disabled, no-member elif ex.filename.startswith(base_path) and not ex.filename.startswith(temp_path): # pylint: disable=locally-disabled, no-member # syntax error was reported in our source tree source = (os.path.relpath(ex.filename, base_path), ex.lineno or 0, ex.offset or 0) # pylint: disable=locally-disabled, no-member # remove the filename and line number from the message # either it was extracted above, or it's not really useful information message = re.sub(r' \(.*?, line [0-9]+\)$', '', message) if source and source[0] != path: message += ' (at %s:%d:%d)' % (source[0], source[1], source[2]) report_message(path, line, offset, 'traceback', '%s: %s' % (exc_type.__name__, message), messages) finally: capture_report(path, capture_normal, messages) capture_report(path, capture_main, messages) def is_name_in_namepace(name, namespaces): """Returns True if the given name is one of the given namespaces, otherwise returns False.""" name_parts = name.split('.') for 
namespace in namespaces: namespace_parts = namespace.split('.') length = min(len(name_parts), len(namespace_parts)) truncated_name = name_parts[0:length] truncated_namespace = namespace_parts[0:length] # empty parts in the namespace are treated as wildcards # to simplify the comparison, use those empty parts to indicate the positions in the name to be empty as well for idx, part in enumerate(truncated_namespace): if not part: truncated_name[idx] = part # example: name=ansible, allowed_name=ansible.module_utils # example: name=ansible.module_utils.system.ping, allowed_name=ansible.module_utils if truncated_name == truncated_namespace: return True return False def check_sys_modules(path, before, messages): """Check for unwanted changes to sys.modules. :type path: str :type before: dict[str, module] :type messages: set[str] """ after = sys.modules removed = set(before.keys()) - set(after.keys()) changed = set(key for key, value in before.items() if key in after and value != after[key]) # additions are checked by our custom PEP 302 loader, so we don't need to check them again here for module in sorted(removed): report_message(path, 0, 0, 'unload', 'unloading of "%s" in sys.modules is not supported' % module, messages) for module in sorted(changed): report_message(path, 0, 0, 'reload', 'reloading of "%s" in sys.modules is not supported' % module, messages) def convert_ansible_name_to_absolute_paths(name): """Calculate the module path from the given name. :type name: str :rtype: list[str] """ return [ os.path.join(ansible_path, name.replace('.', os.path.sep)), os.path.join(ansible_path, name.replace('.', os.path.sep)) + '.py', ] def convert_relative_path_to_name(path): """Calculate the module name from the given path. :type path: str :rtype: str """ if path.endswith('/__init__.py'): clean_path = os.path.dirname(path) else: clean_path = path clean_path = os.path.splitext(clean_path)[0] name = clean_path.replace(os.path.sep, '.') if collection_loader: # when testing collections the relative paths (and names) being tested are within the collection under test name = 'ansible_collections.%s.%s' % (collection_full_name, name) else: # when testing ansible all files being imported reside under the lib directory name = name[len('lib/'):] return name class Capture: """Captured output and/or exception.""" def __init__(self): self.stdout = StringIO() self.stderr = StringIO() def capture_report(path, capture, messages): """Report on captured output. :type path: str :type capture: Capture :type messages: set[str] """ if capture.stdout.getvalue(): first = capture.stdout.getvalue().strip().splitlines()[0].strip() report_message(path, 0, 0, 'stdout', first, messages) if capture.stderr.getvalue(): first = capture.stderr.getvalue().strip().splitlines()[0].strip() report_message(path, 0, 0, 'stderr', first, messages) def report_message(path, line, column, code, message, messages): """Report message if not already reported. :type path: str :type line: int :type column: int :type code: str :type message: str :type messages: set[str] """ message = '%s:%d:%d: %s: %s' % (path, line, column, code, message) if message not in messages: messages.add(message) print(message) @contextlib.contextmanager def restrict_imports(path, name, messages, restrict_to_module_paths): """Restrict available imports. 
:type path: str :type name: str :type messages: set[str] :type restrict_to_module_paths: bool """ restricted_loader = RestrictedModuleLoader(path, name, restrict_to_module_paths) # noinspection PyTypeChecker sys.meta_path.insert(0, restricted_loader) sys.path_importer_cache.clear() try: yield finally: if import_type == 'plugin': from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder _AnsibleCollectionFinder._remove() # pylint: disable=protected-access if sys.meta_path[0] != restricted_loader: report_message(path, 0, 0, 'metapath', 'changes to sys.meta_path[0] are not permitted', messages) while restricted_loader in sys.meta_path: # noinspection PyTypeChecker sys.meta_path.remove(restricted_loader) sys.path_importer_cache.clear() @contextlib.contextmanager def monitor_sys_modules(path, messages): """Monitor sys.modules for unwanted changes, reverting any additions made to our own namespaces.""" snapshot = sys.modules.copy() try: yield finally: check_sys_modules(path, snapshot, messages) for key in set(sys.modules.keys()) - set(snapshot.keys()): if is_name_in_namepace(key, ('ansible', 'ansible_collections')): del sys.modules[key] # only unload our own code since we know it's native Python @contextlib.contextmanager def capture_output(capture): """Capture sys.stdout and sys.stderr. :type capture: Capture """ old_stdout = sys.stdout old_stderr = sys.stderr sys.stdout = capture.stdout sys.stderr = capture.stderr # clear all warnings registries to make all warnings available for module in sys.modules.values(): try: # noinspection PyUnresolvedReferences module.__warningregistry__.clear() except AttributeError: pass with warnings.catch_warnings(): warnings.simplefilter('error') if sys.version_info[0] == 2: warnings.filterwarnings( "ignore", "Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography," " and will be removed in a future release.") warnings.filterwarnings( "ignore", "Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography," " and will be removed in the next release.") if sys.version_info[:2] == (3, 5): warnings.filterwarnings( "ignore", "Python 3.5 support will be dropped in the next release ofcryptography. Please upgrade your Python.") warnings.filterwarnings( "ignore", "Python 3.5 support will be dropped in the next release of cryptography. Please upgrade your Python.") if sys.version_info >= (3, 10): # Temporary solution for Python 3.10 until find_spec is implemented in RestrictedModuleLoader. # That implementation is dependent on find_spec being added to the controller's collection loader first. # The warning text is: main.<locals>.RestrictedModuleLoader.find_spec() not found; falling back to find_module() warnings.filterwarnings( "ignore", r"main\.<locals>\.RestrictedModuleLoader\.find_spec\(\) not found; falling back to find_module\(\)", ) # Temporary solution for Python 3.10 until exec_module is implemented in RestrictedModuleLoader. # That implementation is dependent on exec_module being added to the controller's collection loader first. # The warning text is: main.<locals>.RestrictedModuleLoader.exec_module() not found; falling back to load_module() warnings.filterwarnings( "ignore", r"main\.<locals>\.RestrictedModuleLoader\.exec_module\(\) not found; falling back to load_module\(\)", ) # Temporary solution for Python 3.10 until find_spec is implemented in the controller's collection loader. 
warnings.filterwarnings( "ignore", r"_Ansible.*Finder\.find_spec\(\) not found; falling back to find_module\(\)", ) # Temporary solution for Python 3.10 until exec_module is implemented in the controller's collection loader. warnings.filterwarnings( "ignore", r"_Ansible.*Loader\.exec_module\(\) not found; falling back to load_module\(\)", ) # Temporary solution until we have a vendored version of six that avoids the warnings on Python 3.10. # The warning text is: _SixMetaPathImporter.find_spec() not found; falling back to find_module() warnings.filterwarnings( "ignore", r"_SixMetaPathImporter\.find_spec\(\) not found; falling back to find_module\(\)", ) # Temporary solution until we have a vendored version of six that avoids the warnings on Python 3.10. # The warning text is: _SixMetaPathImporter.exec_module() not found; falling back to load_module() warnings.filterwarnings( "ignore", r"_SixMetaPathImporter\.exec_module\(\) not found; falling back to load_module\(\)", ) # Temporary solution until there is a vendored copy of distutils.version in module_utils. # Some of our dependencies such as packaging.tags also import distutils, which we have no control over # The warning text is: The distutils package is deprecated and slated for removal in Python 3.12. # Use setuptools or check PEP 632 for potential alternatives warnings.filterwarnings( "ignore", r"The distutils package is deprecated and slated for removal in Python 3\.12\. .*", ) try: yield finally: sys.stdout = old_stdout sys.stderr = old_stderr run(import_type == 'module') if __name__ == '__main__': main()
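The `RestrictedModuleLoader` above boils down to a classic PEP 302 pattern: claim a restricted name in `find_module()`, then refuse it in `load_module()` so the import fails fast with a clear message. A self-contained miniature of that pattern (the banned module name is hypothetical):

```python
# Sketch of the deny-list pattern used by RestrictedModuleLoader.
import importlib
import sys


class DenyListLoader:
    def __init__(self, banned):
        self.banned = set(banned)

    def find_module(self, fullname, path=None):
        # Claim the import so load_module() can reject it with context.
        return self if fullname in self.banned else None

    def load_module(self, fullname):
        raise ImportError('import of "%s" is not allowed in this context' % fullname)


sys.meta_path.insert(0, DenyListLoader(["forbidden_demo"]))
try:
    importlib.import_module("forbidden_demo")
except ImportError as exc:
    print(exc)  # -> import of "forbidden_demo" is not allowed in this context
```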
closed
ansible/ansible
https://github.com/ansible/ansible
73,792
Update documentation on format macros
### Summary
https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_documenting.html#linking-and-other-format-macros-within-module-documentation gives incorrect advice. The tables of options in module documentation are formatted as static HTML tables embedded in the reStructuredText, rather than as rST that Sphinx will parse. So the `R()` and `M()` formats will not work in that context. We need to use `L()` or `U()` instead. See https://github.com/ansible/ansible/pull/73789#issuecomment-791005059 for an example.

https://github.com/ansible/ansible/pull/73789/commits/991d82006e368571f4be7b419fef114a20d76f59 should have worked, based on our current documentation, but the output comes out as `<span class='module'>user password</span>` instead of as `<a href>`.

### Issue Type
Documentation Report

### Component Name
docs/docsite/rst/dev_guide/developing_modules_documenting.rst

### Ansible Version
devel

### Configuration
N/A

### OS / Environment
N/A

### Additional Information
Really? I have to add something here? Couldn't we make this section optional?
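A hedged sketch of the workaround the report points to: inside option `description` entries (which end up in the static HTML tables), cross-reference with `U()` or `L()` rather than `M()` or `R()`. The option name and URL below are illustrative only.

```python
# Sketch: a DOCUMENTATION fragment using U() where M() would not render
# as a link inside the option table.
DOCUMENTATION = r'''
options:
  password:
    description:
      - Password to set for the account.
      - An M() reference would render as plain styled text here; use
        U(https://docs.ansible.com/ansible/latest/collections/ansible/builtin/user_module.html)
        or an equivalent L() link instead.
'''
```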
https://github.com/ansible/ansible/issues/73792
https://github.com/ansible/ansible/pull/74708
8d3dce49bf27ca2baa6bb06f9a345da9228b63f8
bf9944266c45ed3e9001520e1894bf9a212f49de
2021-03-04T23:06:20Z
python
2021-05-14T19:48:21Z
docs/docsite/rst/dev_guide/developing_modules_documenting.rst
.. _developing_modules_documenting: .. _module_documenting: ******************************* Module format and documentation ******************************* If you want to contribute your module to most Ansible collections, you must write your module in Python and follow the standard format described below. (Unless you're writing a Windows module, in which case the :ref:`Windows guidelines <developing_modules_general_windows>` apply.) In addition to following this format, you should review our :ref:`submission checklist <developing_modules_checklist>`, :ref:`programming tips <developing_modules_best_practices>`, and :ref:`strategy for maintaining Python 2 and Python 3 compatibility <developing_python_3>`, as well as information about :ref:`testing <developing_testing>` before you open a pull request. Every Ansible module written in Python must begin with seven standard sections in a particular order, followed by the code. The sections in order are: .. contents:: :depth: 1 :local: .. note:: Why don't the imports go first? Keen Python programmers may notice that contrary to PEP 8's advice we don't put ``imports`` at the top of the file. This is because the ``DOCUMENTATION`` through ``RETURN`` sections are not used by the module code itself; they are essentially extra docstrings for the file. The imports are placed after these special variables for the same reason as PEP 8 puts the imports after the introductory comments and docstrings. This keeps the active parts of the code together and the pieces which are purely informational apart. The decision to exclude E402 is based on readability (which is what PEP 8 is about). Documentation strings in a module are much more similar to module level docstrings, than code, and are never utilized by the module itself. Placing the imports below this documentation and closer to the code, consolidates and groups all related code in a congruent manner to improve readability, debugging and understanding. .. warning:: **Copy old modules with care!** Some older Ansible modules have ``imports`` at the bottom of the file, ``Copyright`` notices with the full GPL prefix, and/or ``DOCUMENTATION`` fields in the wrong order. These are legacy files that need updating - do not copy them into new modules. Over time we are updating and correcting older modules. Please follow the guidelines on this page! .. _shebang: Python shebang & UTF-8 coding =============================== Begin your Ansible module with ``#!/usr/bin/python`` - this "shebang" allows ``ansible_python_interpreter`` to work. Follow the shebang immediately with ``# -*- coding: utf-8 -*-`` to clarify that the file is UTF-8 encoded. .. _copyright: Copyright and license ===================== After the shebang and UTF-8 coding, add a `copyright line <https://www.gnu.org/licenses/gpl-howto.en.html>`_ with the original copyright holder and a license declaration. The license declaration should be ONLY one line, not the full GPL prefix.: .. code-block:: python #!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2018, Terry Jones <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) Major additions to the module (for instance, rewrites) may add additional copyright lines. Any legal review will include the source control history, so an exhaustive copyright header is not necessary. Please do not edit the existing copyright year. This simplifies project administration and is unlikely to cause any interesting legal issues. 
When adding a second copyright line for a significant feature or rewrite, add the newer line above the older one: .. code-block:: python #!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2017, [New Contributor(s)] # Copyright: (c) 2015, [Original Contributor(s)] # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) .. _ansible_metadata_block: ANSIBLE_METADATA block ====================== Since we moved to collections we have deprecated the METADATA functionality, it is no longer required for modules, but it will not break anything if present. .. _documentation_block: DOCUMENTATION block =================== After the shebang, the UTF-8 coding, the copyright line, and the license section comes the ``DOCUMENTATION`` block. Ansible's online module documentation is generated from the ``DOCUMENTATION`` blocks in each module's source code. The ``DOCUMENTATION`` block must be valid YAML. You may find it easier to start writing your ``DOCUMENTATION`` string in an :ref:`editor with YAML syntax highlighting <other_tools_and_programs>` before you include it in your Python file. You can start by copying our `example documentation string <https://github.com/ansible/ansible/blob/devel/examples/DOCUMENTATION.yml>`_ into your module file and modifying it. If you run into syntax issues in your YAML, you can validate it on the `YAML Lint <http://www.yamllint.com/>`_ website. Module documentation should briefly and accurately define what each module and option does, and how it works with others in the underlying system. Documentation should be written for broad audience--readable both by experts and non-experts. * Descriptions should always start with a capital letter and end with a full stop. Consistency always helps. * Verify that arguments in doc and module spec dict are identical. * For password / secret arguments ``no_log=True`` should be set. * For arguments that seem to contain sensitive information but **do not** contain secrets, such as "password_length", set ``no_log=False`` to disable the warning message. * If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)." * If your module allows ``check_mode``, reflect this fact in the documentation. To create clear, concise, consistent, and useful documentation, follow the :ref:`style guide <style_guide>`. Each documentation field is described below. Before committing your module documentation, please test it at the command line and as HTML: * As long as your module file is :ref:`available locally <local_modules>`, you can use ``ansible-doc -t module my_module_name`` to view your module documentation at the command line. Any parsing errors will be obvious - you can view details by adding ``-vvv`` to the command. * You should also :ref:`test the HTML output <testing_module_documentation>` of your module documentation. Documentation fields -------------------- All fields in the ``DOCUMENTATION`` block are lower-case. All fields are required unless specified otherwise: :module: * The name of the module. * Must be the same as the filename, without the ``.py`` extension. :short_description: * A short description which is displayed on the :ref:`list_of_collections` page and ``ansible-doc -l``. * The ``short_description`` is displayed by ``ansible-doc -l`` without any category grouping, so it needs enough detail to explain the module's purpose without the context of the directory structure in which it lives. 
* Unlike ``description:``, ``short_description`` should not have a trailing period/full stop. :description: * A detailed description (generally two or more sentences). * Must be written in full sentences, in other words, with capital letters and periods/full stops. * Shouldn't mention the module name. * Make use of multiple entries rather than using one long paragraph. * Don't quote complete values unless it is required by YAML. :version_added: * The version of Ansible when the module was added. * This is a string, and not a float, for example, ``version_added: '2.1'``. * In collections, this must be the collection version the module was added to, not the Ansible version. For example, ``version_added: 1.0.0``. :author: * Name of the module author in the form ``First Last (@GitHubID)``. * Use a multi-line list if there is more than one author. * Don't use quotes as it should not be required by YAML. :deprecated: * Marks modules that will be removed in future releases. See also :ref:`module_lifecycle`. :options: * Options are often called `parameters` or `arguments`. Because the documentation field is called `options`, we will use that term. * If the module has no options (for example, it's a ``_facts`` module), all you need is one line: ``options: {}``. * If your module has options (in other words, accepts arguments), each option should be documented thoroughly. For each module option, include: :option-name: * Declarative operation (not CRUD), to focus on the final state, for example `online:`, rather than `is_online:`. * The name of the option should be consistent with the rest of the module, as well as other modules in the same category. * When in doubt, look for other modules to find option names that are used for the same purpose, we like to offer consistency to our users. :description: * Detailed explanation of what this option does. It should be written in full sentences. * The first entry is a description of the option itself; subsequent entries detail its use, dependencies, or format of possible values. * Should not list the possible values (that's what ``choices:`` is for, though it should explain what the values do if they aren't obvious). * If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)." * Mutually exclusive options must be documented as the final sentence on each of the options. :required: * Only needed if ``true``. * If missing, we assume the option is not required. :default: * If ``required`` is false/missing, ``default`` may be specified (assumed 'null' if missing). * Ensure that the default value in the docs matches the default value in the code. * The default field must not be listed as part of the description, unless it requires additional information or conditions. * If the option is a boolean value, you can use any of the boolean values recognized by Ansible: (such as true/false or yes/no). Choose the one that reads better in the context of the option. :choices: * List of option values. * Should be absent if empty. :type: * Specifies the data type that option accepts, must match the ``argspec``. * If an argument is ``type='bool'``, this field should be set to ``type: bool`` and no ``choices`` should be specified. * If an argument is ``type='list'``, ``elements`` should be specified. :elements: * Specifies the data type for list elements in case ``type='list'``. :aliases: * List of optional name aliases. * Generally not needed. 
:version_added: * Only needed if this option was extended after initial Ansible release, in other words, this is greater than the top level `version_added` field. * This is a string, and not a float, for example, ``version_added: '2.3'``. * In collections, this must be the collection version the option was added to, not the Ansible version. For example, ``version_added: 1.0.0``. :suboptions: * If this option takes a dict or list of dicts, you can define the structure here. * See :ref:`ansible_collections.azure.azcollection.azure_rm_securitygroup_module`, :ref:`ansible_collections.azure.azcollection.azure_rm_azurefirewall_module`, and :ref:`ansible_collections.openstack.cloud.baremetal_node_action_module` for examples. :requirements: * List of requirements (if applicable). * Include minimum versions. :seealso: * A list of references to other modules, documentation or Internet resources * In Ansible 2.10 and later, references to modules must use the FQCN or ``ansible.builtin`` for modules in ``ansible-core``. * A reference can be one of the following formats: .. code-block:: yaml+jinja seealso: # Reference by module name - module: cisco.aci.aci_tenant # Reference by module name, including description - module: cisco.aci.aci_tenant description: ACI module to create tenants on a Cisco ACI fabric. # Reference by rST documentation anchor - ref: aci_guide description: Detailed information on how to manage your ACI infrastructure using Ansible. # Reference by Internet resource - name: APIC Management Information Model reference description: Complete reference of the APIC object model. link: https://developer.cisco.com/docs/apic-mim-ref/ :notes: * Details of any important information that doesn't fit in one of the above sections. * For example, whether ``check_mode`` is or is not supported. Linking and other format macros within module documentation ----------------------------------------------------------- You can link from your module documentation to other module docs, other resources on docs.ansible.com, and resources elsewhere on the internet with the help of some pre-defined macros. The correct formats for these macros are: * ``L()`` for links with a heading. For example: ``See L(Ansible Automation Platform,https://www.ansible.com/products/automation-platform).`` As of Ansible 2.10, do not use ``L()`` for relative links between Ansible documentation and collection documentation. * ``U()`` for URLs. For example: ``See U(https://www.ansible.com/products/automation-platform) for an overview.`` * ``R()`` for cross-references with a heading (added in Ansible 2.10). For example: ``See R(Cisco IOS Platform Guide,ios_platform_options)``. Use the RST anchor for the cross-reference. See :ref:`adding_anchors_rst` for details. * ``M()`` for module names. For example: ``See also M(ansible.builtin.yum) or M(community.general.apt_rpm)``. There are also some macros which do not create links but we use them to display certain types of content in a uniform way: * ``I()`` for option names. For example: ``Required if I(state=present).`` This is italicized in the documentation. * ``C()`` for files, option values, and inline code. For example: ``If not set the environment variable C(ACME_PASSWORD) will be used.`` or ``Use C(var | foo.bar.my_filter) to transform C(var) into the required format.`` This displays with a mono-space font in the documentation. * ``B()`` currently has no standardized usage. It is displayed in boldface in the documentation. 
* ``HORIZONTALLINE`` is used sparingly as a separator in long descriptions. It becomes a horizontal rule (the ``<hr>`` html tag) in the documentation.

.. note::

   For links between modules and documentation within a collection, you can use any of the options above. For links outside of your collection, use ``R()`` if available. Otherwise, use ``U()`` or ``L()`` with full URLs (not relative links). For modules, use ``M()`` with the FQCN or ``ansible.builtin`` as shown in the example. If you are creating your own documentation site, you will need to use the `intersphinx extension <https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html>`_ to convert ``R()`` and ``M()`` to the correct links.

.. note::

   - To refer to a group of modules in a collection, use ``R()``. When a collection is not the right granularity, use ``C(..)``:

     - ``Refer to the R(kubernetes.core collection, plugins_in_kubernetes.core) for information on managing kubernetes clusters.``
     - ``The C(win_*) modules (spread across several collections) allow you to manage various aspects of windows hosts.``

.. note::

   Because it stands out better, use ``seealso`` for general references over the use of notes or adding links to the description.

.. _module_docs_fragments:

Documentation fragments
-----------------------

If you are writing multiple related modules, they may share common documentation, such as authentication details, file mode settings, ``notes:`` or ``seealso:`` entries. Rather than duplicate that information in each module's ``DOCUMENTATION`` block, you can save it once as a doc_fragment plugin and use it in each module's documentation.

In Ansible, shared documentation fragments are contained in a ``ModuleDocFragment`` class in `lib/ansible/plugins/doc_fragments/ <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/doc_fragments>`_ or the equivalent directory in a collection. To include a documentation fragment, add ``extends_documentation_fragment: FRAGMENT_NAME`` in your module documentation. Use the fully qualified collection name for the FRAGMENT_NAME (for example, ``kubernetes.core.k8s_auth_options``).

Modules should only use items from a doc fragment if the module will implement all of the interface documented there in a manner that behaves the same as the existing modules which import that fragment. The goal is that items imported from the doc fragment will behave identically when used in another module that imports the doc fragment.

By default, only the ``DOCUMENTATION`` property from a doc fragment is inserted into the module documentation. It is possible to define additional properties in the doc fragment in order to import only certain parts of a doc fragment or mix and match as appropriate. If a property is defined in both the doc fragment and the module, the module value overrides the doc fragment.

Here is an example doc fragment named ``example_fragment.py``:

.. code-block:: python

    class ModuleDocFragment(object):
        # Standard documentation
        DOCUMENTATION = r'''
        options:
          # options here
        '''

        # Additional section
        OTHER = r'''
        options:
          # other options here
        '''

To insert the contents of ``OTHER`` in a module:

.. code-block:: yaml+jinja

    extends_documentation_fragment: example_fragment.other

Or use both:

.. code-block:: yaml+jinja

    extends_documentation_fragment:
      - example_fragment
      - example_fragment.other

.. note::

   Prior to Ansible 2.8, documentation fragments were kept in ``lib/ansible/utils/module_docs_fragments``.

.. versionadded:: 2.8

Since Ansible 2.8, you can have user-supplied doc_fragments by using a ``doc_fragments`` directory adjacent to play or role, just like any other plugin.

For example, all AWS modules should include:

.. code-block:: yaml+jinja

    extends_documentation_fragment:
      - aws
      - ec2

:ref:`docfragments_collections` describes how to incorporate documentation fragments in a collection.

.. _examples_block:

EXAMPLES block
==============

After the shebang, the UTF-8 coding, the copyright line, the license section, and the ``DOCUMENTATION`` block comes the ``EXAMPLES`` block. Here you show users how your module works with real-world examples in multi-line plain-text YAML format. The best examples are ready for the user to copy and paste into a playbook. Review and update your examples with every change to your module.

Per playbook best practices, each example should include a ``name:`` line::

    EXAMPLES = r'''
    - name: Ensure foo is installed
      namespace.collection.modulename:
        name: foo
        state: present
    '''

The ``name:`` line should be capitalized and should not include a trailing dot.

Use a fully qualified collection name (FQCN) as a part of the module's name, as in the example above. For modules in ``ansible-core``, use the ``ansible.builtin.`` identifier, for example ``ansible.builtin.debug``.

If your examples use boolean options, use yes/no values. Since the documentation generates boolean values as yes/no, having the examples use these values as well makes the module documentation more consistent.

If your module returns facts that are often needed, an example of how to use them can be helpful.

.. _return_block:

RETURN block
============

After the shebang, the UTF-8 coding, the copyright line, the license section, and the ``DOCUMENTATION`` and ``EXAMPLES`` blocks comes the ``RETURN`` block. This section documents the information the module returns for use by other modules.

If your module doesn't return anything (apart from the standard returns), this section of your module should read ``RETURN = r''' # '''``. Otherwise, for each value returned, provide the following fields. All fields are required unless specified otherwise.

:return name:
  Name of the returned field.
:description:
  Detailed description of what this value represents. Capitalized and with trailing dot.
:returned:
  When this value is returned, such as ``always``, ``changed`` or ``success``. This is a string and can contain any human-readable content.
:type:
  Data type.
:elements:
  If ``type='list'``, specifies the data type of the list's elements.
:sample:
  One or more examples.
:version_added:
  Only needed if this return was extended after initial Ansible release, in other words, this is greater than the top-level ``version_added`` field. This is a string, and not a float, for example, ``version_added: '2.3'``.
:contains:
  Optional. To describe nested return values, set ``type: dict``, or ``type: list``/``elements: dict``, or if you really have to, ``type: complex``, and repeat the elements above for each sub-field.

Here are two example ``RETURN`` sections, one with three simple fields and one with a complex nested field::

    RETURN = r'''
    dest:
        description: Destination file/path.
        returned: success
        type: str
        sample: /path/to/file.txt
    src:
        description: Source file used for the copy on the target machine.
        returned: changed
        type: str
        sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
    md5sum:
        description: MD5 checksum of the file after running copy.
        returned: when supported
        type: str
        sample: 2a5aeecc61dc98c4d780b14b330e3282
    '''

    RETURN = r'''
    packages:
        description: Information about package requirements.
        returned: success
        type: dict
        contains:
            missing:
                description: Packages that are missing from the system.
                returned: success
                type: list
                elements: str
                sample:
                    - libmysqlclient-dev
                    - libxml2-dev
            badversion:
                description: Packages that are installed but at bad versions.
                returned: success
                type: list
                elements: dict
                sample:
                    - package: libxml2-dev
                      version: 2.9.4+dfsg1-2
                      constraint: ">= 3.0"
    '''

.. _python_imports:

Python imports
==============

After the shebang, the UTF-8 coding, the copyright line, the license, and the sections for ``DOCUMENTATION``, ``EXAMPLES``, and ``RETURN``, you can finally add the python imports. All modules must use Python imports in the form:

.. code-block:: python

    from ansible.module_utils.basic import AnsibleModule

The use of "wildcard" imports such as ``from ansible.module_utils.basic import *`` is no longer allowed.

.. _dev_testing_module_documentation:

Testing module documentation
============================

To test Ansible documentation locally, please follow the :ref:`testing instructions <testing_module_documentation>`.
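To tie the sections above together (shebang, UTF-8 coding, copyright, license, ``DOCUMENTATION``, ``EXAMPLES``, ``RETURN``, then the imports), here is a minimal, self-contained module skeleton. It is an illustrative sketch only: the module name ``hello_example`` and its single option are invented for this example and do not belong to any real collection.

.. code-block:: python

    #!/usr/bin/python
    # -*- coding: utf-8 -*-

    # Copyright: (c) 2021, Your Name <[email protected]>
    # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

    from __future__ import (absolute_import, division, print_function)
    __metaclass__ = type

    DOCUMENTATION = r'''
    module: hello_example
    short_description: Minimal illustration of the module file layout.
    description: Returns a greeting; it exists only to show the ordering of the blocks.
    options:
      name:
        description: Who to greet.
        type: str
        default: world
    author:
      - Your Name (@yourhandle)
    '''

    EXAMPLES = r'''
    - name: Greet the world
      namespace.collection.hello_example:
        name: world
    '''

    RETURN = r'''
    greeting:
        description: The rendered greeting.
        returned: always
        type: str
        sample: Hello, world!
    '''

    from ansible.module_utils.basic import AnsibleModule


    def main():
        # Declare the (illustrative) argument spec, then return a result.
        module = AnsibleModule(argument_spec=dict(name=dict(type='str', default='world')))
        module.exit_json(changed=False, greeting='Hello, %s!' % module.params['name'])


    if __name__ == '__main__':
        main()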
closed
ansible/ansible
https://github.com/ansible/ansible
74,578
ansible_pkg_mgr fact always returns atomic_container if rpm-ostree is present (breaks package module on some systems)
### Summary

For some modules, notably the `package` module, Ansible relies on the `ansible_pkg_mgr` fact to decide which package manager it should be using. On systems where `rpm-ostree` is present at `/usr/bin/rpm-ostree`, Ansible assumes that the system's dominant package manager is `rpm-ostree` and returns `atomic_container` as the entry for the `ansible_pkg_mgr` fact.

This isn't always a safe assumption to make, however. On systems with [OSBuild](https://www.osbuild.org/) installed, particularly OSBuild Composer, there's a dependency tree that leads to `rpm-ostree` being pulled in. You can see this by installing the RHEL 8 / CentOS 8 / Rocky 8 package `osbuild-composer`. It has a dependency on `osbuild-ostree`, which in turn depends on `rpm-ostree`. So I have quite a few (standard, `dnf`-based) RHEL and Rocky systems which do have `/usr/bin/rpm-ostree` present for legitimate reasons, but not as the dominant package manager.

This is reproducible in 2.9.20 and 2.10.9. I haven't tried other versions, but the problem is still present in the code in the devel branch.

### Issue Type

Bug Report

### Component Name

pkg_mgr.py

### Ansible Version

```console
╰─ ansible --version
ansible 2.10.9
  config file = None
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Apr 12 2021, 07:42:28) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```

### Configuration

```console
╰─ ansible-config dump --only-changed
ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True
```

### OS / Environment

RHEL 8, Rocky 8

### Steps to Reproduce

```sh
╰─ ansible -m setup localhost | grep pkg
    "ansible_pkg_mgr": "dnf",
```

```sh
╰─ sudo dnf install osbuild-composer -y
[...] (or any package that depends on rpm-ostree) [...]
```

```sh
╰─ ansible -m setup localhost | grep pkg
    "ansible_pkg_mgr": "atomic_container",
```

### Expected Results

The ansible_pkg_mgr fact should not change just because rpm-ostree is installed. It should only change if the host's actual package manager is ostree.

### Actual Results

```console
The ansible_pkg_mgr fact changes just because rpm-ostree is installed.
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
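To make the dependency-chain claim above easy to verify on an affected host, here is a small ad-hoc Python sketch (my own addition, not part of the original report; it assumes an RPM-based host with `rpm` on the PATH). It asks RPM which installed package provides the `rpm-ostree` binary and whether the host is actually ostree-booted:

```python
import os
import subprocess

# On an OSBuild host this typically reports the package pulled in through
# osbuild-ostree, even though the host is not ostree-managed.
result = subprocess.run(
    ["rpm", "-q", "--whatprovides", "/usr/bin/rpm-ostree"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True,
)
print("provided by:", result.stdout.strip())
print("ostree-booted:", os.path.exists("/run/ostree-booted"))
```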
https://github.com/ansible/ansible/issues/74578
https://github.com/ansible/ansible/pull/74579
d2d1f01f9d067e82b6af0da156a24627d04b396c
724a0c867e9c44199f2b50d888f1291aa4d7c11c
2021-05-05T16:52:29Z
python
2021-05-19T20:15:26Z
changelogs/fragments/74578-fix-ostree-detection.yml
lib/ansible/module_utils/facts/system/pkg_mgr.py
# Collect facts related to the system package manager

# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import subprocess

from ansible.module_utils.facts.collector import BaseFactCollector

# A list of dicts. If there is a platform with more than one
# package manager, put the preferred one last. If there is an
# ansible module, use that as the value for the 'name' key.
PKG_MGRS = [{'path': '/usr/bin/yum', 'name': 'yum'},
            {'path': '/usr/bin/dnf', 'name': 'dnf'},
            {'path': '/usr/bin/apt-get', 'name': 'apt'},
            {'path': '/usr/bin/zypper', 'name': 'zypper'},
            {'path': '/usr/sbin/urpmi', 'name': 'urpmi'},
            {'path': '/usr/bin/pacman', 'name': 'pacman'},
            {'path': '/bin/opkg', 'name': 'opkg'},
            {'path': '/usr/pkg/bin/pkgin', 'name': 'pkgin'},
            {'path': '/opt/local/bin/pkgin', 'name': 'pkgin'},
            {'path': '/opt/tools/bin/pkgin', 'name': 'pkgin'},
            {'path': '/opt/local/bin/port', 'name': 'macports'},
            {'path': '/usr/local/bin/brew', 'name': 'homebrew'},
            {'path': '/opt/homebrew/bin/brew', 'name': 'homebrew'},
            {'path': '/sbin/apk', 'name': 'apk'},
            {'path': '/usr/sbin/pkg', 'name': 'pkgng'},
            {'path': '/usr/sbin/swlist', 'name': 'swdepot'},
            {'path': '/usr/bin/emerge', 'name': 'portage'},
            {'path': '/usr/sbin/pkgadd', 'name': 'svr4pkg'},
            {'path': '/usr/bin/pkg', 'name': 'pkg5'},
            {'path': '/usr/bin/xbps-install', 'name': 'xbps'},
            {'path': '/usr/local/sbin/pkg', 'name': 'pkgng'},
            {'path': '/usr/bin/swupd', 'name': 'swupd'},
            {'path': '/usr/sbin/sorcery', 'name': 'sorcery'},
            {'path': '/usr/bin/rpm-ostree', 'name': 'atomic_container'},
            {'path': '/usr/bin/installp', 'name': 'installp'},
            {'path': '/QOpenSys/pkgs/bin/yum', 'name': 'yum'},
            ]


class OpenBSDPkgMgrFactCollector(BaseFactCollector):
    name = 'pkg_mgr'
    _fact_ids = set()
    _platform = 'OpenBSD'

    def collect(self, module=None, collected_facts=None):
        facts_dict = {}

        facts_dict['pkg_mgr'] = 'openbsd_pkg'
        return facts_dict


# the fact ends up being 'pkg_mgr' so stick with that naming/spelling
class PkgMgrFactCollector(BaseFactCollector):
    name = 'pkg_mgr'
    _fact_ids = set()
    _platform = 'Generic'
    required_facts = set(['distribution'])

    def _check_rh_versions(self, pkg_mgr_name, collected_facts):
        if os.path.exists('/run/ostree-booted'):
            return "atomic_container"

        if collected_facts['ansible_distribution'] == 'Fedora':
            try:
                if int(collected_facts['ansible_distribution_major_version']) < 23:
                    for yum in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == 'yum']:
                        if os.path.exists(yum['path']):
                            pkg_mgr_name = 'yum'
                            break
                else:
                    for dnf in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == 'dnf']:
                        if os.path.exists(dnf['path']):
                            pkg_mgr_name = 'dnf'
                            break
            except ValueError:
                # If there's some new magical Fedora version in the future,
                # just default to dnf
                pkg_mgr_name = 'dnf'
        elif collected_facts['ansible_distribution'] == 'Amazon':
            pkg_mgr_name = 'yum'
        else:
            # If it's not one of the above and it's Red Hat family of distros, assume
            # RHEL or a clone. For versions of RHEL < 8 that Ansible supports, the
            # vendor supported official package manager is 'yum' and in RHEL 8+
            # (as far as we know at the time of this writing) it is 'dnf'.
            # If anyone wants to force a non-official package manager then they
            # can define a provider to either the package or yum action plugins.
            if int(collected_facts['ansible_distribution_major_version']) < 8:
                pkg_mgr_name = 'yum'
            else:
                pkg_mgr_name = 'dnf'
        return pkg_mgr_name

    def _check_apt_flavor(self, pkg_mgr_name):
        # Check if '/usr/bin/apt' is APT-RPM or an ordinary (dpkg-based) APT.
        # There's rpm package on Debian, so checking if /usr/bin/rpm exists
        # is not enough. Instead ask RPM if /usr/bin/apt-get belongs to some
        # RPM package.
        rpm_query = '/usr/bin/rpm -q --whatprovides /usr/bin/apt-get'.split()
        if os.path.exists('/usr/bin/rpm'):
            with open(os.devnull, 'w') as null:
                try:
                    subprocess.check_call(rpm_query, stdout=null, stderr=null)
                    pkg_mgr_name = 'apt_rpm'
                except subprocess.CalledProcessError:
                    # No apt-get in RPM database. Looks like Debian/Ubuntu
                    # with rpm package installed
                    pkg_mgr_name = 'apt'
        return pkg_mgr_name

    def collect(self, module=None, collected_facts=None):
        facts_dict = {}
        collected_facts = collected_facts or {}

        pkg_mgr_name = 'unknown'
        for pkg in PKG_MGRS:
            if os.path.exists(pkg['path']):
                pkg_mgr_name = pkg['name']

        # Handle distro family defaults when more than one package manager is
        # installed or available to the distro, the ansible_fact entry should be
        # the default package manager officially supported by the distro.
        if collected_facts['ansible_os_family'] == "RedHat":
            pkg_mgr_name = self._check_rh_versions(pkg_mgr_name, collected_facts)
        elif collected_facts['ansible_os_family'] == 'Debian' and pkg_mgr_name != 'apt':
            # It's possible to install yum, dnf, zypper, rpm, etc inside of
            # Debian. Doing so does not mean the system wants to use them.
            pkg_mgr_name = 'apt'
        elif collected_facts['ansible_os_family'] == 'Altlinux':
            if pkg_mgr_name == 'apt':
                pkg_mgr_name = 'apt_rpm'

        # Check if /usr/bin/apt-get is ordinary (dpkg-based) APT or APT-RPM
        if pkg_mgr_name == 'apt':
            pkg_mgr_name = self._check_apt_flavor(pkg_mgr_name)

        facts_dict['pkg_mgr'] = pkg_mgr_name
        return facts_dict
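The report's expected behavior, reporting `atomic_container` only when the host is genuinely ostree-managed, suggests gating that one `PKG_MGRS` entry on the `/run/ostree-booted` marker rather than on binary presence alone. The sketch below is my own illustration of that idea; it is not the actual change merged in the linked pull request:

```python
import os


def detect_pkg_mgr(pkg_mgrs):
    """Walk a PKG_MGRS-style table, but treat atomic_container specially."""
    pkg_mgr_name = 'unknown'
    for pkg in pkg_mgrs:
        if not os.path.exists(pkg['path']):
            continue
        if pkg['name'] == 'atomic_container' and not os.path.exists('/run/ostree-booted'):
            # rpm-ostree is installed (for example as a dependency of
            # osbuild-composer), but the host is not ostree-booted, so it
            # must not win the detection.
            continue
        pkg_mgr_name = pkg['name']
    return pkg_mgr_name


# Example: on a dnf-based host with rpm-ostree merely installed, this
# keeps reporting 'dnf' instead of 'atomic_container'.
print(detect_pkg_mgr([{'path': '/usr/bin/dnf', 'name': 'dnf'},
                      {'path': '/usr/bin/rpm-ostree', 'name': 'atomic_container'}]))
```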
closed
ansible/ansible
https://github.com/ansible/ansible
74,144
strategy contains deprecated call to be removed in 2.12
##### SUMMARY

strategy contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal:

```
lib/ansible/plugins/strategy/__init__.py:937:16: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```

##### ISSUE TYPE

- Bug Report

##### COMPONENT NAME

```
lib/ansible/plugins/strategy/__init__.py
```

##### ANSIBLE VERSION

```
2.12
```

##### CONFIGURATION

N/A

##### OS / ENVIRONMENT

N/A

##### STEPS TO REPRODUCE

N/A

##### EXPECTED RESULTS

N/A

##### ACTUAL RESULTS

N/A
https://github.com/ansible/ansible/issues/74144
https://github.com/ansible/ansible/pull/74780
71e33d25784ba9ba1c9f338faec64f2b346c937e
bc48eba896e21f1c2827259e5430c2dd890280c9
2021-04-05T20:34:07Z
python
2021-05-20T19:32:52Z
changelogs/fragments/74144-remove-include-vartags.yml
lib/ansible/plugins/strategy/__init__.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import cmd import functools import os import pprint import sys import threading import time from collections import deque from multiprocessing import Lock from jinja2.exceptions import UndefinedError from ansible import constants as C from ansible import context from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable from ansible.executor import action_write_locks from ansible.executor.process.worker import WorkerProcess from ansible.executor.task_result import TaskResult from ansible.executor.task_queue_manager import CallbackSend from ansible.module_utils.six.moves import queue as Queue from ansible.module_utils.six import iteritems, itervalues, string_types from ansible.module_utils._text import to_text from ansible.module_utils.connection import Connection, ConnectionError from ansible.playbook.conditional import Conditional from ansible.playbook.handler import Handler from ansible.playbook.helpers import load_list_of_blocks from ansible.playbook.included_file import IncludedFile from ansible.playbook.task_include import TaskInclude from ansible.plugins import loader as plugin_loader from ansible.template import Templar from ansible.utils.display import Display from ansible.utils.unsafe_proxy import wrap_var from ansible.utils.vars import combine_vars from ansible.vars.clean import strip_internal_keys, module_response_deepcopy display = Display() __all__ = ['StrategyBase'] # This list can be an exact match, or start of string bound # does not accept regex ALWAYS_DELEGATE_FACT_PREFIXES = frozenset(( 'discovered_interpreter_', )) class StrategySentinel: pass _sentinel = StrategySentinel() def post_process_whens(result, task, templar): cond = None if task.changed_when: cond = Conditional(loader=templar._loader) cond.when = task.changed_when result['changed'] = cond.evaluate_conditional(templar, templar.available_variables) if task.failed_when: if cond is None: cond = Conditional(loader=templar._loader) cond.when = task.failed_when failed_when_result = cond.evaluate_conditional(templar, templar.available_variables) result['failed_when_result'] = result['failed'] = failed_when_result def results_thread_main(strategy): while True: try: result = strategy._final_q.get() if isinstance(result, StrategySentinel): break elif isinstance(result, CallbackSend): for arg in result.args: if isinstance(arg, TaskResult): strategy.normalize_task_result(arg) break strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs) elif isinstance(result, TaskResult): strategy.normalize_task_result(result) with strategy._results_lock: # only handlers have the listen attr, so this must be a handler # we split up the results into two queues 
here to make sure # handler and regular result processing don't cross wires if 'listen' in result._task_fields: strategy._handler_results.append(result) else: strategy._results.append(result) else: display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result)) except (IOError, EOFError): break except Queue.Empty: pass def debug_closure(func): """Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger""" @functools.wraps(func) def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False): status_to_stats_map = ( ('is_failed', 'failures'), ('is_unreachable', 'dark'), ('is_changed', 'changed'), ('is_skipped', 'skipped'), ) # We don't know the host yet, copy the previous states, for lookup after we process new results prev_host_states = iterator._host_states.copy() results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers) _processed_results = [] for result in results: task = result._task host = result._host _queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None) task_vars = _queued_task_args['task_vars'] play_context = _queued_task_args['play_context'] # Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state try: prev_host_state = prev_host_states[host.name] except KeyError: prev_host_state = iterator.get_host_state(host) while result.needs_debugger(globally_enabled=self.debugger_active): next_action = NextAction() dbg = Debugger(task, host, task_vars, play_context, result, next_action) dbg.cmdloop() if next_action.result == NextAction.REDO: # rollback host state self._tqm.clear_failed_hosts() iterator._host_states[host.name] = prev_host_state for method, what in status_to_stats_map: if getattr(result, method)(): self._tqm._stats.decrement(what, host.name) self._tqm._stats.decrement('ok', host.name) # redo self._queue_task(host, task, task_vars, play_context) _processed_results.extend(debug_closure(func)(self, iterator, one_pass)) break elif next_action.result == NextAction.CONTINUE: _processed_results.append(result) break elif next_action.result == NextAction.EXIT: # Matches KeyboardInterrupt from bin/ansible sys.exit(99) else: _processed_results.append(result) return _processed_results return inner class StrategyBase: ''' This is the base class for strategy plugins, which contains some common code useful to all strategies like running handlers, cleanup actions, etc. ''' # by default, strategies should support throttling but we allow individual # strategies to disable this and either forego supporting it or managing # the throttling internally (as `free` does) ALLOW_BASE_THROTTLING = True def __init__(self, tqm): self._tqm = tqm self._inventory = tqm.get_inventory() self._workers = tqm._workers self._variable_manager = tqm.get_variable_manager() self._loader = tqm.get_loader() self._final_q = tqm._final_q self._step = context.CLIARGS.get('step', False) self._diff = context.CLIARGS.get('diff', False) # the task cache is a dictionary of tuples of (host.name, task._uuid) # used to find the original task object of in-flight tasks and to store # the task args/vars and play context info used to queue the task. self._queued_task_cache = {} # Backwards compat: self._display isn't really needed, just import the global display and use that. 
self._display = display # internal counters self._pending_results = 0 self._pending_handler_results = 0 self._cur_worker = 0 # this dictionary is used to keep track of hosts that have # outstanding tasks still in queue self._blocked_hosts = dict() # this dictionary is used to keep track of hosts that have # flushed handlers self._flushed_hosts = dict() self._results = deque() self._handler_results = deque() self._results_lock = threading.Condition(threading.Lock()) # create the result processing thread for reading results in the background self._results_thread = threading.Thread(target=results_thread_main, args=(self,)) self._results_thread.daemon = True self._results_thread.start() # holds the list of active (persistent) connections to be shutdown at # play completion self._active_connections = dict() # Caches for get_host calls, to avoid calling excessively # These values should be set at the top of the ``run`` method of each # strategy plugin. Use ``_set_hosts_cache`` to set these values self._hosts_cache = [] self._hosts_cache_all = [] self.debugger_active = C.ENABLE_TASK_DEBUGGER def _set_hosts_cache(self, play, refresh=True): """Responsible for setting _hosts_cache and _hosts_cache_all See comment in ``__init__`` for the purpose of these caches """ if not refresh and all((self._hosts_cache, self._hosts_cache_all)): return if not play.finalized and Templar(None).is_template(play.hosts): _pattern = 'all' else: _pattern = play.hosts or 'all' self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)] self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)] def cleanup(self): # close active persistent connections for sock in itervalues(self._active_connections): try: conn = Connection(sock) conn.reset() except ConnectionError as e: # most likely socket is already closed display.debug("got an error while closing persistent connection: %s" % e) self._final_q.put(_sentinel) self._results_thread.join() def run(self, iterator, play_context, result=0): # execute one more pass through the iterator without peeking, to # make sure that all of the hosts are advanced to their final task. # This should be safe, as everything should be ITERATING_COMPLETE by # this point, though the strategy may not advance the hosts itself. 
for host in self._hosts_cache: if host not in self._tqm._unreachable_hosts: try: iterator.get_next_task_for_host(self._inventory.hosts[host]) except KeyError: iterator.get_next_task_for_host(self._inventory.get_host(host)) # save the failed/unreachable hosts, as the run_handlers() # method will clear that information during its execution failed_hosts = iterator.get_failed_hosts() unreachable_hosts = self._tqm._unreachable_hosts.keys() display.debug("running handlers") handler_result = self.run_handlers(iterator, play_context) if isinstance(handler_result, bool) and not handler_result: result |= self._tqm.RUN_ERROR elif not handler_result: result |= handler_result # now update with the hosts (if any) that failed or were # unreachable during the handler execution phase failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts()) unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys()) # return the appropriate code, depending on the status hosts after the run if not isinstance(result, bool) and result != self._tqm.RUN_OK: return result elif len(unreachable_hosts) > 0: return self._tqm.RUN_UNREACHABLE_HOSTS elif len(failed_hosts) > 0: return self._tqm.RUN_FAILED_HOSTS else: return self._tqm.RUN_OK def get_hosts_remaining(self, play): self._set_hosts_cache(play, refresh=False) ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts) return [host for host in self._hosts_cache if host not in ignore] def get_failed_hosts(self, play): self._set_hosts_cache(play, refresh=False) return [host for host in self._hosts_cache if host in self._tqm._failed_hosts] def add_tqm_variables(self, vars, play): ''' Base class method to add extra variables/information to the list of task vars sent through the executor engine regarding the task queue manager state. ''' vars['ansible_current_hosts'] = self.get_hosts_remaining(play) vars['ansible_failed_hosts'] = self.get_failed_hosts(play) def _queue_task(self, host, task, task_vars, play_context): ''' handles queueing the task up to be sent to a worker ''' display.debug("entering _queue_task() for %s/%s" % (host.name, task.action)) # Add a write lock for tasks. # Maybe this should be added somewhere further up the call stack but # this is the earliest in the code where we have task (1) extracted # into its own variable and (2) there's only a single code path # leading to the module being run. This is called by three # functions: __init__.py::_do_handler_run(), linear.py::run(), and # free.py::run() so we'd have to add to all three to do it there. # The next common higher level is __init__.py::run() and that has # tasks inside of play_iterator so we'd have to extract them to do it # there. if task.action not in action_write_locks.action_write_locks: display.debug('Creating lock for %s' % task.action) action_write_locks.action_write_locks[task.action] = Lock() # create a templar and template things we need later for the queuing process templar = Templar(loader=self._loader, variables=task_vars) try: throttle = int(templar.template(task.throttle)) except Exception as e: raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e) # and then queue the new task try: # Determine the "rewind point" of the worker list. This means we start # iterating over the list of workers until the end of the list is found. # Normally, that is simply the length of the workers list (as determined # by the forks or serial setting), however a task/block/play may "throttle" # that limit down. 
rewind_point = len(self._workers) if throttle > 0 and self.ALLOW_BASE_THROTTLING: if task.run_once: display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name()) else: if throttle <= rewind_point: display.debug("task: %s, throttle: %d" % (task.get_name(), throttle)) rewind_point = throttle queued = False starting_worker = self._cur_worker while True: if self._cur_worker >= rewind_point: self._cur_worker = 0 worker_prc = self._workers[self._cur_worker] if worker_prc is None or not worker_prc.is_alive(): self._queued_task_cache[(host.name, task._uuid)] = { 'host': host, 'task': task, 'task_vars': task_vars, 'play_context': play_context } worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader) self._workers[self._cur_worker] = worker_prc self._tqm.send_callback('v2_runner_on_start', host, task) worker_prc.start() display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers))) queued = True self._cur_worker += 1 if self._cur_worker >= rewind_point: self._cur_worker = 0 if queued: break elif self._cur_worker == starting_worker: time.sleep(0.0001) if isinstance(task, Handler): self._pending_handler_results += 1 else: self._pending_results += 1 except (EOFError, IOError, AssertionError) as e: # most likely an abort display.debug("got an error while queuing: %s" % e) return display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action)) def get_task_hosts(self, iterator, task_host, task): if task.run_once: host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts] else: host_list = [task_host.name] return host_list def get_delegated_hosts(self, result, task): host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None) return [host_name or task.delegate_to] def _set_always_delegated_facts(self, result, task): """Sets host facts for ``delegate_to`` hosts for facts that should always be delegated This operation mutates ``result`` to remove the always delegated facts See ``ALWAYS_DELEGATE_FACT_PREFIXES`` """ if task.delegate_to is None: return facts = result['ansible_facts'] always_keys = set() _add = always_keys.add for fact_key in facts: for always_key in ALWAYS_DELEGATE_FACT_PREFIXES: if fact_key.startswith(always_key): _add(fact_key) if always_keys: _pop = facts.pop always_facts = { 'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys) } host_list = self.get_delegated_hosts(result, task) _set_host_facts = self._variable_manager.set_host_facts for target_host in host_list: _set_host_facts(target_host, always_facts) def normalize_task_result(self, task_result): """Normalize a TaskResult to reference actual Host and Task objects when only given the ``Host.name``, or the ``Task._uuid`` Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns Mutates the original object """ if isinstance(task_result._host, string_types): # If the value is a string, it is ``Host.name`` task_result._host = self._inventory.get_host(to_text(task_result._host)) if isinstance(task_result._task, string_types): # If the value is a string, it is ``Task._uuid`` queue_cache_entry = (task_result._host.name, task_result._task) found_task = self._queued_task_cache.get(queue_cache_entry)['task'] original_task = found_task.copy(exclude_parent=True, exclude_tasks=True) original_task._parent = found_task._parent 
original_task.from_attrs(task_result._task_fields) task_result._task = original_task return task_result @debug_closure def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False): ''' Reads results off the final queue and takes appropriate action based on the result (executing callbacks, updating state, etc.). ''' ret_results = [] handler_templar = Templar(self._loader) def search_handler_blocks_by_name(handler_name, handler_blocks): # iterate in reversed order since last handler loaded with the same name wins for handler_block in reversed(handler_blocks): for handler_task in handler_block.block: if handler_task.name: if not handler_task.cached_name: if handler_templar.is_template(handler_task.name): handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play, task=handler_task, _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all) handler_task.name = handler_templar.template(handler_task.name) handler_task.cached_name = True try: # first we check with the full result of get_name(), which may # include the role name (if the handler is from a role). If that # is not found, we resort to the simple name field, which doesn't # have anything extra added to it. candidates = ( handler_task.name, handler_task.get_name(include_role_fqcn=False), handler_task.get_name(include_role_fqcn=True), ) if handler_name in candidates: return handler_task except (UndefinedError, AnsibleUndefinedVariable): # We skip this handler due to the fact that it may be using # a variable in the name that was conditionally included via # set_fact or some other method, and we don't want to error # out unnecessarily continue return None cur_pass = 0 while True: try: self._results_lock.acquire() if do_handlers: task_result = self._handler_results.popleft() else: task_result = self._results.popleft() except IndexError: break finally: self._results_lock.release() original_host = task_result._host original_task = task_result._task # all host status messages contain 2 entries: (msg, task_result) role_ran = False if task_result.is_failed(): role_ran = True ignore_errors = original_task.ignore_errors if not ignore_errors: display.debug("marking %s as failed" % original_host.name) if original_task.run_once: # if we're using run_once, we have to fail every host here for h in self._inventory.get_hosts(iterator._play.hosts): if h.name not in self._tqm._unreachable_hosts: iterator.mark_host_failed(h) else: iterator.mark_host_failed(original_host) # grab the current state and if we're iterating on the rescue portion # of a block then we save the failed task in a special var for use # within the rescue/always state, _ = iterator.get_next_task_for_host(original_host, peek=True) if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE: self._tqm._failed_hosts[original_host.name] = True # Use of get_active_state() here helps detect proper state if, say, we are in a rescue # block from an included file (include_tasks). In a non-included rescue case, a rescue # that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also # check the current state block tree to see if any blocks are rescuing. 
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or iterator.is_any_block_rescuing(state)): self._tqm._stats.increment('rescued', original_host.name) self._variable_manager.set_nonpersistent_facts( original_host.name, dict( ansible_failed_task=wrap_var(original_task.serialize()), ansible_failed_result=task_result._result, ), ) else: self._tqm._stats.increment('failures', original_host.name) else: self._tqm._stats.increment('ok', original_host.name) self._tqm._stats.increment('ignored', original_host.name) if 'changed' in task_result._result and task_result._result['changed']: self._tqm._stats.increment('changed', original_host.name) self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors) elif task_result.is_unreachable(): ignore_unreachable = original_task.ignore_unreachable if not ignore_unreachable: self._tqm._unreachable_hosts[original_host.name] = True iterator._play._removed_hosts.append(original_host.name) else: self._tqm._stats.increment('skipped', original_host.name) task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name self._tqm._stats.increment('dark', original_host.name) self._tqm.send_callback('v2_runner_on_unreachable', task_result) elif task_result.is_skipped(): self._tqm._stats.increment('skipped', original_host.name) self._tqm.send_callback('v2_runner_on_skipped', task_result) else: role_ran = True if original_task.loop: # this task had a loop, and has more than one result, so # loop over all of them instead of a single result result_items = task_result._result.get('results', []) else: result_items = [task_result._result] for result_item in result_items: if '_ansible_notify' in result_item: if task_result.is_changed(): # The shared dictionary for notified handlers is a proxy, which # does not detect when sub-objects within the proxy are modified. # So, per the docs, we reassign the list so the proxy picks up and # notifies all other threads for handler_name in result_item['_ansible_notify']: found = False # Find the handler using the above helper. 
First we look up the # dependency chain of the current task (if it's from a role), otherwise # we just look through the list of handlers in the current play/all # roles and use the first one that matches the notify name target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers) if target_handler is not None: found = True if target_handler.notify_host(original_host): self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host) for listening_handler_block in iterator._play.handlers: for listening_handler in listening_handler_block.block: listeners = getattr(listening_handler, 'listen', []) or [] if not listeners: continue listeners = listening_handler.get_validated_value( 'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar ) if handler_name not in listeners: continue else: found = True if listening_handler.notify_host(original_host): self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host) # and if none were found, then we raise an error if not found: msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening " "handlers list" % handler_name) if C.ERROR_ON_MISSING_HANDLER: raise AnsibleError(msg) else: display.warning(msg) if 'add_host' in result_item: # this task added a new host (add_host module) new_host_info = result_item.get('add_host', dict()) self._add_host(new_host_info, result_item) post_process_whens(result_item, original_task, handler_templar) elif 'add_group' in result_item: # this task added a new group (group_by module) self._add_group(original_host, result_item) post_process_whens(result_item, original_task, handler_templar) if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG: # if delegated fact and we are delegating facts, we need to change target host for them if original_task.delegate_to is not None and original_task.delegate_facts: host_list = self.get_delegated_hosts(result_item, original_task) else: # Set facts that should always be on the delegated hosts self._set_always_delegated_facts(result_item, original_task) host_list = self.get_task_hosts(iterator, original_host, original_task) if original_task.action in C._ACTION_INCLUDE_VARS: for (var_name, var_value) in iteritems(result_item['ansible_facts']): # find the host we're actually referring too here, which may # be a host that is not really in inventory at all for target_host in host_list: self._variable_manager.set_host_variable(target_host, var_name, var_value) else: cacheable = result_item.pop('_ansible_facts_cacheable', False) for target_host in host_list: # so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact' # to avoid issues with precedence and confusion with set_fact normal operation, # we set BOTH fact and nonpersistent_facts (aka hostvar) # when fact is retrieved from cache in subsequent operations it will have the lower precedence, # but for playbook setting it the 'higher' precedence is kept is_set_fact = original_task.action in C._ACTION_SET_FACT if not is_set_fact or cacheable: self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy()) if is_set_fact: self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy()) if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']: if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']: host_list = 
self.get_task_hosts(iterator, original_host, original_task) else: host_list = [None] data = result_item['ansible_stats']['data'] aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate'] for myhost in host_list: for k in data.keys(): if aggregate: self._tqm._stats.update_custom_stats(k, data[k], myhost) else: self._tqm._stats.set_custom_stats(k, data[k], myhost) if 'diff' in task_result._result: if self._diff or getattr(original_task, 'diff', False): self._tqm.send_callback('v2_on_file_diff', task_result) if not isinstance(original_task, TaskInclude): self._tqm._stats.increment('ok', original_host.name) if 'changed' in task_result._result and task_result._result['changed']: self._tqm._stats.increment('changed', original_host.name) # finally, send the ok for this task self._tqm.send_callback('v2_runner_on_ok', task_result) # register final results if original_task.register: host_list = self.get_task_hosts(iterator, original_host, original_task) clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result)) if 'invocation' in clean_copy: del clean_copy['invocation'] for target_host in host_list: self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy}) if do_handlers: self._pending_handler_results -= 1 else: self._pending_results -= 1 if original_host.name in self._blocked_hosts: del self._blocked_hosts[original_host.name] # If this is a role task, mark the parent role as being run (if # the task was ok or failed, but not skipped or unreachable) if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:? # lookup the role in the ROLE_CACHE to make sure we're dealing # with the correct object and mark it as executed for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]): if role_obj._uuid == original_task._role._uuid: role_obj._had_task_run[original_host.name] = True ret_results.append(task_result) if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes: break cur_pass += 1 return ret_results def _wait_on_handler_results(self, iterator, handler, notified_hosts): ''' Wait for the handler tasks to complete, using a short sleep between checks to ensure we don't spin lock ''' ret_results = [] handler_results = 0 display.debug("waiting for handler results...") while (self._pending_handler_results > 0 and handler_results < len(notified_hosts) and not self._tqm._terminated): if self._tqm.has_dead_workers(): raise AnsibleError("A worker was found in a dead state") results = self._process_pending_results(iterator, do_handlers=True) ret_results.extend(results) handler_results += len([ r._host for r in results if r._host in notified_hosts and r.task_name == handler.name]) if self._pending_handler_results > 0: time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL) display.debug("no more pending handlers, returning what we have") return ret_results def _wait_on_pending_results(self, iterator): ''' Wait for the shared counter to drop to zero, using a short sleep between checks to ensure we don't spin lock ''' ret_results = [] display.debug("waiting for pending results...") while self._pending_results > 0 and not self._tqm._terminated: if self._tqm.has_dead_workers(): raise AnsibleError("A worker was found in a dead state") results = self._process_pending_results(iterator) ret_results.extend(results) if self._pending_results > 0: time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL) display.debug("no more pending 
results, returning what we have") return ret_results def _add_host(self, host_info, result_item): ''' Helper function to add a new host to inventory based on a task result. ''' changed = False if host_info: host_name = host_info.get('host_name') # Check if host in inventory, add if not if host_name not in self._inventory.hosts: self._inventory.add_host(host_name, 'all') self._hosts_cache_all.append(host_name) changed = True new_host = self._inventory.hosts.get(host_name) # Set/update the vars for this host new_host_vars = new_host.get_vars() new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict())) if new_host_vars != new_host_combined_vars: new_host.vars = new_host_combined_vars changed = True new_groups = host_info.get('groups', []) for group_name in new_groups: if group_name not in self._inventory.groups: group_name = self._inventory.add_group(group_name) changed = True new_group = self._inventory.groups[group_name] if new_group.add_host(self._inventory.hosts[host_name]): changed = True # reconcile inventory, ensures inventory rules are followed if changed: self._inventory.reconcile_inventory() result_item['changed'] = changed def _add_group(self, host, result_item): ''' Helper function to add a group (if it does not exist), and to assign the specified host to that group. ''' changed = False # the host here is from the executor side, which means it was a # serialized/cloned copy and we'll need to look up the proper # host object from the master inventory real_host = self._inventory.hosts.get(host.name) if real_host is None: if host.name == self._inventory.localhost.name: real_host = self._inventory.localhost else: raise AnsibleError('%s cannot be matched in inventory' % host.name) group_name = result_item.get('add_group') parent_group_names = result_item.get('parent_groups', []) if group_name not in self._inventory.groups: group_name = self._inventory.add_group(group_name) for name in parent_group_names: if name not in self._inventory.groups: # create the new group and add it to inventory self._inventory.add_group(name) changed = True group = self._inventory.groups[group_name] for parent_group_name in parent_group_names: parent_group = self._inventory.groups[parent_group_name] new = parent_group.add_child_group(group) if new and not changed: changed = True if real_host not in group.get_hosts(): changed = group.add_host(real_host) if group not in real_host.get_groups(): changed = real_host.add_group(group) if changed: self._inventory.reconcile_inventory() result_item['changed'] = changed def _copy_included_file(self, included_file): ''' A proven safe and performant way to create a copy of an included file ''' ti_copy = included_file._task.copy(exclude_parent=True) ti_copy._parent = included_file._task._parent temp_vars = ti_copy.vars.copy() temp_vars.update(included_file._vars) ti_copy.vars = temp_vars return ti_copy def _load_included_file(self, included_file, iterator, is_handler=False): ''' Loads an included YAML file of tasks, applying the optional set of variables. ''' display.debug("loading included file: %s" % included_file._filename) try: data = self._loader.load_from_file(included_file._filename) if data is None: return [] elif not isinstance(data, list): raise AnsibleError("included task files must contain a list of tasks") ti_copy = self._copy_included_file(included_file) # pop tags out of the include args, if they were specified there, and assign # them to the include. 
If the include already had tags specified, we raise an # error so that users know not to specify them both ways tags = included_file._task.vars.pop('tags', []) if isinstance(tags, string_types): tags = tags.split(',') if len(tags) > 0: if len(included_file._task.tags) > 0: raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). " "Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement", obj=included_file._task._ds) display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option", version='2.12', collection_name='ansible.builtin') included_file._task.tags = tags block_list = load_list_of_blocks( data, play=iterator._play, parent_block=ti_copy.build_parent_block(), role=included_file._task._role, use_handlers=is_handler, loader=self._loader, variable_manager=self._variable_manager, ) # since we skip incrementing the stats when the task result is # first processed, we do so now for each host in the list for host in included_file._hosts: self._tqm._stats.increment('ok', host.name) except AnsibleError as e: if isinstance(e, AnsibleFileNotFound): reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name) else: reason = to_text(e) # mark all of the hosts including this file as failed, send callbacks, # and increment the stats for this host for host in included_file._hosts: tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason)) iterator.mark_host_failed(host) self._tqm._failed_hosts[host.name] = True self._tqm._stats.increment('failures', host.name) self._tqm.send_callback('v2_runner_on_failed', tr) return [] # finally, send the callback and return the list of blocks loaded self._tqm.send_callback('v2_playbook_on_include', included_file) display.debug("done processing included file") return block_list def run_handlers(self, iterator, play_context): ''' Runs handlers on those hosts which have been notified. ''' result = self._tqm.RUN_OK for handler_block in iterator._play.handlers: # FIXME: handlers need to support the rescue/always portions of blocks too, # but this may take some work in the iterator and gets tricky when # we consider the ability of meta tasks to flush handlers for handler in handler_block.block: if handler.notified_hosts: result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context) if not result: break return result def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None): # FIXME: need to use iterator.get_failed_hosts() instead? 
# if not len(self.get_hosts_remaining(iterator._play)): # self._tqm.send_callback('v2_playbook_on_no_hosts_remaining') # result = False # break if notified_hosts is None: notified_hosts = handler.notified_hosts[:] # strategy plugins that filter hosts need access to the iterator to identify failed hosts failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts) notified_hosts = self._filter_notified_hosts(notified_hosts) notified_hosts += failed_hosts if len(notified_hosts) > 0: self._tqm.send_callback('v2_playbook_on_handler_task_start', handler) bypass_host_loop = False try: action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections) if getattr(action, 'BYPASS_HOST_LOOP', False): bypass_host_loop = True except KeyError: # we don't care here, because the action may simply not have a # corresponding action plugin pass host_results = [] for host in notified_hosts: if not iterator.is_failed(host) or iterator._play.force_handlers: task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler, _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all) self.add_tqm_variables(task_vars, play=iterator._play) templar = Templar(loader=self._loader, variables=task_vars) if not handler.cached_name: handler.name = templar.template(handler.name) handler.cached_name = True self._queue_task(host, handler, task_vars, play_context) if templar.template(handler.run_once) or bypass_host_loop: break # collect the results from the handler run host_results = self._wait_on_handler_results(iterator, handler, notified_hosts) included_files = IncludedFile.process_include_results( host_results, iterator=iterator, loader=self._loader, variable_manager=self._variable_manager ) result = True if len(included_files) > 0: for included_file in included_files: try: new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True) # for every task in each block brought in by the include, add the list # of hosts which included the file to the notified_handlers dict for block in new_blocks: iterator._play.handlers.append(block) for task in block.block: task_name = task.get_name() display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name)) task.notified_hosts = included_file._hosts[:] result = self._do_handler_run( handler=task, handler_name=task_name, iterator=iterator, play_context=play_context, notified_hosts=included_file._hosts[:], ) if not result: break except AnsibleError as e: for host in included_file._hosts: iterator.mark_host_failed(host) self._tqm._failed_hosts[host.name] = True display.warning(to_text(e)) continue # remove hosts from notification list handler.notified_hosts = [ h for h in handler.notified_hosts if h not in notified_hosts] display.debug("done running handlers, result is: %s" % result) return result def _filter_notified_failed_hosts(self, iterator, notified_hosts): return [] def _filter_notified_hosts(self, notified_hosts): ''' Filter notified hosts accordingly to strategy ''' # As main strategy is linear, we do not filter hosts # We return a copy to avoid race conditions return notified_hosts[:] def _take_step(self, task, host=None): ret = False msg = u'Perform task: %s ' % task if host: msg += u'on %s ' % host msg += u'(N)o/(y)es/(c)ontinue: ' resp = display.prompt(msg) if resp.lower() in ['y', 'yes']: display.debug("User ran task") ret = True elif resp.lower() in ['c', 'continue']: display.debug("User ran task and canceled step mode") self._step = False 
ret = True else: display.debug("User skipped task") display.banner(msg) return ret def _cond_not_supported_warn(self, task_name): display.warning("%s task does not support when conditional" % task_name) def _execute_meta(self, task, play_context, iterator, target_host): # meta tasks store their args in the _raw_params field of args, # since they do not use k=v pairs, so get that meta_action = task.args.get('_raw_params') def _evaluate_conditional(h): all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task, _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all) templar = Templar(loader=self._loader, variables=all_vars) return task.evaluate_conditional(templar, all_vars) skipped = False msg = '' skip_reason = '%s conditional evaluated to False' % meta_action self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False) # These don't support "when" conditionals if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when: self._cond_not_supported_warn(meta_action) if meta_action == 'noop': msg = "noop" elif meta_action == 'flush_handlers': self._flushed_hosts[target_host] = True self.run_handlers(iterator, play_context) self._flushed_hosts[target_host] = False msg = "ran handlers" elif meta_action == 'refresh_inventory': self._inventory.refresh_inventory() self._set_hosts_cache(iterator._play) msg = "inventory successfully refreshed" elif meta_action == 'clear_facts': if _evaluate_conditional(target_host): for host in self._inventory.get_hosts(iterator._play.hosts): hostname = host.get_name() self._variable_manager.clear_facts(hostname) msg = "facts cleared" else: skipped = True skip_reason += ', not clearing facts and fact cache for %s' % target_host.name elif meta_action == 'clear_host_errors': if _evaluate_conditional(target_host): for host in self._inventory.get_hosts(iterator._play.hosts): self._tqm._failed_hosts.pop(host.name, False) self._tqm._unreachable_hosts.pop(host.name, False) iterator._host_states[host.name].fail_state = iterator.FAILED_NONE msg = "cleared host errors" else: skipped = True skip_reason += ', not clearing host error state for %s' % target_host.name elif meta_action == 'end_play': if _evaluate_conditional(target_host): for host in self._inventory.get_hosts(iterator._play.hosts): if host.name not in self._tqm._unreachable_hosts: iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE msg = "ending play" else: skipped = True skip_reason += ', continuing play' elif meta_action == 'end_host': if _evaluate_conditional(target_host): iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE iterator._play._removed_hosts.append(target_host.name) msg = "ending play for %s" % target_host.name else: skipped = True skip_reason += ", continuing execution for %s" % target_host.name # TODO: Nix msg here? Left for historical reasons, but skip_reason exists now. msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name elif meta_action == 'role_complete': # Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286? # How would this work with allow_duplicates?? 
if task.implicit: if target_host.name in task._role._had_task_run: task._role._completed[target_host.name] = True msg = 'role_complete for %s' % target_host.name elif meta_action == 'reset_connection': all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task, _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all) templar = Templar(loader=self._loader, variables=all_vars) # apply the given task's information to the connection info, # which may override some fields already set by the play or # the options specified on the command line play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar) # fields set from the play/task may be based on variables, so we have to # do the same kind of post validation step on it here before we use it. play_context.post_validate(templar=templar) # now that the play context is finalized, if the remote_addr is not set # default to using the host's address field as the remote address if not play_context.remote_addr: play_context.remote_addr = target_host.address # We also add "magic" variables back into the variables dict to make sure # a certain subset of variables exist. play_context.update_vars(all_vars) if target_host in self._active_connections: connection = Connection(self._active_connections[target_host]) del self._active_connections[target_host] else: connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull) connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars) play_context.set_attributes_from_plugin(connection) if connection: try: connection.reset() msg = 'reset connection' except ConnectionError as e: # most likely socket is already closed display.debug("got an error while closing persistent connection: %s" % e) else: msg = 'no connection, nothing to reset' else: raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds) result = {'msg': msg} if skipped: result['skipped'] = True result['skip_reason'] = skip_reason else: result['changed'] = False display.vv("META: %s" % msg) res = TaskResult(target_host, task, result) if skipped: self._tqm.send_callback('v2_runner_on_skipped', res) return [res] def get_hosts_left(self, iterator): ''' returns list of available hosts for this iterator by filtering out unreachables ''' hosts_left = [] for host in self._hosts_cache: if host not in self._tqm._unreachable_hosts: try: hosts_left.append(self._inventory.hosts[host]) except KeyError: hosts_left.append(self._inventory.get_host(host)) return hosts_left def update_active_connections(self, results): ''' updates the current active persistent connections ''' for r in results: if 'args' in r._task_fields: socket_path = r._task_fields['args'].get('_ansible_socket') if socket_path: if r._host not in self._active_connections: self._active_connections[r._host] = socket_path class NextAction(object): """ The next action after an interpreter's exit. 
""" REDO = 1 CONTINUE = 2 EXIT = 3 def __init__(self, result=EXIT): self.result = result class Debugger(cmd.Cmd): prompt_continuous = '> ' # multiple lines def __init__(self, task, host, task_vars, play_context, result, next_action): # cmd.Cmd is old-style class cmd.Cmd.__init__(self) self.prompt = '[%s] %s (debug)> ' % (host, task) self.intro = None self.scope = {} self.scope['task'] = task self.scope['task_vars'] = task_vars self.scope['host'] = host self.scope['play_context'] = play_context self.scope['result'] = result self.next_action = next_action def cmdloop(self): try: cmd.Cmd.cmdloop(self) except KeyboardInterrupt: pass do_h = cmd.Cmd.do_help def do_EOF(self, args): """Quit""" return self.do_quit(args) def do_quit(self, args): """Quit""" display.display('User interrupted execution') self.next_action.result = NextAction.EXIT return True do_q = do_quit def do_continue(self, args): """Continue to next result""" self.next_action.result = NextAction.CONTINUE return True do_c = do_continue def do_redo(self, args): """Schedule task for re-execution. The re-execution may not be the next result""" self.next_action.result = NextAction.REDO return True do_r = do_redo def do_update_task(self, args): """Recreate the task from ``task._ds``, and template with updated ``task_vars``""" templar = Templar(None, variables=self.scope['task_vars']) task = self.scope['task'] task = task.load_data(task._ds) task.post_validate(templar) self.scope['task'] = task do_u = do_update_task def evaluate(self, args): try: return eval(args, globals(), self.scope) except Exception: t, v = sys.exc_info()[:2] if isinstance(t, str): exc_type_name = t else: exc_type_name = t.__name__ display.display('***%s:%s' % (exc_type_name, repr(v))) raise def do_pprint(self, args): """Pretty Print""" try: result = self.evaluate(args) display.display(pprint.pformat(result)) except Exception: pass do_p = do_pprint def execute(self, args): try: code = compile(args + '\n', '<stdin>', 'single') exec(code, globals(), self.scope) except Exception: t, v = sys.exc_info()[:2] if isinstance(t, str): exc_type_name = t else: exc_type_name = t.__name__ display.display('***%s:%s' % (exc_type_name, repr(v))) raise def default(self, line): try: self.execute(line) except Exception: pass
closed
ansible/ansible
https://github.com/ansible/ansible
74,144
strategy contains deprecated call to be removed in 2.12
##### SUMMARY strategy contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal ``` lib/ansible/plugins/strategy/__init__.py:937:16: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%) ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ``` lib/ansible/plugins/strategy/__init__.py ``` ##### ANSIBLE VERSION ``` 2.12 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE N/A ##### EXPECTED RESULTS N/A ##### ACTUAL RESULTS N/A
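For context, the `ansible-deprecated-version` sanity rule fires on calls of the following shape once the development branch reaches the named version. This is a hedged illustration with an invented message, not the actual call at `lib/ansible/plugins/strategy/__init__.py:937`:

```python
from ansible.utils.display import Display

display = Display()

# The sanity check flags this once devel reaches 2.12, signalling that the
# deprecated code path (and this call) must now be removed.
display.deprecated("this option is being phased out", version='2.12')
```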
https://github.com/ansible/ansible/issues/74144
https://github.com/ansible/ansible/pull/74780
71e33d25784ba9ba1c9f338faec64f2b346c937e
bc48eba896e21f1c2827259e5430c2dd890280c9
2021-04-05T20:34:07Z
python
2021-05-20T19:32:52Z
test/sanity/ignore.txt
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes examples/play.yml shebang examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath examples/scripts/my_test.py shebang # example module but not in a normal module location examples/scripts/my_test_facts.py shebang # example module but not in a normal module location examples/scripts/my_test_info.py shebang # example module but not in a normal module location examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs lib/ansible/cli/console.py pylint:blacklisted-name lib/ansible/cli/scripts/ansible_cli_stub.py pylint:ansible-deprecated-version lib/ansible/cli/scripts/ansible_cli_stub.py shebang lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang lib/ansible/config/base.yml no-unwanted-files lib/ansible/executor/playbook_executor.py pylint:blacklisted-name lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name lib/ansible/keyword_desc.yml no-unwanted-files lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name lib/ansible/module_utils/compat/selinux.py import-2.6!skip # pass/fail depends on presence of libselinux.so lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so lib/ansible/module_utils/compat/selinux.py import-3.5!skip # pass/fail depends on presence of libselinux.so lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled lib/ansible/module_utils/distro/_distro.py no-assert lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath 
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs lib/ansible/module_utils/pycompat24.py no-get-exception lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled lib/ansible/module_utils/six/__init__.py no-basestring lib/ansible/module_utils/six/__init__.py no-dict-iteritems lib/ansible/module_utils/six/__init__.py no-dict-iterkeys lib/ansible/module_utils/six/__init__.py no-dict-itervalues lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable lib/ansible/module_utils/six/__init__.py replace-urlopen lib/ansible/module_utils/urls.py pylint:blacklisted-name lib/ansible/module_utils/urls.py replace-urlopen lib/ansible/modules/apt.py validate-modules:parameter-invalid lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/async_status.py use-argspec-type-path lib/ansible/modules/async_status.py validate-modules!skip lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required lib/ansible/modules/async_wrapper.py use-argspec-type-path lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented lib/ansible/modules/command.py validate-modules:doc-missing-type lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/command.py validate-modules:undocumented-parameter lib/ansible/modules/copy.py pylint:blacklisted-name lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/copy.py validate-modules:undocumented-parameter lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch lib/ansible/modules/dnf.py validate-modules:parameter-invalid lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/file.py validate-modules:undocumented-parameter lib/ansible/modules/find.py use-argspec-type-path # fix needed lib/ansible/modules/git.py pylint:blacklisted-name lib/ansible/modules/git.py use-argspec-type-path lib/ansible/modules/git.py validate-modules:doc-missing-type lib/ansible/modules/git.py validate-modules:doc-required-mismatch lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema lib/ansible/modules/iptables.py pylint:blacklisted-name lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/pip.py pylint:blacklisted-name lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema lib/ansible/modules/replace.py 
validate-modules:nonexistent-parameter-documented lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/service.py validate-modules:use-run-command-not-popen lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented lib/ansible/modules/stat.py validate-modules:parameter-invalid lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/stat.py validate-modules:undocumented-parameter lib/ansible/modules/systemd.py validate-modules:parameter-invalid lib/ansible/modules/systemd.py validate-modules:return-syntax-error lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/uri.py pylint:blacklisted-name lib/ansible/modules/uri.py validate-modules:doc-required-mismatch lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type lib/ansible/modules/user.py validate-modules:use-run-command-not-popen lib/ansible/modules/yum.py pylint:blacklisted-name lib/ansible/modules/yum.py validate-modules:parameter-invalid lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name lib/ansible/playbook/base.py pylint:blacklisted-name lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460 lib/ansible/playbook/helpers.py pylint:ansible-deprecated-version lib/ansible/playbook/helpers.py pylint:blacklisted-name lib/ansible/playbook/play_context.py pylint:ansible-deprecated-version lib/ansible/plugins/action/__init__.py pylint:ansible-deprecated-version lib/ansible/plugins/action/async_status.py pylint:ansible-deprecated-version lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility lib/ansible/plugins/inventory/script.py pylint:ansible-deprecated-version lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name lib/ansible/plugins/strategy/__init__.py pylint:ansible-deprecated-version lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name lib/ansible/vars/hostvars.py pylint:blacklisted-name test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py 
pylint:ansible-bad-import # ignore, required for testing test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level test/integration/targets/gathering_facts/library/bogus_facts shebang test/integration/targets/gathering_facts/library/facts_one shebang test/integration/targets/gathering_facts/library/facts_two shebang test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip test/integration/targets/json_cleanup/library/bad_json shebang test/integration/targets/lookup_csvfile/files/crlf.csv line-endings test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes test/integration/targets/module_precedence/lib_with_extension/a.ini shebang test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes test/integration/targets/template/files/foo.dos.txt line-endings test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes test/integration/targets/unicode/unicode.yml 
no-smart-quotes test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explictly testing smart quotes in the file name to fetch test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath test/support/integration/plugins/module_utils/aws/core.py pylint:property-with-parameters test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate test/support/integration/plugins/module_utils/cloud.py pylint:isinstance-second-argument-not-valid-type test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals test/support/integration/plugins/module_utils/database.py future-import-boilerplate test/support/integration/plugins/module_utils/database.py metaclass-boilerplate test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203 test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py 
pylint:unnecessary-comprehension test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501 test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231 test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip test/units/executor/test_play_iterator.py pylint:blacklisted-name test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF test/units/module_utils/urls/test_Request.py replace-urlopen test/units/module_utils/urls/test_fetch_url.py replace-urlopen test/units/modules/test_apt.py pylint:blacklisted-name test/units/parsing/vault/test_vault.py pylint:blacklisted-name test/units/playbook/role/test_role.py pylint:blacklisted-name test/units/plugins/test_plugins.py pylint:blacklisted-name test/units/template/test_templar.py pylint:blacklisted-name test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
closed
ansible/ansible
https://github.com/ansible/ansible
73,490
document use of Windows' OpenSSH service
##### SUMMARY - Windows 10 now has a built-in OpenSSH Server optional feature ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME `ansible/docs/docsite/rst/user_guide/windows_setup.rst` = <https://docs.ansible.com/ansible/latest/user_guide/windows_setup.html#windows-ssh-setup> ##### ANSIBLE VERSION ``` ansible 2.10.4 config file = None configured module search path = ['/Users/srl295/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/2.10.5/libexec/lib/python3.9/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.9.1 (default, Jan 8 2021, 17:17:43) [Clang 12.0.0 (clang-1200.0.32.28)] ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT Windows 10 ##### ADDITIONAL INFORMATION - https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse
https://github.com/ansible/ansible/issues/73490
https://github.com/ansible/ansible/pull/74765
bc48eba896e21f1c2827259e5430c2dd890280c9
015331518dff60f31a7d8ce24fc315e3ac9e86f8
2021-02-04T21:19:48Z
python
2021-05-21T05:30:39Z
docs/docsite/rst/user_guide/windows_setup.rst
.. _windows_setup: Setting up a Windows Host ========================= This document discusses the setup that is required before Ansible can communicate with a Microsoft Windows host. .. contents:: :local: Host Requirements ````````````````` For Ansible to communicate with a Windows host and use Windows modules, the Windows host must meet these requirements: * Ansible can generally manage Windows versions under current and extended support from Microsoft. Ansible can manage desktop OSs including Windows 7, 8.1, and 10, and server OSs including Windows Server 2008, 2008 R2, 2012, 2012 R2, 2016, and 2019. * Ansible requires PowerShell 3.0 or newer and at least .NET 4.0 to be installed on the Windows host. * A WinRM listener should be created and activated. More details for this can be found below. .. Note:: While these are the base requirements for Ansible connectivity, some Ansible modules have additional requirements, such as a newer OS or PowerShell version. Please consult the module's documentation page to determine whether a host meets those requirements. Upgrading PowerShell and .NET Framework --------------------------------------- Ansible requires PowerShell version 3.0 and .NET Framework 4.0 or newer to function on older operating systems like Server 2008 and Windows 7. The base image does not meet this requirement. You can use the `Upgrade-PowerShell.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Upgrade-PowerShell.ps1>`_ script to update these. This is an example of how to run this script from PowerShell: .. code-block:: powershell [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 $url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Upgrade-PowerShell.ps1" $file = "$env:temp\Upgrade-PowerShell.ps1" $username = "Administrator" $password = "Password" (New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file) Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force # Version can be 3.0, 4.0 or 5.1 &$file -Version 5.1 -Username $username -Password $password -Verbose Once completed, you will need to remove auto logon and set the execution policy back to the default (``Restricted`` for Windows clients, or ``RemoteSigned`` for Windows servers). You can do this with the following PowerShell commands: .. code-block:: powershell # This isn't needed but is a good security practice to complete Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force $reg_winlogon_path = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" Set-ItemProperty -Path $reg_winlogon_path -Name AutoAdminLogon -Value 0 Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultUserName -ErrorAction SilentlyContinue Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultPassword -ErrorAction SilentlyContinue The script works by checking to see what programs need to be installed (such as .NET Framework 4.5.2) and what PowerShell version is required. If a reboot is required and the ``username`` and ``password`` parameters are set, the script will automatically reboot and logon when it comes back up from the reboot. The script will continue until no more actions are required and the PowerShell version matches the target version. If the ``username`` and ``password`` parameters are not set, the script will prompt the user to manually reboot and logon when required. When the user is next logged in, the script will continue where it left off and the process continues until no more actions are required. ..
Note:: If running on Server 2008, then SP2 must be installed. If running on Server 2008 R2 or Windows 7, then SP1 must be installed. .. Note:: Windows Server 2008 can only install PowerShell 3.0; specifying a newer version will result in the script failing. .. Note:: The ``username`` and ``password`` parameters are stored in plain text in the registry. Make sure the cleanup commands are run after the script finishes to ensure no credentials are still stored on the host. WinRM Memory Hotfix ------------------- When running on PowerShell v3.0, there is a bug with the WinRM service that limits the amount of memory available to WinRM. Without this hotfix installed, Ansible will fail to execute certain commands on the Windows host. These hotfixes should be installed as part of the system bootstrapping or imaging process. The script `Install-WMF3Hotfix.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Install-WMF3Hotfix.ps1>`_ can be used to install the hotfix on affected hosts. The following PowerShell command will install the hotfix: .. code-block:: powershell [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 $url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Install-WMF3Hotfix.ps1" $file = "$env:temp\Install-WMF3Hotfix.ps1" (New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file) powershell.exe -ExecutionPolicy ByPass -File $file -Verbose For more details, please refer to the `Hotfix document <https://support.microsoft.com/en-us/help/2842230/out-of-memory-error-on-a-computer-that-has-a-customized-maxmemorypersh>`_ from Microsoft. WinRM Setup ``````````` Once PowerShell has been upgraded to at least version 3.0, the final step is for the WinRM service to be configured so that Ansible can connect to it. There are two main components of the WinRM service that govern how Ansible can interface with the Windows host: the ``listener`` and the ``service`` configuration settings. Details about each component can be read below, but the script `ConfigureRemotingForAnsible.ps1 <https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1>`_ can be used to set up the basics. This script sets up both HTTP and HTTPS listeners with a self-signed certificate and enables the ``Basic`` authentication option on the service. To use this script, run the following in PowerShell: .. code-block:: powershell [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 $url = "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1" $file = "$env:temp\ConfigureRemotingForAnsible.ps1" (New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file) powershell.exe -ExecutionPolicy ByPass -File $file There are different switches and parameters (like ``-EnableCredSSP`` and ``-ForceNewSSLCert``) that can be set alongside this script. The documentation for these options is located at the top of the script itself. .. Note:: The ConfigureRemotingForAnsible.ps1 script is intended for training and development purposes only and should not be used in a production environment, since it enables settings (like ``Basic`` authentication) that can be inherently insecure. WinRM Listener -------------- The WinRM service listens for requests on one or more ports. Each of these ports must have a listener created and configured. To view the current listeners that are running on the WinRM service, run the following command: ..
code-block:: powershell winrm enumerate winrm/config/Listener This will output something like:: Listener Address = * Transport = HTTP Port = 5985 Hostname Enabled = true URLPrefix = wsman CertificateThumbprint ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80:: ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7 Listener Address = * Transport = HTTPS Port = 5986 Hostname = SERVER2016 Enabled = true URLPrefix = wsman CertificateThumbprint = E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80:: ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7 In the example above there are two listeners activated; one is listening on port 5985 over HTTP and the other is listening on port 5986 over HTTPS. Some of the key options that are useful to understand are: * ``Transport``: Whether the listener is run over HTTP or HTTPS, it is recommended to use a listener over HTTPS as the data is encrypted without any further changes required. * ``Port``: The port the listener runs on, by default it is ``5985`` for HTTP and ``5986`` for HTTPS. This port can be changed to whatever is required and corresponds to the host var ``ansible_port``. * ``URLPrefix``: The URL prefix to listen on, by default it is ``wsman``. If this is changed, the host var ``ansible_winrm_path`` must be set to the same value. * ``CertificateThumbprint``: If running over an HTTPS listener, this is the thumbprint of the certificate in the Windows Certificate Store that is used in the connection. To get the details of the certificate itself, run this command with the relevant certificate thumbprint in PowerShell:: $thumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE" Get-ChildItem -Path cert:\LocalMachine\My -Recurse | Where-Object { $_.Thumbprint -eq $thumbprint } | Select-Object * Setup WinRM Listener ++++++++++++++++++++ There are three ways to set up a WinRM listener: * Using ``winrm quickconfig`` for HTTP or ``winrm quickconfig -transport:https`` for HTTPS. This is the easiest option to use when running outside of a domain environment and a simple listener is required. Unlike the other options, this process also has the added benefit of opening up the Firewall for the ports required and starts the WinRM service. * Using Group Policy Objects. This is the best way to create a listener when the host is a member of a domain because the configuration is done automatically without any user input. For more information on group policy objects, see the `Group Policy Objects documentation <https://msdn.microsoft.com/en-us/library/aa374162(v=vs.85).aspx>`_. * Using PowerShell to create the listener with a specific configuration. This can be done by running the following PowerShell commands: .. code-block:: powershell $selector_set = @{ Address = "*" Transport = "HTTPS" } $value_set = @{ CertificateThumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE" } New-WSManInstance -ResourceURI "winrm/config/Listener" -SelectorSet $selector_set -ValueSet $value_set To see the other options with this PowerShell cmdlet, see `New-WSManInstance <https://docs.microsoft.com/en-us/powershell/module/microsoft.wsman.management/new-wsmaninstance?view=powershell-5.1>`_. .. Note:: When creating an HTTPS listener, an existing certificate needs to be created and stored in the ``LocalMachine\My`` certificate store. 
Without a certificate being present in this store, most commands will fail. Delete WinRM Listener +++++++++++++++++++++ To remove a WinRM listener:: # Remove all listeners Remove-Item -Path WSMan:\localhost\Listener\* -Recurse -Force # Only remove listeners that are run over HTTPS Get-ChildItem -Path WSMan:\localhost\Listener | Where-Object { $_.Keys -contains "Transport=HTTPS" } | Remove-Item -Recurse -Force .. Note:: The ``Keys`` object is an array of strings, so it can contain different values. By default it contains a key for ``Transport=`` and ``Address=`` which correspond to the values from winrm enumerate winrm/config/Listeners. WinRM Service Options --------------------- There are a number of options that can be set to control the behavior of the WinRM service component, including authentication options and memory settings. To get an output of the current service configuration options, run the following command: .. code-block:: powershell winrm get winrm/config/Service winrm get winrm/config/Winrs This will output something like:: Service RootSDDL = O:NSG:BAD:P(A;;GA;;;BA)(A;;GR;;;IU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD) MaxConcurrentOperations = 4294967295 MaxConcurrentOperationsPerUser = 1500 EnumerationTimeoutms = 240000 MaxConnections = 300 MaxPacketRetrievalTimeSeconds = 120 AllowUnencrypted = false Auth Basic = true Kerberos = true Negotiate = true Certificate = true CredSSP = true CbtHardeningLevel = Relaxed DefaultPorts HTTP = 5985 HTTPS = 5986 IPv4Filter = * IPv6Filter = * EnableCompatibilityHttpListener = false EnableCompatibilityHttpsListener = false CertificateThumbprint AllowRemoteAccess = true Winrs AllowRemoteShellAccess = true IdleTimeout = 7200000 MaxConcurrentUsers = 2147483647 MaxShellRunTime = 2147483647 MaxProcessesPerShell = 2147483647 MaxMemoryPerShellMB = 2147483647 MaxShellsPerUser = 2147483647 While many of these options should rarely be changed, a few can easily impact the operations over WinRM and are useful to understand. Some of the important options are: * ``Service\AllowUnencrypted``: This option defines whether WinRM will allow traffic that is run over HTTP without message encryption. Message level encryption is only possible when ``ansible_winrm_transport`` is ``ntlm``, ``kerberos`` or ``credssp``. By default this is ``false`` and should only be set to ``true`` when debugging WinRM messages. * ``Service\Auth\*``: These flags define what authentication options are allowed with the WinRM service. By default, ``Negotiate (NTLM)`` and ``Kerberos`` are enabled. * ``Service\Auth\CbtHardeningLevel``: Specifies whether channel binding tokens are not verified (None), verified but not required (Relaxed), or verified and required (Strict). CBT is only used when connecting with NTLM or Kerberos over HTTPS. * ``Service\CertificateThumbprint``: This is the thumbprint of the certificate used to encrypt the TLS channel used with CredSSP authentication. By default this is empty; a self-signed certificate is generated when the WinRM service starts and is used in the TLS process. * ``Winrs\MaxShellRunTime``: This is the maximum time, in milliseconds, that a remote command is allowed to execute. * ``Winrs\MaxMemoryPerShellMB``: This is the maximum amount of memory allocated per shell, including the shell's child processes. 
To modify a setting under the ``Service`` key in PowerShell:: # substitute {path} with the path to the option after winrm/config/Service Set-Item -Path WSMan:\localhost\Service\{path} -Value "value here" # for example, to change Service\Auth\CbtHardeningLevel run Set-Item -Path WSMan:\localhost\Service\Auth\CbtHardeningLevel -Value Strict To modify a setting under the ``Winrs`` key in PowerShell:: # Substitute {path} with the path to the option after winrm/config/Winrs Set-Item -Path WSMan:\localhost\Shell\{path} -Value "value here" # For example, to change Winrs\MaxShellRunTime run Set-Item -Path WSMan:\localhost\Shell\MaxShellRunTime -Value 2147483647 .. Note:: If running in a domain environment, some of these options are set by GPO and cannot be changed on the host itself. When a key has been configured with GPO, it contains the text ``[Source="GPO"]`` next to the value. Common WinRM Issues ------------------- Because WinRM has a wide range of configuration options, it can be difficult to set up and configure. Because of this complexity, issues that are shown by Ansible could in fact be issues with the host setup instead. One easy way to determine whether a problem is a host issue is to run the following command from another Windows host to connect to the target Windows host:: # Test out HTTP winrs -r:http://server:5985/wsman -u:Username -p:Password ipconfig # Test out HTTPS (will fail if the cert is not verifiable) winrs -r:https://server:5986/wsman -u:Username -p:Password -ssl ipconfig # Test out HTTPS, ignoring certificate verification $username = "Username" $password = ConvertTo-SecureString -String "Password" -AsPlainText -Force $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password $session_option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck Invoke-Command -ComputerName server -UseSSL -ScriptBlock { ipconfig } -Credential $cred -SessionOption $session_option If this fails, the issue is probably related to the WinRM setup. If it works, the issue may not be related to the WinRM setup; please continue reading for more troubleshooting suggestions. HTTP 401/Credentials Rejected +++++++++++++++++++++++++++++ An HTTP 401 error indicates the authentication process failed during the initial connection. Some things to check for this are: * Verify that the credentials are correct and set properly in your inventory with ``ansible_user`` and ``ansible_password`` * Ensure that the user is a member of the local Administrators group or has been explicitly granted access (a connection test with the ``winrs`` command can be used to rule this out). * Make sure that the authentication option set by ``ansible_winrm_transport`` is enabled under ``Service\Auth\*`` * If running over HTTP and not HTTPS, use ``ntlm``, ``kerberos`` or ``credssp`` with ``ansible_winrm_message_encryption: auto`` to enable message encryption. If using another authentication option or if the installed pywinrm version cannot be upgraded, the ``Service\AllowUnencrypted`` option can be set to ``true``, but this is only recommended for troubleshooting * Ensure the downstream packages ``pywinrm``, ``requests-ntlm``, ``requests-kerberos``, and/or ``requests-credssp`` are up to date using ``pip``. * If using Kerberos authentication, ensure that ``Service\Auth\CbtHardeningLevel`` is not set to ``Strict``. * When using Basic or Certificate authentication, make sure that the user is a local account and not a domain account.
Domain accounts do not work with Basic and Certificate authentication. HTTP 500 Error ++++++++++++++ These indicate an error has occurred with the WinRM service. Some things to check for include: * Verify that the number of open shells has not exceeded ``WinRsMaxShellsPerUser`` and that none of the other Winrs quotas have been exceeded. Timeout Errors +++++++++++++++ These usually indicate an error with the network connection where Ansible is unable to reach the host. Some things to check for include: * Make sure the firewall is not set to block the configured WinRM listener ports * Ensure that a WinRM listener is enabled on the port and path set by the host vars * Ensure that the ``winrm`` service is running on the Windows host and configured for automatic start Connection Refused Errors +++++++++++++++++++++++++ These usually indicate an error when trying to communicate with the WinRM service on the host. Some things to check for: * Ensure that the WinRM service is up and running on the host. Use ``(Get-Service -Name winrm).Status`` to get the status of the service. * Check that the host firewall is allowing traffic over the WinRM port. By default this is ``5985`` for HTTP and ``5986`` for HTTPS. Sometimes an installer may restart the WinRM or HTTP service and cause this error. The best way to deal with this is to use ``win_psexec`` from another Windows host. Failure to Load Builtin Modules +++++++++++++++++++++++++++++++ If PowerShell fails with an error message similar to ``The 'Out-String' command was found in the module 'Microsoft.PowerShell.Utility', but the module could not be loaded.`` then there could be a problem trying to access all the paths specified by the ``PSModulePath`` environment variable. A common cause of this issue is that the ``PSModulePath`` environment variable contains a UNC path to a file share and because of the double hop/credential delegation issue the Ansible process cannot access these folders. The way around this problem is to either: * Remove the UNC path from the ``PSModulePath`` environment variable, or * Use an authentication option that supports credential delegation like ``credssp`` or ``kerberos`` with credential delegation enabled See `KB4076842 <https://support.microsoft.com/en-us/help/4076842>`_ for more information on this problem. Windows SSH Setup ````````````````` Ansible 2.8 has added an experimental SSH connection for Windows managed nodes. .. warning:: Use this feature at your own risk! Using SSH with Windows is experimental; the implementation may make backwards-incompatible changes in feature releases. The server side components can be unreliable depending on the version that is installed. Installing Win32-OpenSSH ------------------------ The first step to using SSH with Windows is to install the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_ service on the Windows host. Microsoft offers a way to install ``Win32-OpenSSH`` through a Windows capability but currently the version that is installed through this process is too old to work with Ansible. To install ``Win32-OpenSSH`` for use with Ansible, select one of these installation options: * Manually install the service, following the `install instructions <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_ from Microsoft.
* Install the `openssh <https://chocolatey.org/packages/openssh>`_ package using Chocolatey:: choco install --package-parameters=/SSHServerFeature openssh * Use ``win_chocolatey`` to install the service:: - name: install the Win32-OpenSSH service win_chocolatey: name: openssh package_params: /SSHServerFeature state: present * Use an existing Ansible Galaxy role like `jborean93.win_openssh <https://galaxy.ansible.com/jborean93/win_openssh>`_:: # Make sure the role has been downloaded first ansible-galaxy install jborean93.win_openssh # main.yml - name: install Win32-OpenSSH service hosts: windows gather_facts: no roles: - role: jborean93.win_openssh opt_openssh_setup_service: True .. note:: ``Win32-OpenSSH`` is still a beta product and is constantly being updated to include new features and bugfixes. If you are using SSH as a connection option for Windows, it is highly recommended that you install the latest release using one of the methods above. Configuring the Win32-OpenSSH shell ----------------------------------- By default ``Win32-OpenSSH`` will use ``cmd.exe`` as a shell. To configure a different shell, use an Ansible task to define the registry setting:: - name: set the default shell to PowerShell win_regedit: path: HKLM:\SOFTWARE\OpenSSH name: DefaultShell data: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe type: string state: present # Or revert the settings back to the default, cmd - name: set the default shell to cmd win_regedit: path: HKLM:\SOFTWARE\OpenSSH name: DefaultShell state: absent Win32-OpenSSH Authentication ---------------------------- Win32-OpenSSH authentication with Windows is similar to SSH authentication on Unix/Linux hosts. You can use a plaintext password or SSH public key authentication, add public keys to an ``authorized_keys`` file in the ``.ssh`` folder of the user's profile directory, and configure the service using the ``sshd_config`` file used by the SSH service as you would on a Unix/Linux host. When using SSH key authentication with Ansible, the remote session won't have access to the user's credentials and will fail when attempting to access a network resource. This is also known as the double-hop or credential delegation issue. There are two ways to work around this issue: * Use plaintext password auth by setting ``ansible_password`` * Use ``become`` on the task with the credentials of the user that needs access to the remote resource Configuring Ansible for SSH on Windows -------------------------------------- To configure Ansible to use SSH for Windows hosts, you must set two connection variables: * set ``ansible_connection`` to ``ssh`` * set ``ansible_shell_type`` to ``cmd`` or ``powershell`` The ``ansible_shell_type`` variable should reflect the ``DefaultShell`` configured on the Windows host. Set to ``cmd`` for the default shell or set to ``powershell`` if the ``DefaultShell`` has been changed to PowerShell. Known issues with SSH on Windows -------------------------------- Using SSH with Windows is experimental, and we expect to uncover more issues. Here are the known ones: * Win32-OpenSSH versions older than ``v7.9.0.0p1-Beta`` do not work when ``powershell`` is the shell type * While SCP should work, SFTP is the recommended SSH file transfer mechanism to use when copying or fetching a file ..
seealso:: :ref:`about_playbooks` An introduction to playbooks :ref:`playbooks_best_practices` Tips and tricks for playbooks :ref:`List of Windows Modules <windows_modules>` Windows specific module list, all implemented in PowerShell `User Mailing List <https://groups.google.com/group/ansible-project>`_ Have a question? Stop by the google group! `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
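As an additional control-node cross-check of the WinRM setup described above, the connection can be exercised directly with the ``pywinrm`` library (the library Ansible's ``winrm`` connection plugin builds on). This is a hedged sketch; the host name and credentials are placeholders:

.. code-block:: python

    import winrm

    # Mirrors the "winrs ... -ssl" and -SkipCACheck style tests above, but run
    # from the Ansible control node rather than from another Windows host.
    session = winrm.Session(
        'https://server:5986/wsman',
        auth=('Username', 'Password'),
        transport='ntlm',
        server_cert_validation='ignore',  # tolerate a self-signed certificate
    )
    result = session.run_cmd('ipconfig')
    print(result.status_code)
    print(result.std_out.decode())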
closed
ansible/ansible
https://github.com/ansible/ansible
74,683
Vault variables undefined when using delegate_to (reopen)
### Summary This is a reopening of the issue https://github.com/ansible/ansible/issues/22737 which was said to be resolved by https://github.com/ansible/ansible/pull/70331. I reviewed the code and it appears that an attempt was made to resolve it but an error is being raised just before the new logic to use the delegated vars in the templar. When I ignore that error (by hacking task_executor.py), everything works as expected. The task_executor.py:_execute method is setting context_validation_error when working with the play_context near the beginning of the method. If I simply change the line: ``` # if we ran into an error while setting up the PlayContext, raise it now if context_validation_error is not None: raise context_validation_error # pylint: disable=raising-bad-type ``` to ``` # if we ran into an error while setting up the PlayContext, raise it now if context_validation_error is not None and not self._task.delegate_to: raise context_validation_error # pylint: disable=raising-bad-type ``` Then the templating with the delegated vars works as expected (I can have ansible_password: "{{ vault_ansible_password }}"). I'm not sure if this is the final solution you want, but it looks like an easy fix. Note that the same fix does not work in Ansible 2.9. Is there any plan to backport 70331 to 2.9 (along with this fix?). Thanks ### Issue Type Bug Report ### Component Name task_executor.py ### Ansible Version ```console $ ansible --version ansible [core 2.11.0] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible ansible collection location = /home/REDACTED/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.8.3 (default, Aug 18 2020, 08:56:04) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <empty> ``` ### OS / Environment RHEL 8.2 ### Steps to Reproduce See 70331 ### Expected Results Ability to reference other variables inside connection variables. ### Actual Results ```console Undefined error. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
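A self-contained toy of the guard being proposed (the `Task` class and the simulated templating failure below are stand-ins, not Ansible internals): the validation error is deferred, then re-raised only when the task is not delegated, leaving delegated tasks free to template their connection variables later.

```python
class Task(object):
    def __init__(self, delegate_to=None):
        self.delegate_to = delegate_to


def execute(task):
    context_validation_error = None
    try:
        # Simulate PlayContext setup failing on an undefined vaulted variable.
        raise KeyError("'vault_ansible_password' is undefined")
    except KeyError as e:
        context_validation_error = e

    # The proposed change: only re-raise for non-delegated tasks, so the
    # delegated vars can still be templated further down in _execute().
    if context_validation_error is not None and not task.delegate_to:
        raise context_validation_error
    return 'continuing; delegated vars will be templated later'


print(execute(Task(delegate_to='bastion')))      # proceeds
try:
    execute(Task())                              # re-raises the deferred error
except KeyError as e:
    print('non-delegated task failed:', e)
```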
https://github.com/ansible/ansible/issues/74683
https://github.com/ansible/ansible/pull/74685
015331518dff60f31a7d8ce24fc315e3ac9e86f8
b518aabf81213dd4d8b5b46a1a0657b5d8408238
2021-05-13T04:43:05Z
python
2021-05-24T14:13:19Z
lib/ansible/executor/task_executor.py
# (c) 2012-2014, Michael DeHaan <[email protected]> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import re import pty import time import json import signal import subprocess import sys import termios import traceback from ansible import constants as C from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip from ansible.executor.task_result import TaskResult from ansible.executor.module_common import get_action_args_with_defaults from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils.six import iteritems, string_types, binary_type from ansible.module_utils.six.moves import xrange from ansible.module_utils._text import to_text, to_native from ansible.module_utils.connection import write_to_file_descriptor from ansible.playbook.conditional import Conditional from ansible.playbook.task import Task from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader from ansible.template import Templar from ansible.utils.collection_loader import AnsibleCollectionConfig from ansible.utils.listify import listify_lookup_plugin_terms from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var from ansible.vars.clean import namespace_facts, clean_facts from ansible.utils.display import Display from ansible.utils.vars import combine_vars, isidentifier display = Display() RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x] __all__ = ['TaskExecutor'] class TaskTimeoutError(BaseException): pass def task_timeout(signum, frame): raise TaskTimeoutError def remove_omit(task_args, omit_token): ''' Remove args with a value equal to the ``omit_token`` recursively to align with now having suboptions in the argument_spec ''' if not isinstance(task_args, dict): return task_args new_args = {} for i in iteritems(task_args): if i[1] == omit_token: continue elif isinstance(i[1], dict): new_args[i[0]] = remove_omit(i[1], omit_token) elif isinstance(i[1], list): new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]] else: new_args[i[0]] = i[1] return new_args class TaskExecutor: ''' This is the main worker class for the executor pipeline, which handles loading an action plugin to actually dispatch the task to a given host. This class roughly corresponds to the old Runner() class. ''' def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q): self._host = host self._task = task self._job_vars = job_vars self._play_context = play_context self._new_stdin = new_stdin self._loader = loader self._shared_loader_obj = shared_loader_obj self._connection = None self._final_q = final_q self._loop_eval_error = None self._task.squash() def run(self): ''' The main executor entrypoint, where we determine if the specified task requires looping and either runs the task with self._run_loop() or self._execute(). After that, the returned results are parsed and returned as a dict. 
''' display.debug("in run() - task %s" % self._task._uuid) try: try: items = self._get_loop_items() except AnsibleUndefinedVariable as e: # save the error raised here for use later items = None self._loop_eval_error = e if items is not None: if len(items) > 0: item_results = self._run_loop(items) # create the overall result item res = dict(results=item_results) # loop through the item results and set the global changed/failed/skipped result flags based on any item. res['skipped'] = True for item in item_results: if 'changed' in item and item['changed'] and not res.get('changed'): res['changed'] = True if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])): res['skipped'] = False if 'failed' in item and item['failed']: item_ignore = item.pop('_ansible_ignore_errors') if not res.get('failed'): res['failed'] = True res['msg'] = 'One or more items failed' self._task.ignore_errors = item_ignore elif self._task.ignore_errors and not item_ignore: self._task.ignore_errors = item_ignore # ensure to accumulate these for array in ['warnings', 'deprecations']: if array in item and item[array]: if array not in res: res[array] = [] if not isinstance(item[array], list): item[array] = [item[array]] res[array] = res[array] + item[array] del item[array] if not res.get('failed', False): res['msg'] = 'All items completed' if res['skipped']: res['msg'] = 'All items skipped' else: res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[]) else: display.debug("calling self._execute()") res = self._execute() display.debug("_execute() done") # make sure changed is set in the result, if it's not present if 'changed' not in res: res['changed'] = False def _clean_res(res, errors='surrogate_or_strict'): if isinstance(res, binary_type): return to_unsafe_text(res, errors=errors) elif isinstance(res, dict): for k in res: try: res[k] = _clean_res(res[k], errors=errors) except UnicodeError: if k == 'diff': # If this is a diff, substitute a replacement character if the value # is undecodable as utf8. (Fix #21804) display.warning("We were unable to decode all characters in the module return data." " Replaced some in an effort to return as much as possible") res[k] = _clean_res(res[k], errors='surrogate_then_replace') else: raise elif isinstance(res, list): for idx, item in enumerate(res): res[idx] = _clean_res(item, errors=errors) return res display.debug("dumping result to json") res = _clean_res(res) display.debug("done dumping result, returning") return res except AnsibleError as e: return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log) except Exception as e: return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()), stdout='', _ansible_no_log=self._play_context.no_log) finally: try: self._connection.close() except AttributeError: pass except Exception as e: display.debug(u"error closing connection: %s" % to_text(e)) def _get_loop_items(self): ''' Loads a lookup plugin to handle the with_* portion of a task (if specified), and returns the items result. 
''' # get search path for this task to pass to lookup plugins self._job_vars['ansible_search_path'] = self._task.get_search_path() # ensure basedir is always in (dwim already searches here but we need to display it) if self._loader.get_basedir() not in self._job_vars['ansible_search_path']: self._job_vars['ansible_search_path'].append(self._loader.get_basedir()) templar = Templar(loader=self._loader, variables=self._job_vars) items = None loop_cache = self._job_vars.get('_ansible_loop_cache') if loop_cache is not None: # _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to` # to avoid reprocessing the loop items = loop_cache elif self._task.loop_with: if self._task.loop_with in self._shared_loader_obj.lookup_loader: fail = True if self._task.loop_with == 'first_found': # first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing. fail = False loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail, convert_bare=False) if not fail: loop_terms = [t for t in loop_terms if not templar.is_template(t)] # get lookup mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar) # give lookup task 'context' for subdir (mostly needed for first_found) for subdir in ['template', 'var', 'file']: # TODO: move this to constants? if subdir in self._task.action: break setattr(mylookup, '_subdir', subdir + 's') # run lookup items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True)) else: raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with) elif self._task.loop is not None: items = templar.template(self._task.loop) if not isinstance(items, list): raise AnsibleError( "Invalid data passed to 'loop', it requires a list, got this instead: %s." " Hint: If you passed a list/dict of just one element," " try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items ) return items def _run_loop(self, items): ''' Runs the task with the loop items specified and collates the result into an array named 'results' which is inserted into the final result along with the item for which the loop ran. ''' results = [] # make copies of the job vars and task so we can add the item to # the variables and re-validate the task with the item variable # task_vars = self._job_vars.copy() task_vars = self._job_vars loop_var = 'item' index_var = None label = None loop_pause = 0 extended = False templar = Templar(loader=self._loader, variables=self._job_vars) # FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate) if self._task.loop_control: loop_var = templar.template(self._task.loop_control.loop_var) index_var = templar.template(self._task.loop_control.index_var) loop_pause = templar.template(self._task.loop_control.pause) extended = templar.template(self._task.loop_control.extended) # This may be 'None',so it is templated below after we ensure a value and an item is assigned label = self._task.loop_control.label # ensure we always have a label if label is None: label = '{{' + loop_var + '}}' if loop_var in task_vars: display.warning(u"The loop variable '%s' is already in use. " u"You should set the `loop_var` value in the `loop_control` option for the task" u" to something else to avoid variable collisions and unexpected behavior." 
% loop_var) ran_once = False no_log = False items_len = len(items) for item_index, item in enumerate(items): task_vars['ansible_loop_var'] = loop_var task_vars[loop_var] = item if index_var: task_vars['ansible_index_var'] = index_var task_vars[index_var] = item_index if extended: task_vars['ansible_loop'] = { 'allitems': items, 'index': item_index + 1, 'index0': item_index, 'first': item_index == 0, 'last': item_index + 1 == items_len, 'length': items_len, 'revindex': items_len - item_index, 'revindex0': items_len - item_index - 1, } try: task_vars['ansible_loop']['nextitem'] = items[item_index + 1] except IndexError: pass if item_index - 1 >= 0: task_vars['ansible_loop']['previtem'] = items[item_index - 1] # Update template vars to reflect current loop iteration templar.available_variables = task_vars # pause between loop iterations if loop_pause and ran_once: try: time.sleep(float(loop_pause)) except ValueError as e: raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e))) else: ran_once = True try: tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True) tmp_task._parent = self._task._parent tmp_play_context = self._play_context.copy() except AnsibleParserError as e: results.append(dict(failed=True, msg=to_text(e))) continue # now we swap the internal task and play context with their copies, # execute, and swap them back so we can do the next iteration cleanly (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) res = self._execute(variables=task_vars) task_fields = self._task.dump_attrs() (self._task, tmp_task) = (tmp_task, self._task) (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context) # update 'general no_log' based on specific no_log no_log = no_log or tmp_task.no_log # now update the result with the item info, and append the result # to the list of results res[loop_var] = item res['ansible_loop_var'] = loop_var if index_var: res[index_var] = item_index res['ansible_index_var'] = index_var if extended: res['ansible_loop'] = task_vars['ansible_loop'] res['_ansible_item_result'] = True res['_ansible_ignore_errors'] = task_fields.get('ignore_errors') # gets templated here unlike rest of loop_control fields, depends on loop_var above try: res['_ansible_item_label'] = templar.template(label, cache=False) except AnsibleUndefinedVariable as e: res.update({ 'failed': True, 'msg': 'Failed to template loop_control.label: %s' % to_text(e) }) tr = TaskResult( self._host.name, self._task._uuid, res, task_fields=task_fields, ) if tr.is_failed() or tr.is_unreachable(): self._final_q.send_callback('v2_runner_item_on_failed', tr) elif tr.is_skipped(): self._final_q.send_callback('v2_runner_item_on_skipped', tr) else: if getattr(self._task, 'diff', False): self._final_q.send_callback('v2_on_file_diff', tr) self._final_q.send_callback('v2_runner_item_on_ok', tr) results.append(res) del task_vars[loop_var] # clear 'connection related' plugin variables for next iteration if self._connection: clear_plugins = { 'connection': self._connection._load_name, 'shell': self._connection._shell._load_name } if self._connection.become: clear_plugins['become'] = self._connection.become._load_name for plugin_type, plugin_name in iteritems(clear_plugins): for var in C.config.get_plugin_vars(plugin_type, plugin_name): if var in task_vars and var not in self._job_vars: del task_vars[var] self._task.no_log = no_log return results def _execute(self, variables=None): ''' 
The primary workhorse of the executor system, this runs the task on the specified host (which may be the delegated_to host) and handles the retry/until and block rescue/always execution ''' if variables is None: variables = self._job_vars templar = Templar(loader=self._loader, variables=variables) context_validation_error = None try: # TODO: remove play_context as this does not take delegation into account, task itself should hold values # for connection/shell/become/terminal plugin options to finalize. # Kept for now for backwards compatibility and a few functions that are still exclusive to it. # apply the given task's information to the connection info, # which may override some fields already set by the play or # the options specified on the command line self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar) # fields set from the play/task may be based on variables, so we have to # do the same kind of post validation step on it here before we use it. self._play_context.post_validate(templar=templar) # now that the play context is finalized, if the remote_addr is not set # default to using the host's address field as the remote address if not self._play_context.remote_addr: self._play_context.remote_addr = self._host.address # We also add "magic" variables back into the variables dict to make sure # a certain subset of variables exist. self._play_context.update_vars(variables) except AnsibleError as e: # save the error, which we'll raise later if we don't end up # skipping this task during the conditional evaluation step context_validation_error = e # Evaluate the conditional (if any) for this task, which we do before running # the final task post-validation. We do this before the post validation due to # the fact that the conditional may specify that the task be skipped due to a # variable not being present which would otherwise cause validation to fail try: if not self._task.evaluate_conditional(templar, variables): display.debug("when evaluation is False, skipping this task") return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log) except AnsibleError as e: # loop error takes precedence if self._loop_eval_error is not None: # Display the error from the conditional as well to prevent # losing information useful for debugging. 
display.v(to_text(e)) raise self._loop_eval_error # pylint: disable=raising-bad-type raise # Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task if self._loop_eval_error is not None: raise self._loop_eval_error # pylint: disable=raising-bad-type # if we ran into an error while setting up the PlayContext, raise it now if context_validation_error is not None: raise context_validation_error # pylint: disable=raising-bad-type # if this task is a TaskInclude, we just return now with a success code so the # main thread can expand the task list for the given host if self._task.action in C._ACTION_ALL_INCLUDE_TASKS: include_args = self._task.args.copy() include_file = include_args.pop('_raw_params', None) if not include_file: return dict(failed=True, msg="No include file was specified to the include") include_file = templar.template(include_file) return dict(include=include_file, include_args=include_args) # if this task is an IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host elif self._task.action in C._ACTION_INCLUDE_ROLE: include_args = self._task.args.copy() return dict(include_args=include_args) # Now we do final validation on the task, which sets all fields to their final values. try: self._task.post_validate(templar=templar) except AnsibleError: raise except Exception: return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc())) if '_variable_params' in self._task.args: variable_params = self._task.args.pop('_variable_params') if isinstance(variable_params, dict): if C.INJECT_FACTS_AS_VARS: display.warning("Using a variable for a task's 'args' is unsafe in some situations " "(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)") variable_params.update(self._task.args) self._task.args = variable_params if self._task.delegate_to: # use vars from delegated host (which already include task vars) instead of original host cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {}) orig_vars = templar.available_variables else: # just use normal host vars cvars = orig_vars = variables templar.available_variables = cvars # get the connection and the handler for this execution if (not self._connection or not getattr(self._connection, 'connected', False) or self._play_context.remote_addr != self._connection._play_context.remote_addr): self._connection = self._get_connection(cvars, templar) else: # if connection is reused, its _play_context is no longer valid and needs # to be replaced with the one templated above, in case other data changed self._connection._play_context = self._play_context plugin_vars = self._set_connection_options(cvars, templar) templar.available_variables = orig_vars # TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules # special handling for python interpreter for network_os, default to ansible python unless overridden if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars: # this also avoids 'python discovery' cvars['ansible_python_interpreter'] = sys.executable # get handler self._handler = self._get_action_handler(connection=self._connection, templar=templar) # Apply default params for action/module, if present self._task.args = get_action_args_with_defaults( self._task.action, self._task.args, self._task.module_defaults, templar,
self._task._ansible_internal_redirect_list ) # And filter out any fields which were set to default(omit), and got the omit token value omit_token = variables.get('omit') if omit_token is not None: self._task.args = remove_omit(self._task.args, omit_token) # Read some values from the task, so that we can modify them if need be if self._task.until: retries = self._task.retries if retries is None: retries = 3 elif retries <= 0: retries = 1 else: retries += 1 else: retries = 1 delay = self._task.delay if delay < 0: delay = 1 # make a copy of the job vars here, in case we need to update them # with the registered variable value later on when testing conditions vars_copy = variables.copy() display.debug("starting attempt loop") result = None for attempt in xrange(1, retries + 1): display.debug("running the handler") try: if self._task.timeout: old_sig = signal.signal(signal.SIGALRM, task_timeout) signal.alarm(self._task.timeout) result = self._handler.run(task_vars=variables) except AnsibleActionSkip as e: return dict(skipped=True, msg=to_text(e)) except AnsibleActionFail as e: return dict(failed=True, msg=to_text(e)) except AnsibleConnectionFailure as e: return dict(unreachable=True, msg=to_text(e)) except TaskTimeoutError as e: msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout) return dict(failed=True, msg=msg) finally: if self._task.timeout: signal.alarm(0) old_sig = signal.signal(signal.SIGALRM, old_sig) self._handler.cleanup() display.debug("handler run complete") # preserve no log result["_ansible_no_log"] = self._play_context.no_log # update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution if self._task.register: if not isidentifier(self._task.register): raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register) vars_copy[self._task.register] = result = wrap_var(result) if self._task.async_val > 0: if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'): result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy) # ensure no log is preserved result["_ansible_no_log"] = self._play_context.no_log # helper methods for use below in evaluating changed/failed_when def _evaluate_changed_when_result(result): if self._task.changed_when is not None and self._task.changed_when: cond = Conditional(loader=self._loader) cond.when = self._task.changed_when result['changed'] = cond.evaluate_conditional(templar, vars_copy) def _evaluate_failed_when_result(result): if self._task.failed_when: cond = Conditional(loader=self._loader) cond.when = self._task.failed_when failed_when_result = cond.evaluate_conditional(templar, vars_copy) result['failed_when_result'] = result['failed'] = failed_when_result else: failed_when_result = False return failed_when_result if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: vars_copy.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: vars_copy.update(clean_facts(af)) # set the failed property if it was missing. 
if 'failed' not in result: # rc is here for backwards compatibility and modules that use it instead of 'failed' if 'rc' in result and result['rc'] not in [0, "0"]: result['failed'] = True else: result['failed'] = False # Make attempts and retries available early to allow their use in changed/failed_when if self._task.until: result['attempts'] = attempt # set the changed property if it was missing. if 'changed' not in result: result['changed'] = False # re-update the local copy of vars with the registered value, if specified, # or any facts which may have been generated by the module execution # This gives changed/failed_when access to additional recently modified # attributes of result if self._task.register: vars_copy[self._task.register] = result = wrap_var(result) # if we didn't skip this task, use the helpers to evaluate the changed/ # failed_when properties if 'skipped' not in result: _evaluate_changed_when_result(result) _evaluate_failed_when_result(result) if retries > 1: cond = Conditional(loader=self._loader) cond.when = self._task.until if cond.evaluate_conditional(templar, vars_copy): break else: # no conditional check, or it failed, so sleep for the specified time if attempt < retries: result['_ansible_retry'] = True result['retries'] = retries display.debug('Retrying task, attempt %d of %d' % (attempt, retries)) self._final_q.send_callback( 'v2_runner_retry', TaskResult( self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs() ) ) time.sleep(delay) self._handler = self._get_action_handler(connection=self._connection, templar=templar) else: if retries > 1: # we ran out of attempts, so mark the result as failed result['attempts'] = retries - 1 result['failed'] = True # do the final update of the local variables here, for both registered # values and any facts which may have been created if self._task.register: variables[self._task.register] = result = wrap_var(result) if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG: if self._task.action in C._ACTION_WITH_CLEAN_FACTS: variables.update(result['ansible_facts']) else: # TODO: cleaning of facts should eventually become part of taskresults instead of vars af = wrap_var(result['ansible_facts']) variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af)) if C.INJECT_FACTS_AS_VARS: variables.update(clean_facts(af)) # save the notification target in the result, if it was specified, as # this task may be running in a loop in which case the notification # may be item-specific, ie. 
"notify: service {{item}}" if self._task.notify is not None: result['_ansible_notify'] = self._task.notify # add the delegated vars to the result, so we can reference them # on the results side without having to do any further templating # also now add conneciton vars results when delegating if self._task.delegate_to: result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to} for k in plugin_vars: result["_ansible_delegated_vars"][k] = cvars.get(k) # note: here for callbacks that rely on this info to display delegation for requireshed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'): if requireshed not in result["_ansible_delegated_vars"] and requireshed in cvars: result["_ansible_delegated_vars"][requireshed] = cvars.get(requireshed) # and return display.debug("attempt loop complete, returning result") return result def _poll_async_result(self, result, templar, task_vars=None): ''' Polls for the specified JID to be complete ''' if task_vars is None: task_vars = self._job_vars async_jid = result.get('ansible_job_id') if async_jid is None: return dict(failed=True, msg="No job id was returned by the async task") # Create a new pseudo-task to run the async_status module, and run # that (with a sleep for "poll" seconds between each retry) until the # async time limit is exceeded. async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment)) # FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized # Because this is an async task, the action handler is async. However, # we need the 'normal' action handler for the status check, so get it # now via the action_loader async_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=async_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) time_left = self._task.async_val while time_left > 0: time.sleep(self._task.poll) try: async_result = async_handler.run(task_vars=task_vars) # We do not bail out of the loop in cases where the failure # is associated with a parsing error. The async_runner can # have issues which result in a half-written/unparseable result # file on disk, which manifests to the user as a timeout happening # before it's time to timeout. if (int(async_result.get('finished', 0)) == 1 or ('failed' in async_result and async_result.get('_ansible_parsed', False)) or 'skipped' in async_result): break except Exception as e: # Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal. # On an exception, call the connection's reset method if it has one # (eg, drop/recreate WinRM connection; some reused connections are in a broken state) display.vvvv("Exception during async poll, retrying... 
(%s)" % to_text(e)) display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc())) try: async_handler._connection.reset() except AttributeError: pass # Little hack to raise the exception if we've exhausted the timeout period time_left -= self._task.poll if time_left <= 0: raise else: time_left -= self._task.poll self._final_q.send_callback( 'v2_runner_on_async_poll', TaskResult( self._host.name, async_task, # We send the full task here, because the controller knows nothing about it, the TE created it async_result, task_fields=self._task.dump_attrs(), ), ) if int(async_result.get('finished', 0)) != 1: if async_result.get('_ansible_parsed'): return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val) else: return dict(failed=True, msg="async task produced unparseable results", async_result=async_result) else: # If the async task finished, automatically cleanup the temporary # status file left behind. cleanup_task = Task().load( { 'async_status': { 'jid': async_jid, 'mode': 'cleanup', }, 'environment': self._task.environment, } ) cleanup_handler = self._shared_loader_obj.action_loader.get( 'ansible.legacy.async_status', task=cleanup_task, connection=self._connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, ) cleanup_handler.run(task_vars=task_vars) cleanup_handler.cleanup(force=True) async_handler.cleanup(force=True) return async_result def _get_become(self, name): become = become_loader.get(name) if not become: raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. " "Use `ansible-doc -t become -l` to list available plugins." % name) return become def _get_connection(self, cvars, templar): ''' Reads the connection property for the host, and returns the correct connection object from the list of connection plugins ''' # use magic var if it exists, if not, let task inheritance do it's thing. if cvars.get('ansible_connection') is not None: self._play_context.connection = templar.template(cvars['ansible_connection']) else: self._play_context.connection = self._task.connection # TODO: play context has logic to update the connection for 'smart' # (default value, will chose between ssh and paramiko) and 'persistent' # (really paramiko), eventually this should move to task object itself. connection_name = self._play_context.connection # load connection conn_type = connection_name connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context( conn_type, self._play_context, self._new_stdin, task_uuid=self._task._uuid, ansible_playbook_pid=to_text(os.getppid()) ) if not connection: raise AnsibleError("the connection plugin '%s' was not found" % conn_type) # load become plugin if needed if cvars.get('ansible_become') is not None: become = boolean(templar.template(cvars['ansible_become'])) else: become = self._task.become if become: if cvars.get('ansible_become_method'): become_plugin = self._get_become(templar.template(cvars['ansible_become_method'])) else: become_plugin = self._get_become(self._task.become_method) try: connection.set_become_plugin(become_plugin) except AttributeError: # Older connection plugin that does not support set_become_plugin pass if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False): raise AnsibleError( "The '%s' connection does not provide a TTY which is required for the selected " "become plugin: %s." 
% (conn_type, become_plugin.name) ) # Backwards compat for connection plugins that don't support become plugins # Just do this unconditionally for now, we could move it inside of the # AttributeError above later self._play_context.set_become_plugin(become_plugin.name) # Also backwards compat call for those still using play_context self._play_context.set_attributes_from_plugin(connection) if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)): self._play_context.timeout = connection.get_option('persistent_command_timeout') display.vvvv('attempting to start connection', host=self._play_context.remote_addr) display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr) options = self._get_persistent_connection_options(connection, cvars, templar) socket_path = start_connection(self._play_context, options, self._task._uuid) display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr) setattr(connection, '_socket_path', socket_path) return connection def _get_persistent_connection_options(self, connection, final_vars, templar): option_vars = C.config.get_plugin_vars('connection', connection._load_name) plugin = connection._sub_plugin if plugin.get('type'): option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name'])) options = {} for k in option_vars: if k in final_vars: options[k] = templar.template(final_vars[k]) return options def _set_plugin_options(self, plugin_type, variables, templar, task_keys): try: plugin = getattr(self._connection, '_%s' % plugin_type) except AttributeError: # Some plugins are assigned to private attrs, ``become`` is not plugin = getattr(self._connection, plugin_type) option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name) options = {} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # TODO move to task method? plugin.set_options(task_keys=task_keys, var_options=options) return option_vars def _set_connection_options(self, variables, templar): # keep list of variable names possibly consumed varnames = [] # grab list of usable vars for this plugin option_vars = C.config.get_plugin_vars('connection', self._connection._load_name) varnames.extend(option_vars) # create dict of 'templated vars' options = {'_extras': {}} for k in option_vars: if k in variables: options[k] = templar.template(variables[k]) # add extras if plugin supports them if getattr(self._connection, 'allow_extras', False): for k in variables: if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options: options['_extras'][k] = templar.template(variables[k]) task_keys = self._task.dump_attrs() # The task_keys 'timeout' attr is the task's timeout, not the connection timeout. # The connection timeout is threaded through the play_context for now. task_keys['timeout'] = self._play_context.timeout if self._play_context.password: # The connection password is threaded through the play_context for # now. This is something we ultimately want to avoid, but the first # step is to get connection plugins pulling the password through the # config system instead of directly accessing play_context. 
task_keys['password'] = self._play_context.password # set options with 'templated vars' specific to this plugin and dependent ones self._connection.set_options(task_keys=task_keys, var_options=options) varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys)) if self._connection.become is not None: if self._play_context.become_pass: # FIXME: eventually remove from task and play_context, here for backwards compat # keep out of play objects to avoid accidental disclosure, only become plugin should have # The become pass is already in the play_context if given on # the CLI (-K). Make the plugin aware of it in this case. task_keys['become_pass'] = self._play_context.become_pass varnames.extend(self._set_plugin_options('become', variables, templar, task_keys)) # FOR BACKWARDS COMPAT: for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'): try: setattr(self._play_context, option, self._connection.become.get_option(option)) except KeyError: pass # some plugins don't support all base flags self._play_context.prompt = self._connection.become.prompt return varnames def _get_action_handler(self, connection, templar): ''' Returns the correct action plugin to handle the requested task action ''' module_collection, separator, module_name = self._task.action.rpartition(".") module_prefix = module_name.split('_')[0] if module_collection: # For network modules, which look for one action plugin per platform, look for the # action plugin in the same collection as the module by prefixing the action plugin # with the same collection. network_action = "{0}.{1}".format(module_collection, module_prefix) else: network_action = module_prefix collections = self._task.collections # let action plugin override module, fall back to 'normal' action plugin otherwise if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections): handler_name = self._task.action elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))): handler_name = network_action display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name, action=self._task.action), host=self._play_context.remote_addr) else: # use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search handler_name = 'ansible.legacy.normal' collections = None # until then, we don't want the task's collection list to be consulted; use the builtin handler = self._shared_loader_obj.action_loader.get( handler_name, task=self._task, connection=connection, play_context=self._play_context, loader=self._loader, templar=templar, shared_loader_obj=self._shared_loader_obj, collection_list=collections ) if not handler: raise AnsibleError("the handler '%s' was not found" % handler_name) return handler def start_connection(play_context, variables, task_uuid): ''' Starts the persistent connection ''' candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])] candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep)) for dirname in candidate_paths: ansible_connection = os.path.join(dirname, 'ansible-connection') if os.path.isfile(ansible_connection): display.vvvv("Found ansible-connection at path {0}".format(ansible_connection)) break else: raise AnsibleError("Unable to find location of 'ansible-connection'.
" "Please set or check the value of ANSIBLE_CONNECTION_PATH") env = os.environ.copy() env.update({ # HACK; most of these paths may change during the controller's lifetime # (eg, due to late dynamic role includes, multi-playbook execution), without a way # to invalidate/update, ansible-connection won't always see the same plugins the controller # can. 'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(), 'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(), 'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)), 'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(), 'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(), 'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(), 'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(), }) python = sys.executable master, slave = pty.openpty() p = subprocess.Popen( [python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)], stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env ) os.close(slave) # We need to set the pty into noncanonical mode. This ensures that we # can receive lines longer than 4095 characters (plus newline) without # truncating. old = termios.tcgetattr(master) new = termios.tcgetattr(master) new[3] = new[3] & ~termios.ICANON try: termios.tcsetattr(master, termios.TCSANOW, new) write_to_file_descriptor(master, variables) write_to_file_descriptor(master, play_context.serialize()) (stdout, stderr) = p.communicate() finally: termios.tcsetattr(master, termios.TCSANOW, old) os.close(master) if p.returncode == 0: result = json.loads(to_text(stdout, errors='surrogate_then_replace')) else: try: result = json.loads(to_text(stderr, errors='surrogate_then_replace')) except getattr(json.decoder, 'JSONDecodeError', ValueError): # JSONDecodeError only available on Python 3.5+ result = {'error': to_text(stderr, errors='surrogate_then_replace')} if 'messages' in result: for level, message in result['messages']: if level == 'log': display.display(message, log_only=True) elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'): getattr(display, level)(message, host=play_context.remote_addr) else: if hasattr(display, level): getattr(display, level)(message) else: display.vvvv(message, host=play_context.remote_addr) if 'error' in result: if play_context.verbosity > 2: if result.get('exception'): msg = "The full traceback is:\n" + result['exception'] display.display(msg, color=C.COLOR_ERROR) raise AnsibleError(result['error']) return result['socket_path']
closed
ansible/ansible
https://github.com/ansible/ansible
74,683
Vault variables undefined when using delegate_to (reopen)
### Summary This is a reopening of the issue https://github.com/ansible/ansible/issues/22737 which was said to be resolved by https://github.com/ansible/ansible/pull/70331. I reviewed the code and it appears that an attempt was made to resolve it but an error is being raised just before the new logic to use the delegated vars in the templar. When I ignore that error (by hacking task_executor.py), everything works as expected. The task_executor.py:_execute method is setting context_validation_error when working with the play_context near the beginning of the method. If I simply change the line: ``` # if we ran into an error while setting up the PlayContext, raise it now if context_validation_error is not None: raise context_validation_error # pylint: disable=raising-bad-type ``` to ``` # if we ran into an error while setting up the PlayContext, raise it now if context_validation_error is not None and not self._task.delegate_to: raise context_validation_error # pylint: disable=raising-bad-type ``` Then the templating with the delegated vars works as expected (I can have ansible_password: "{{ vault_ansible_password }}"). I'm not sure if this is the final solution you want, but it looks like an easy fix. Note that the same fix does not work in Ansible 2.9. Is there any plan to backport 70331 to 2.9 (along with this fix?). Thanks ### Issue Type Bug Report ### Component Name task_executor.py ### Ansible Version ```console $ ansible --version ansible [core 2.11.0] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible ansible collection location = /home/REDACTED/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.8.3 (default, Aug 18 2020, 08:56:04) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <empty> ``` ### OS / Environment RHEL 8.2 ### Steps to Reproduce See 70331 ### Expected Results Ability to reference other variables inside connection variables. ### Actual Results ```console Undefined error. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
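A runnable distillation of the control flow proposed above; `AnsibleError`, `Task`, and `finalize_play_context` here are stripped-down, hypothetical stand-ins for the real objects, not the actual ansible API. The PlayContext error is captured, and the proposed guard re-raises it only when the task is not delegated:

```python
# Hypothetical sketch of the reporter's proposed guard.
class AnsibleError(Exception):
    pass


class Task:
    def __init__(self, delegate_to=None):
        self.delegate_to = delegate_to


def finalize_play_context():
    # stand-in for set_task_and_variable_override()/post_validate(),
    # which can fail on vars that only resolve for the delegated host
    raise AnsibleError("'vault_ansible_password' is undefined")


def execute(task):
    context_validation_error = None
    try:
        finalize_play_context()
    except AnsibleError as e:
        context_validation_error = e

    # ... conditional evaluation would happen here ...

    # proposed change: only fatal when the task is not delegated, since
    # the delegated host's vars may still satisfy the template later on
    if context_validation_error is not None and not task.delegate_to:
        raise context_validation_error
    return {'changed': False}


print(execute(Task(delegate_to='otherhost')))  # survives the deferred error
```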
https://github.com/ansible/ansible/issues/74683
https://github.com/ansible/ansible/pull/74685
015331518dff60f31a7d8ce24fc315e3ac9e86f8
b518aabf81213dd4d8b5b46a1a0657b5d8408238
2021-05-13T04:43:05Z
python
2021-05-24T14:13:19Z
test/integration/targets/delegate_to/delegate_with_fact_from_delegate_host.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74,683
Vault variables undefined when using delegate_to (reopen)
### Summary This is a reopening of the issue https://github.com/ansible/ansible/issues/22737 which was said to be resolved by https://github.com/ansible/ansible/pull/70331. I reviewed the code and it appears that an attempt was made to resolve it but an error is being raised just before the new logic to use the delegated vars in the templar. When I ignore that error (by hacking task_executor.py), everything works as expected. The task_executor.py:_execute method is setting context_validation_error when working with the play_context near the beginning of the method. If I simply change the line: ``` # if we ran into an error while setting up the PlayContext, raise it now if context_validation_error is not None: raise context_validation_error # pylint: disable=raising-bad-type ``` to ``` # if we ran into an error while setting up the PlayContext, raise it now if context_validation_error is not None and not self._task.delegate_to: raise context_validation_error # pylint: disable=raising-bad-type ``` Then the templating with the delegated vars works as expected (I can have ansible_password: "{{ vault_ansible_password }}"). I'm not sure if this is the final solution you want, but it looks like an easy fix. Note that the same fix does not work in Ansible 2.9. Is there any plan to backport 70331 to 2.9 (along with this fix?). Thanks ### Issue Type Bug Report ### Component Name task_executor.py ### Ansible Version ```console $ ansible --version ansible [core 2.11.0] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.8/site-packages/ansible ansible collection location = /home/REDACTED/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible python version = 3.8.3 (default, Aug 18 2020, 08:56:04) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] jinja version = 2.11.3 libyaml = True ``` ### Configuration ```console $ ansible-config dump --only-changed <empty> ``` ### OS / Environment RHEL 8.2 ### Steps to Reproduce See 70331 ### Expected Results Ability to reference other variables inside connection variables. ### Actual Results ```console Undefined error. ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74683
https://github.com/ansible/ansible/pull/74685
015331518dff60f31a7d8ce24fc315e3ac9e86f8
b518aabf81213dd4d8b5b46a1a0657b5d8408238
2021-05-13T04:43:05Z
python
2021-05-24T14:13:19Z
test/integration/targets/delegate_to/runme.sh
#!/usr/bin/env bash

set -eux

platform="$(uname)"

function setup() {
    if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
        ifconfig lo0

        existing=$(ifconfig lo0 | grep '^[[:blank:]]inet 127\.0\.0\. ' || true)

        echo "${existing}"

        for i in 3 4 254; do
            ip="127.0.0.${i}"

            if [[ "${existing}" != *"${ip}"* ]]; then
                ifconfig lo0 alias "${ip}" up
            fi
        done

        ifconfig lo0
    fi
}

function teardown() {
    if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
        for i in 3 4 254; do
            ip="127.0.0.${i}"

            if [[ "${existing}" != *"${ip}"* ]]; then
                ifconfig lo0 -alias "${ip}"
            fi
        done

        ifconfig lo0
    fi
}

setup

trap teardown EXIT

ANSIBLE_SSH_ARGS='-C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null' \
    ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook test_delegate_to.yml -i inventory -v "$@"

# this test is not doing what it says it does, also relies on var that should not be available
#ansible-playbook test_loop_control.yml -v "$@"

ansible-playbook test_delegate_to_loop_randomness.yml -v "$@"

ansible-playbook delegate_and_nolog.yml -i inventory -v "$@"

ansible-playbook delegate_facts_block.yml -i inventory -v "$@"

ansible-playbook test_delegate_to_loop_caching.yml -i inventory -v "$@"

# ensure we are using correct settings when delegating
ANSIBLE_TIMEOUT=3 ansible-playbook delegate_vars_hanldling.yml -i inventory -v "$@"

ansible-playbook has_hostvars.yml -i inventory -v "$@"

# test ansible_x_interpreter
# python
source virtualenv.sh
(
cd "${OUTPUT_DIR}"/venv/bin
ln -s python firstpython
ln -s python secondpython
)
ansible-playbook verify_interpreter.yml -i inventory_interpreters -v "$@"
ansible-playbook discovery_applied.yml -i inventory -v "$@"
ansible-playbook resolve_vars.yml -i inventory -v "$@"
ansible-playbook test_delegate_to_lookup_context.yml -i inventory -v "$@"
ansible-playbook delegate_local_from_root.yml -i inventory -v "$@" -e 'ansible_user=root'
closed
ansible/ansible
https://github.com/ansible/ansible
74,571
Support for choosing bcrypt version/ident with password_hash filter
### Summary While setting up sonarqube and automating setting the admin password directly in the database, I noticed that their bcrypt implementation is outdated and only supports the 2a version, while the password_hash bcrypt filter using passlib defaults to the latest version (2b). I'd like to set the ident parameter of passlib bcrypt function from the password_hash filter (like with rounds). Right now I'm doing this: ```yaml - name: encrypt password command: cmd: python3 - stdin: | from passlib.hash import bcrypt print(bcrypt.using(rounds=12,ident="2a").hash("{{ admin_password }}")) register: tmp ``` as a workaround. Obviously, sonarqube should upgrade their bcrypt version, but ansible should also be able to handle this. Passlib doc: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html# ### Issue Type Feature Idea ### Component Name password_hash ### Additional Information ```yaml password: "{{ admin_password | password_hash('bcrypt', rounds=12, ident='2a') }}" ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
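For reference, the same workaround driven through passlib in-process rather than via `command`/`stdin`; `rounds` and `ident` are exactly the knobs the feature request wants surfaced through `password_hash` (assumes the `passlib` and `bcrypt` packages are installed):

```python
# pip install passlib bcrypt
from passlib.hash import bcrypt

# pin the legacy '2a' ident that older consumers (eg sonarqube) expect
digest = bcrypt.using(rounds=12, ident='2a').hash('s3cret')
print(digest)                          # hash string starts with $2a$12$
assert bcrypt.verify('s3cret', digest)
```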
https://github.com/ansible/ansible/issues/74571
https://github.com/ansible/ansible/pull/74595
20ef733ee02ba688757998404c1926381356b031
1bd7dcf339dd8b6c50bc16670be2448a206f4fdb
2021-05-05T10:40:54Z
python
2021-05-24T15:46:37Z
changelogs/fragments/blowfish_ident.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74,571
Support for choosing bcrypt version/ident with password_hash filter
### Summary While setting up sonarqube and automating setting the admin password directly in the database, I noticed that their bcrypt implementation is outdated and only supports the 2a version, while the password_hash bcrypt filter using passlib defaults to the latest version (2b). I'd like to set the ident parameter of passlib bcrypt function from the password_hash filter (like with rounds). Right now I'm doing this: ```yaml - name: encrypt password command: cmd: python3 - stdin: | from passlib.hash import bcrypt print(bcrypt.using(rounds=12,ident="2a").hash("{{ admin_password }}")) register: tmp ``` as a workaround. Obviously, sonarqube should upgrade their bcrypt version, but ansible should also be able to handle this. Passlib doc: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html# ### Issue Type Feature Idea ### Component Name password_hash ### Additional Information ```yaml password: "{{ admin_password | password_hash('bcrypt', rounds=12, ident='2a') }}" ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74571
https://github.com/ansible/ansible/pull/74595
20ef733ee02ba688757998404c1926381356b031
1bd7dcf339dd8b6c50bc16670be2448a206f4fdb
2021-05-05T10:40:54Z
python
2021-05-24T15:46:37Z
docs/docsite/rst/user_guide/playbooks_filters.rst
.. _playbooks_filters: ******************************** Using filters to manipulate data ******************************** Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of :ref:`built-in filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to transform data. You can :ref:`create custom Ansible filters as plugins <developing_filter_plugins>`, though we generally welcome new filters into the ansible-core repo so everyone can use them. Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally. .. contents:: :local: Handling undefined variables ============================ Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter. .. _defaulting_undefined_variables: Providing default values ------------------------ You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined:: {{ some_variable | default(5) }} In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role. Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use a default with a value in a nested data structure (in other words, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined. If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``:: {{ lookup('env', 'MY_USER') | default('admin', true) }} .. _omitting_undefined_variables: Making variables optional ------------------------- By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``:: - name: Touch files with an optional mode ansible.builtin.file: dest: "{{ item.path }}" state: touch mode: "{{ item.mode | default(omit) }}" loop: - path: /tmp/foo - path: /tmp/bar - path: /tmp/baz mode: "0444" In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the `mode=0444` option. .. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this: ``"{{ foo | default(None) | some_filter or omit }}"``. 
In this example, the default ``None`` (Python null) value will cause the later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to the later filters you are chaining though, so be prepared for some trial and error if you do this. .. _forcing_variables_to_be_defined: Defining mandatory values ------------------------- If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with:: {{ variable | mandatory }} The variable value will be used as is, but the template evaluation will raise an error if it is undefined. Defining different values for true/false/null (ternary) ======================================================= You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9):: {{ (status == 'needs_restart') | ternary('restart', 'continue') }} In addition, you can define one value to use on true, one value on false and a third value on null (new in version 2.8):: {{ enabled | ternary('no shutdown', 'shutdown', omit) }} Managing data types =================== You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user :ref:`prompt <playbooks_prompts>` might return a string when your playbook needs a boolean value. Use the ``type_debug``, ``dict2items``, and ``items2dict`` filters to manage data types. You can also use the data type itself to cast a value as a specific data type. Discovering the data type ------------------------- .. versionadded:: 2.3 If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable:: {{ myvar | type_debug }} .. _dict_filter: Transforming dictionaries into lists ------------------------------------ .. versionadded:: 2.6 Use the ``dict2items`` filter to transform a dictionary into a list of items suitable for :ref:`looping <playbooks_loops>`:: {{ dict | dict2items }} Dictionary data (before applying the ``dict2items`` filter):: tags: Application: payment Environment: dev List data (after applying the ``dict2items`` filter):: - key: Application value: payment - key: Environment value: dev .. versionadded:: 2.8 The ``dict2items`` filter is the reverse of the ``items2dict`` filter. If you want to configure the names of the keys, the ``dict2items`` filter accepts 2 keyword arguments. Pass the ``key_name`` and ``value_name`` arguments to configure the names of the keys in the list output:: {{ files | dict2items(key_name='file', value_name='path') }} Dictionary data (before applying the ``dict2items`` filter):: files: users: /etc/passwd groups: /etc/group List data (after applying the ``dict2items`` filter):: - file: users path: /etc/passwd - file: groups path: /etc/group Transforming lists into dictionaries ------------------------------------ ..
versionadded:: 2.7 Use the ``items2dict`` filter to transform a list into a dictionary, mapping the content into ``key: value`` pairs:: {{ tags | items2dict }} List data (before applying the ``items2dict`` filter):: tags: - key: Application value: payment - key: Environment value: dev Dictionary data (after applying the ``items2dict`` filter):: Application: payment Environment: dev The ``items2dict`` filter is the reverse of the ``dict2items`` filter. Not all lists use ``key`` to designate keys and ``value`` to designate values. For example:: fruits: - fruit: apple color: red - fruit: pear color: yellow - fruit: grapefruit color: yellow In this example, you must pass the ``key_name`` and ``value_name`` arguments to configure the transformation. For example:: {{ tags | items2dict(key_name='fruit', value_name='color') }} If you do not pass these arguments, or do not pass the correct values for your list, you will see ``KeyError: key`` or ``KeyError: my_typo``. Forcing the data type --------------------- You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string:: - debug: msg: test when: some_string_value | bool If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string:: - shell: echo "only on Red Hat 6, derivatives, and later" when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6 .. versionadded:: 1.6 .. _filters_for_formatting_data: Formatting data: YAML and JSON ============================== You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging:: {{ some_variable | to_json }} {{ some_variable | to_yaml }} For human-readable output, you can use:: {{ some_variable | to_nice_json }} {{ some_variable | to_nice_yaml }} You can change the indentation of either format:: {{ some_variable | to_nice_json(indent=2) }} {{ some_variable | to_nice_yaml(indent=8) }} The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_, which has a default 80-character line-length limit. That causes an unexpected line break after the 80th character (if there is a space after the 80th character). To avoid this behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example:: {{ some_variable | to_yaml(indent=8, width=1337) }} {{ some_variable | to_nice_yaml(indent=8, width=1337) }} The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_.
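Because these filters delegate to PyYAML, the effect of ``width`` can be checked in plain Python before templating. A minimal sketch, assuming the ``PyYAML`` package and illustrative sample data::

    # pip install pyyaml
    import yaml

    data = {'motd': 'a very long banner string ' * 10}

    # the default width (~80 columns) wraps the long scalar...
    print(yaml.dump(data, indent=8))

    # ...while a large width keeps it on one line, matching the
    # to_nice_yaml(indent=8, width=1337) example above
    print(yaml.dump(data, indent=8, width=1337))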
If you are reading in some already formatted data:: {{ some_variable | from_json }} {{ some_variable | from_yaml }} for example:: tasks: - name: Register JSON output as a variable ansible.builtin.shell: cat /some/path/to/file.json register: result - name: Set a variable ansible.builtin.set_fact: myvar: "{{ result.stdout | from_json }}" Filter `to_json` and Unicode support ------------------------------------ By default `to_json` and `to_nice_json` will convert data received to ASCII, so:: {{ 'München'| to_json }} will return:: 'M\u00fcnchen' To keep Unicode characters, pass the parameter `ensure_ascii=False` to the filter:: {{ 'München'| to_json(ensure_ascii=False) }} 'München' .. versionadded:: 2.7 To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided. The ``from_yaml_all`` filter will return a generator of parsed YAML documents. for example:: tasks: - name: Register a file content as a variable ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml register: result - name: Print the transformed variable ansible.builtin.debug: msg: '{{ item }}' loop: '{{ result.stdout | from_yaml_all | list }}' Combining and selecting data ============================ You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data. .. _zip_filter: Combining items from multiple lists: zip and zip_longest -------------------------------------------------------- .. versionadded:: 2.3 To get a list combining the elements of other lists use ``zip``:: - name: Give me list combo of two lists ansible.builtin.debug: msg: "{{ [1,2,3,4,5,6] | zip(['a','b','c','d','e','f']) | list }}" # => [[1, "a"], [2, "b"], [3, "c"], [4, "d"], [5, "e"], [6, "f"]] - name: Give me shortest combo of two lists ansible.builtin.debug: msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}" # => [[1, "a"], [2, "b"], [3, "c"]] To always exhaust all lists use ``zip_longest``:: - name: Give me longest combo of three lists , fill with X ansible.builtin.debug: msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}" # => [[1, "a", 21], [2, "b", 22], [3, "c", 23], ["X", "d", "X"], ["X", "e", "X"], ["X", "f", "X"]] Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``:: {{ dict(keys_list | zip(values_list)) }} List data (before applying the ``zip`` filter):: keys_list: - one - two values_list: - apple - orange Dictionary data (after applying the ``zip`` filter):: one: apple two: orange Combining objects and subelements --------------------------------- .. versionadded:: 2.7 The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. 
For example, this expression:: {{ users | subelements('groups', skip_missing=True) }} Data before applying the ``subelements`` filter:: users: - name: alice authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub groups: - wheel - docker - name: bob authorized: - /tmp/bob/id_rsa.pub groups: - docker Data after applying the ``subelements`` filter:: - - name: alice groups: - wheel - docker authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub - wheel - - name: alice groups: - wheel - docker authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub - docker - - name: bob authorized: - /tmp/bob/id_rsa.pub groups: - docker - docker You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects:: - name: Set authorized ssh key, extracting just that data from 'users' ansible.posix.authorized_key: user: "{{ item.0.name }}" key: "{{ lookup('file', item.1) }}" loop: "{{ users | subelements('authorized') }}" .. _combine_filter: Combining hashes/dictionaries ----------------------------- .. versionadded:: 2.0 The ``combine`` filter allows hashes to be merged. For example, the following would override keys in one hash:: {{ {'a':1, 'b':2} | combine({'b':3}) }} The resulting hash would be:: {'a':1, 'b':3} The filter can also take multiple arguments to merge:: {{ a | combine(b, c, d) }} {{ [a, b, c, d] | combine }} In this case, keys in ``d`` would override those in ``c``, which would override those in ``b``, and so on. The filter also accepts two optional parameters: ``recursive`` and ``list_merge``. recursive A boolean, defaults to ``False``. Determines whether ``combine`` recursively merges nested hashes. Note: it does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``. list_merge A string, with possible values ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp`` or ``prepend_rp``. It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists. .. code-block:: yaml default: a: x: default y: default b: default c: default patch: a: y: patch z: patch b: patch If ``recursive=False`` (the default), nested hashes aren't merged:: {{ default | combine(patch) }} This would result in:: a: y: patch z: patch b: patch c: default If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys:: {{ default | combine(patch, recursive=True) }} This would result in:: a: x: default y: patch z: patch b: patch c: default If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash:: default: a: - default patch: a: - patch .. code-block:: jinja {{ default | combine(patch) }} This would result in:: a: - patch If ``list_merge='keep'``, arrays from the left hash will be kept:: {{ default | combine(patch, list_merge='keep') }} This would result in:: a: - default If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash:: {{ default | combine(patch, list_merge='append') }} This would result in:: a: - default - patch If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash:: {{ default | combine(patch, list_merge='prepend') }} This would result in:: a: - patch - default If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present").
Duplicate elements that aren't in both hashes are kept:: default: a: - 1 - 1 - 2 - 3 patch: a: - 3 - 4 - 5 - 5 .. code-block:: jinja {{ default | combine(patch, list_merge='append_rp') }} This would result in:: a: - 1 - 1 - 2 - 3 - 4 - 5 - 5 If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended:: {{ default | combine(patch, list_merge='prepend_rp') }} This would result in:: a: - 3 - 4 - 5 - 5 - 1 - 1 - 2 ``recursive`` and ``list_merge`` can be used together:: default: a: a': x: default_value y: default_value list: - default_value b: - 1 - 1 - 2 - 3 patch: a: a': y: patch_value z: patch_value list: - patch_value b: - 3 - 4 - 4 - key: value .. code-block:: jinja {{ default | combine(patch, recursive=True, list_merge='append_rp') }} This would result in:: a: a': x: default_value y: patch_value z: patch_value list: - default_value - patch_value b: - 1 - 1 - 2 - 3 - 4 - 4 - key: value .. _extract_filter: Selecting values from arrays or hashtables ------------------------------------------- .. versionadded:: 2.1 The ``extract`` filter is used to map from a list of indices to a list of values from a container (hash or array):: {{ [0,2] | map('extract', ['x','y','z']) | list }} {{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }} The results of the above expressions would be:: ['x', 'z'] [42, 31] The filter can take another argument:: {{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }} This takes the list of hosts in group 'x', looks them up in ``hostvars``, and then looks up the ``ec2_ip_address`` of the result. The final result is a list of IP addresses for the hosts in group 'x'. The third argument to the filter can also be a list, for a recursive lookup inside the container:: {{ ['a'] | map('extract', b, ['x','y']) | list }} This would return a list containing the value of ``b['a']['x']['y']``. Combining lists --------------- This set of filters returns a list of combined lists. permutations ^^^^^^^^^^^^ To get permutations of a list:: - name: Give me largest permutations (order matters) ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations | list }}" - name: Give me permutations of sets of three ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations(3) | list }}" combinations ^^^^^^^^^^^^ Combinations always require a set size:: - name: Give me combinations for sets of two ansible.builtin.debug: msg: "{{ [1,2,3,4,5] | ansible.builtin.combinations(2) | list }}" Also see the :ref:`zip_filter`. products ^^^^^^^^ The ``product`` filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables. This is roughly equivalent to nested for-loops in a generator expression. For example:: - name: Generate multiple hostnames ansible.builtin.debug: msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}" This would result in:: { "msg": "foo.com,bar.com" } .. _json_query_filter: Selecting JSON data: JSON queries --------------------------------- To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure. .. note:: This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
.. note:: You must manually install the **jmespath** dependency on the Ansible controller before using this filter. This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <http://jmespath.org/examples.html>`_. Consider this data structure:: { "domain_definition": { "domain": { "cluster": [ { "name": "cluster1" }, { "name": "cluster2" } ], "server": [ { "name": "server11", "cluster": "cluster1", "port": "8080" }, { "name": "server12", "cluster": "cluster1", "port": "8090" }, { "name": "server21", "cluster": "cluster2", "port": "9080" }, { "name": "server22", "cluster": "cluster2", "port": "9090" } ], "library": [ { "name": "lib1", "target": "cluster1" }, { "name": "lib2", "target": "cluster2" } ] } } } To extract all clusters from this structure, you can use the following query:: - name: Display all cluster names ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}" To extract all server names:: - name: Display all server names ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}" To extract ports from cluster1:: - name: Display all ports from cluster1 ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}" vars: server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port" .. note:: You can use a variable to make the query more readable. To print out the ports from cluster1 in a comma-separated string:: - name: Display all ports from cluster1 as a string ansible.builtin.debug: msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}" .. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability. You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_:: - name: Display all ports from cluster1 ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}" .. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote. To get a hash map with all ports and names of a cluster:: - name: Display all server ports and names from cluster1 ansible.builtin.debug: var: item loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}" vars: server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}" To extract ports from all servers with a name starting with 'server1':: - name: Display all ports from servers whose name starts with 'server1' ansible.builtin.debug: msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}" vars: server_name_query: "domain.server[?starts_with(name,'server1')].port" To extract ports from all servers with a name containing 'server1':: - name: Display all ports from servers whose name contains 'server1' ansible.builtin.debug: msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}" vars: server_name_query: "domain.server[?contains(name,'server1')].port" .. note:: When using ``starts_with`` and ``contains``, you have to use the ``to_json | from_json`` filter for correct parsing of the data structure. Randomizing data ================ When you need a randomly generated value, use one of these filters. .. _random_mac_filter: Random MAC addresses -------------------- ..
versionadded:: 2.6 This filter can be used to generate a random MAC address from a string prefix. .. note:: This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection. To get a random MAC address from a string prefix starting with '52:54:00':: "{{ '52:54:00' | community.general.random_mac }}" # => '52:54:00:ef:1c:03' Note that if anything is wrong with the prefix string, the filter will raise an error. .. versionadded:: 2.9 As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:: "{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}" .. _random_filter: Random items or numbers ----------------------- The ``random`` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range. To get a random item from a list:: "{{ ['a','b','c'] | random }}" # => 'c' To get a random number between 0 (inclusive) and a specified integer (exclusive):: "{{ 60 | random }} * * * * root /script/from/cron" # => '21 * * * * root /script/from/cron' To get a random number from 0 to 100 but in steps of 10:: {{ 101 | random(step=10) }} # => 70 To get a random number from 1 to 100 but in steps of 10:: {{ 101 | random(1, 10) }} # => 31 {{ 101 | random(start=1, step=10) }} # => 51 You can initialize the random number generator from a seed to create random-but-idempotent numbers:: "{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron" Shuffling a list ---------------- The ``shuffle`` filter randomizes an existing list, giving a different order every invocation. To get a random list from an existing list:: {{ ['a','b','c'] | shuffle }} # => ['c','a','b'] {{ ['a','b','c'] | shuffle }} # => ['b','c','a'] You can initialize the shuffle generator from a seed to generate a random-but-idempotent order:: {{ ['a','b','c'] | shuffle(seed=inventory_hostname) }} # => ['b','a','c'] The ``shuffle`` filter returns a list whenever possible. If you use it with a non-'listable' item, the filter does nothing. .. _list_filters: Managing list variables ======================= You can search for the minimum or maximum value in a list, or flatten a multi-level list. To get the minimum value from a list of numbers:: {{ list1 | min }} .. versionadded:: 2.11 To get the minimum value in a list of objects:: {{ [{'val': 1}, {'val': 2}] | min(attribute='val') }} To get the maximum value from a list of numbers:: {{ [3, 4, 2] | max }} .. versionadded:: 2.11 To get the maximum value in a list of objects:: {{ [{'val': 1}, {'val': 2}] | max(attribute='val') }} .. versionadded:: 2.5 Flatten a list (same thing the ``flatten`` lookup does):: {{ [3, [4, 2] ] | flatten }} # => [3, 4, 2] Flatten only the first level of a list (akin to the ``items`` lookup):: {{ [3, [4, [2]] ] | flatten(levels=1) }} # => [3, 4, [2]] .. versionadded:: 2.11 Preserve nulls in a list (by default, ``flatten`` removes them):: {{ [3, None, [4, [2]] ] | flatten(levels=1, skip_nulls=False) }} # => [3, None, 4, [2]] .. _set_theory_filters: Selecting from sets or lists (set theory) ========================================= You can select or combine items from sets or lists. ..
versionadded:: 1.4 To get a unique set from a list:: # list1: [1, 2, 5, 1, 3, 4, 10] {{ list1 | unique }} # => [1, 2, 5, 3, 4, 10] To get a union of two lists:: # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | union(list2) }} # => [1, 2, 5, 1, 3, 4, 10, 11, 99] To get the intersection of 2 lists (unique list of all items in both):: # list1: [1, 2, 5, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | intersect(list2) }} # => [1, 2, 5, 3, 4] To get the difference of 2 lists (items in 1 that don't exist in 2):: # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | difference(list2) }} # => [10] To get the symmetric difference of 2 lists (items exclusive to each list):: # list1: [1, 2, 5, 1, 3, 4, 10] # list2: [1, 2, 3, 4, 5, 11, 99] {{ list1 | symmetric_difference(list2) }} # => [10, 11, 99] .. _math_stuff: Calculating numbers (math) ========================== .. versionadded:: 1.9 You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like ``abs()`` and ``round()``. Get the logarithm (default is e):: {{ 8 | log }} # => 2.0794415416798357 Get the base 10 logarithm:: {{ 8 | log(10) }} # => 0.9030899869919435 Give me the power of 2! (or 5):: {{ 8 | pow(5) }} # => 32768.0 Square root, or the 5th:: {{ 8 | root }} # => 2.8284271247461903 {{ 8 | root(5) }} # => 1.5157165665103982 Managing network interactions ============================= These filters help you with common network tasks. .. note:: These filters have migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection. .. _ipaddr_filter: IP address filters ------------------ .. versionadded:: 1.9 To test if a string is a valid IP address:: {{ myvar | ansible.netcommon.ipaddr }} You can also require a specific IP protocol version:: {{ myvar | ansible.netcommon.ipv4 }} {{ myvar | ansible.netcommon.ipv6 }} The IP address filter can also be used to extract specific information from an IP address. For example, to get the IP address itself from a CIDR, you can use:: {{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }} # => 192.0.2.1 More information about the ``ipaddr`` filter and a complete usage guide can be found in :ref:`playbooks_filters_ipaddr`. .. _network_filters: Network CLI filters ------------------- .. versionadded:: 2.4 To convert the output of a network device CLI command into structured JSON output, use the ``parse_cli`` filter:: {{ output | ansible.netcommon.parse_cli('path/to/spec') }} The ``parse_cli`` filter will load the spec file and pass the command output through it, returning JSON output. The spec file is a YAML document that defines how to parse the CLI output and return JSON data. Below is an example of a valid spec file that will parse the output from the ``show vlan`` command. .. code-block:: yaml --- vars: vlan: vlan_id: "{{ item.vlan_id }}" name: "{{ item.name }}" enabled: "{{ item.state != 'act/lshut' }}" state: "{{ item.state }}" keys: vlans: value: "{{ vlan }}" items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)" state_static: value: present The spec file above will return a JSON data structure that is a list of hashes with the parsed VLAN information. The same command could be parsed into a hash by using the key and values directives.
Here is an example of how to parse the output into a hash value using the same ``show vlan`` command. .. code-block:: yaml --- vars: vlan: key: "{{ item.vlan_id }}" values: vlan_id: "{{ item.vlan_id }}" name: "{{ item.name }}" enabled: "{{ item.state != 'act/lshut' }}" state: "{{ item.state }}" keys: vlans: value: "{{ vlan }}" items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)" state_static: value: present Another common use case for parsing CLI commands is to break a large command output into blocks that can be parsed individually. This is done using the ``start_block`` and ``end_block`` directives. .. code-block:: yaml --- vars: interface: name: "{{ item[0].match[0] }}" state: "{{ item[1].state }}" mode: "{{ item[2].match[0] }}" keys: interfaces: value: "{{ interface }}" start_block: "^Ethernet.*$" end_block: "^$" items: - "^(?P<name>Ethernet\\d\\/\\d*)" - "admin state is (?P<state>.+)," - "Port mode is (.+)" The example above will parse the output of ``show interface`` into a list of hashes. The network filters also support parsing the output of a CLI command using the TextFSM library. To parse the CLI output with TextFSM, use the following filter:: {{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }} Use of the TextFSM filter requires the TextFSM library to be installed. Network XML filters ------------------- .. versionadded:: 2.5 To convert the XML output of a network device command into structured JSON output, use the ``parse_xml`` filter:: {{ output | ansible.netcommon.parse_xml('path/to/spec') }} The ``parse_xml`` filter will load the spec file and pass the command output through it, returning JSON output. The spec file should be valid formatted YAML. It defines how to parse the XML output and return JSON data. Below is an example of a valid spec file that will parse the output from the ``show vlan | display xml`` command. .. code-block:: yaml --- vars: vlan: vlan_id: "{{ item.vlan_id }}" name: "{{ item.name }}" desc: "{{ item.desc }}" enabled: "{{ item.state.get('inactive') != 'inactive' }}" state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}" keys: vlans: value: "{{ vlan }}" top: configuration/vlans/vlan items: vlan_id: vlan-id name: name desc: description state: ".[@inactive='inactive']" The spec file above will return a JSON data structure that is a list of hashes with the parsed VLAN information. The same command could be parsed into a hash by using the key and values directives. Here is an example of how to parse the output into a hash value using the same ``show vlan | display xml`` command. .. code-block:: yaml --- vars: vlan: key: "{{ item.vlan_id }}" values: vlan_id: "{{ item.vlan_id }}" name: "{{ item.name }}" desc: "{{ item.desc }}" enabled: "{{ item.state.get('inactive') != 'inactive' }}" state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}" keys: vlans: value: "{{ vlan }}" top: configuration/vlans/vlan items: vlan_id: vlan-id name: name desc: description state: ".[@inactive='inactive']" The value of ``top`` is the XPath relative to the XML root node. In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``, which is an XPath expression relative to the root node (<rpc-reply>). ``configuration`` in the value of ``top`` is the outermost container node, and ``vlan`` is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions that select elements. Each XPath expression is relative to the XPath value contained in ``top``. For example, ``vlan_id`` in the spec file is a user-defined name and its value ``vlan-id`` is relative to the value of XPath in ``top``. Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML:: <rpc-reply> <configuration> <vlans> <vlan inactive="inactive"> <name>vlan-1</name> <vlan-id>200</vlan-id> <description>This is vlan-1</description> </vlan> </vlans> </configuration> </rpc-reply> .. note:: For more information on supported XPath expressions, see `XPath Support <https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support>`_. Network VLAN filters -------------------- .. versionadded:: 2.8 Use the ``vlan_parser`` filter to transform an unsorted list of VLAN integers into a sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties: * VLANs are listed in ascending order. * Three or more consecutive VLANs are listed with a dash. * The first line of the list can be ``first_line_len`` characters long. * Subsequent list lines can be ``other_line_len`` characters long. To sort a VLAN list:: {{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }} This example renders the following sorted list:: ['100,1688,3002-3005,3999'] Another example Jinja template:: {% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %} switchport trunk allowed vlan {{ parsed_vlans[0] }} {% for i in range (1, parsed_vlans | count) %} switchport trunk allowed vlan add {{ parsed_vlans[i] }} {% endfor %} This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration. .. _hash_filters: Encrypting and checksumming strings and passwords ================================================= ..
versionadded:: 1.9 To get the sha1 hash of a string:: {{ 'test1' | hash('sha1') }} # => "b444ac06613fc8d63795be9ad0beaf55011936ac" To get the md5 hash of a string:: {{ 'test1' | hash('md5') }} # => "5a105e8b9d40e1329780d62ea2265d8a" Get a string checksum:: {{ 'test2' | checksum }} # => "109f4b3c50d7b0df729d299bc6f8e9ef9066971f" Other hashes (platform dependent):: {{ 'test2' | hash('blowfish') }} To get a sha512 password hash (random salt):: {{ 'passwordsaresecret' | password_hash('sha512') }} # => "$6$UIv3676O/ilZzWEE$ktEfFF19NQPF2zyxqxGkAceTnbEgpEKuGBtk6MlU4v2ZorWaVQUMyurgmHCh2Fr4wpmQ/Y.AlXMJkRnIS4RfH/" To get a sha256 password hash with a specific salt:: {{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }} # => "$5$mysecretsalt$ReKNyDYjkKNqRVwouShhsEqZ3VOE8eoVO4exihOfvG4" An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs:: {{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }} # => "$6$43927$lQxPKz2M2X.NWO.gK.t7phLwOKQMcSq72XxDZQ0XzYV6DlL1OD72h417aj16OnHTGxNzhftXJQBcjbunLEepM0" The hash types available depend on the control system running Ansible: ``hash`` depends on `hashlib <https://docs.python.org/3.8/library/hashlib.html>`_, and ``password_hash`` depends on `passlib <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html>`_. The `crypt <https://docs.python.org/3.8/library/crypt.html>`_ module is used as a fallback if ``passlib`` is not installed. .. versionadded:: 2.7 Some hash types allow providing a rounds parameter:: {{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }} # => "$5$rounds=10000$mysecretsalt$Tkm80llAxD4YHll6AgNIztKn0vzAACsuuEfYeGP7tm7" .. _other_useful_filters: Manipulating text ================= Several filters work with text, including URLs, file names, and path names. .. _comment_filter: Adding comments to files ------------------------ The ``comment`` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example, the following:: {{ "Plain style (default)" | comment }} produces this output: .. code-block:: text # # Plain style (default) # Ansible offers styles for comments in C (``//...``), C block (``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``):: {{ "C style" | comment('c') }} {{ "C block style" | comment('cblock') }} {{ "Erlang style" | comment('erlang') }} {{ "XML style" | comment('xml') }} You can define a custom comment character. This filter:: {{ "My Special Case" | comment(decoration="! ") }} produces: .. code-block:: text ! ! My Special Case ! You can fully customize the comment style:: {{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }} That creates the following output: .. code-block:: text ####### # # Custom style # ####### ### # The filter can also be applied to any Ansible variable. For example, to make the output of the ``ansible_managed`` variable more readable, we can change the definition in the ``ansible.cfg`` file to this: .. code-block:: jinja [defaults] ansible_managed = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host} and then use the variable with the ``comment`` filter:: {{ ansible_managed | comment }} which produces this output: .. code-block:: sh # # This file is managed by Ansible.
# # template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2 # date: 2015-09-10 11:02:58 # user: ansible # host: myhost # URLEncode Variables ------------------- The ``urlencode`` filter quotes data for use in a URL path or query using UTF-8:: {{ 'Trollhättan' | urlencode }} # => 'Trollh%C3%A4ttan' Splitting URLs -------------- .. versionadded:: 2.4 The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, it returns a dictionary of all the fields:: {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }} # => 'www.acme.com' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }} # => 'user:[email protected]:9000' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }} # => 'user' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }} # => 'password' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }} # => '/dir/index.html' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }} # => '9000' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }} # => 'http' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }} # => 'query=term' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }} # => 'fragment' {{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }} # => # { # "fragment": "fragment", # "hostname": "www.acme.com", # "netloc": "user:[email protected]:9000", # "password": "password", # "path": "/dir/index.html", # "port": 9000, # "query": "query=term", # "scheme": "http", # "username": "user" # } Searching strings with regular expressions ------------------------------------------ To search in a string or extract parts of a string with a regular expression, use the ``regex_search`` filter:: # Extracts the database name from a string {{ 'server1/database42' | regex_search('database[0-9]+') }} # => 'database42' # Returns an empty string if it cannot find a match {{ 'ansible' | regex_search('foobar') }} # => '' # Example for a case-insensitive search in multiline mode {{ 'foo\nBAR' | regex_search('^bar', multiline=True, ignorecase=True) }} # => 'BAR' # Extracts the server and database IDs from a string {{ 'server1/database42' | regex_search('server([0-9]+)/database([0-9]+)', '\\1', '\\2') }} # => ['1', '42'] # Extracts the dividend and divisor from a division {{ '21/42' | regex_search('(?P<dividend>[0-9]+)/(?P<divisor>[0-9]+)', '\\g<dividend>', '\\g<divisor>') }} # => ['21', '42'] To extract all occurrences of regex matches in a string, use the ``regex_findall`` filter:: # Returns a list of all IPv4 addresses in the string {{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }} # => ['8.8.8.8', '8.8.4.4'] # Returns all lines that end with "ar" {{ 'CAR\ntar\nfoo\nbar\n' | regex_findall('^.ar$', multiline=True, ignorecase=True) }} # => ['CAR', 'tar', 'bar'] To replace text in a string with regex, use the ``regex_replace`` filter:: # Convert "ansible" to "able" {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }} # => 'able' # Convert "foobar" to "bar" {{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }} # => 'bar' # Convert
"localhost:80" to "localhost, 80" using named groups {{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }} # => 'localhost, 80' # Convert "localhost:80" to "localhost" {{ 'localhost:80' | regex_replace(':80') }} # => 'localhost' # Comment all lines that end with "ar" {{ 'CAR\ntar\nfoo\nbar\n' | regex_replace('^(.ar)$', '#\\1', multiline=True, ignorecase=True) }} # => '#CAR\n#tar\nfoo\n#bar\n' .. note:: If you want to match the whole string and you are using ``*`` make sure to always wraparound your regular expression with the start/end anchors. For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements:: # add "https://" prefix to each item in a list GOOD: {{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }} {{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }} {{ hosts | map('regex_replace', '^', 'https://') | list }} BAD: {{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }} # append ':80' to each item in a list GOOD: {{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }} {{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }} {{ hosts | map('regex_replace', '$', ':80') | list }} BAD: {{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }} .. note:: Prior to ansible 2.0, if ``regex_replace`` filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments), then you needed to escape backreferences (for example, ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``). .. versionadded:: 2.0 To escape special characters within a standard Python regex, use the ``regex_escape`` filter (using the default ``re_type='python'`` option):: # convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$' {{ '^f.*o(.*)$' | regex_escape() }} .. versionadded:: 2.8 To escape special characters within a POSIX basic regex, use the ``regex_escape`` filter with the ``re_type='posix_basic'`` option:: # convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$' {{ '^f.*o(.*)$' | regex_escape('posix_basic') }} Managing file names and path names ---------------------------------- To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':: {{ path | basename }} To get the last name of a windows style file path (new in version 2.0):: {{ path | win_basename }} To separate the windows drive letter from the rest of a file path (new in version 2.0):: {{ path | win_splitdrive }} To get only the windows drive letter:: {{ path | win_splitdrive | first }} To get the rest of the path without the drive letter:: {{ path | win_splitdrive | last }} To get the directory from a path:: {{ path | dirname }} To get the directory from a windows path (new version 2.0):: {{ path | win_dirname }} To expand a path containing a tilde (`~`) character (new in version 1.5):: {{ path | expanduser }} To expand a path containing environment variables:: {{ path | expandvars }} .. note:: `expandvars` expands local variables; using it on remote paths can lead to errors. .. versionadded:: 2.6 To get the real path of a link (new in version 1.8):: {{ path | realpath }} To get the relative path of a link, from a start point (new in version 1.7):: {{ path | relpath('/etc') }} To get the root and extension of a path or file name (new in version 2.0):: # with path == 'nginx.conf' the return would be ('nginx', '.conf') {{ path | splitext }} The ``splitext`` filter always returns a pair of strings. 
The individual components can be accessed by using the ``first`` and ``last`` filters:: # with path == 'nginx.conf' the return would be 'nginx' {{ path | splitext | first }} # with path == 'nginx.conf' the return would be '.conf' {{ path | splitext | last }} To join one or more path components:: {{ ('/etc', path, 'subdir', file) | path_join }} .. versionadded:: 2.10 Manipulating strings ==================== To add quotes for shell usage:: - name: Run a shell command ansible.builtin.shell: echo {{ string_value | quote }} To concatenate a list into a string:: {{ list | join(" ") }} To split a string into a list:: {{ csv_string | split(",") }} .. versionadded:: 2.11 To work with Base64 encoded strings:: {{ encoded | b64decode }} {{ decoded | string | b64encode }} As of version 2.6, you can define the type of encoding to use; the default is ``utf-8``:: {{ encoded | b64decode(encoding='utf-16-le') }} {{ decoded | string | b64encode(encoding='utf-16-le') }} .. note:: The ``string`` filter is only required for Python 2 and ensures that the text to encode is a unicode string. Without that filter before ``b64encode``, the wrong value will be encoded. .. versionadded:: 2.6 Managing UUIDs ============== To create a namespaced UUIDv5:: {{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }} .. versionadded:: 2.10 To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E':: {{ string | to_uuid }} .. versionadded:: 1.9 To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:jinja-filters.map>`:: # get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host {{ ansible_mounts | map(attribute='mount') | join(',') }} Handling dates and times ======================== To get a date object from a string, use the ``to_datetime`` filter:: # Get the total number of seconds between two dates. The default date format is %Y-%m-%d %H:%M:%S, but you can pass your own format {{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }} # Get the remaining seconds after the delta has been calculated. NOTE: This does NOT convert years, days, hours, and so on to seconds. For that, use total_seconds() {{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }} # This expression evaluates to "12" and not "132". The delta is 2 hours, 12 seconds # Get the number of days between two dates. This returns only the number of days and discards remaining hours, minutes, and seconds {{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }} .. note:: For a full list of format codes for working with python date format strings, see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior. .. versionadded:: 2.4 To format a date using a string (like with the shell date command), use the ``strftime`` filter:: # Display year-month-day {{ '%Y-%m-%d' | strftime }} # => "2021-03-19" # Display hour:min:sec {{ '%H:%M:%S' | strftime }} # => "21:51:04" # Use ansible_date_time.epoch fact {{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }} # => "2021-03-19 21:54:09" # Use an arbitrary epoch value {{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01 {{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04 .. note:: To get all string possibilities, check https://docs.python.org/3/library/time.html#time.strftime Getting Kubernetes resource names ================================= ..
note:: These filters have migrated to the `kubernetes.core <https://galaxy.ansible.com/kubernetes/core>`_ collection. Follow the installation instructions to install that collection. Use the ``k8s_config_resource_name`` filter to obtain the name of a Kubernetes ConfigMap or Secret, including its hash:: {{ configmap_resource_definition | kubernetes.core.k8s_config_resource_name }} This can then be used to reference hashes in Pod specifications:: my_secret: kind: Secret metadata: name: my_secret_name deployment_resource: kind: Deployment spec: template: spec: containers: - envFrom: - secretRef: name: {{ my_secret | kubernetes.core.k8s_config_resource_name }} .. versionadded:: 2.8 .. _PyYAML library: https://pyyaml.org/ .. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation .. seealso:: :ref:`about_playbooks` An introduction to playbooks :ref:`playbooks_conditionals` Conditional statements in playbooks :ref:`playbooks_variables` All about variables :ref:`playbooks_loops` Looping in playbooks :ref:`playbooks_reuse_roles` Playbook organization by roles :ref:`playbooks_best_practices` Tips and tricks for playbooks `User Mailing List <https://groups.google.com/group/ansible-devel>`_ Have a question? Stop by the Google group! `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
closed
ansible/ansible
https://github.com/ansible/ansible
74,571
Support for choosing bcrypt version/ident with password_hash filter
### Summary While setting up sonarqube and automating setting the admin password directly in the database, I noticed that their bcrypt implementation is outdated and only supports the 2a version, while the password_hash bcrypt filter using passlib defaults to the latest version (2b). I'd like to set the ident parameter of passlib bcrypt function from the password_hash filter (like with rounds). Right now I'm doing this: ```yaml - name: encrypt password command: cmd: python3 - stdin: | from passlib.hash import bcrypt print(bcrypt.using(rounds=12,ident="2a").hash("{{ admin_password }}")) register: tmp ``` as a workaround. Obviously, sonarqube should upgrade their bcrypt version, but ansible should also be able to handle this. Passlib doc: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html# ### Issue Type Feature Idea ### Component Name password_hash ### Additional Information ```yaml password: "{{ admin_password | password_hash('bcrypt', rounds=12, ident='2a') }}" ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74571
https://github.com/ansible/ansible/pull/74595
20ef733ee02ba688757998404c1926381356b031
1bd7dcf339dd8b6c50bc16670be2448a206f4fdb
2021-05-05T10:40:54Z
python
2021-05-24T15:46:37Z
lib/ansible/plugins/filter/core.py
# (c) 2012, Jeroen Hoekx <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import base64 import glob import hashlib import json import ntpath import os.path import re import sys import time import uuid import yaml import datetime from functools import partial from random import Random, SystemRandom, shuffle from jinja2.filters import environmentfilter, do_groupby as _do_groupby from ansible.errors import AnsibleError, AnsibleFilterError, AnsibleFilterTypeError from ansible.module_utils.six import string_types, integer_types, reraise, text_type from ansible.module_utils.six.moves import shlex_quote from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.module_utils.common.collections import is_sequence from ansible.module_utils.common._collections_compat import Mapping from ansible.module_utils.common.yaml import yaml_load, yaml_load_all from ansible.parsing.ajson import AnsibleJSONEncoder from ansible.parsing.yaml.dumper import AnsibleDumper from ansible.template import recursive_check_defined from ansible.utils.display import Display from ansible.utils.encrypt import passlib_or_crypt from ansible.utils.hashing import md5s, checksum_s from ansible.utils.unicode import unicode_wrap from ansible.utils.vars import merge_hash display = Display() UUID_NAMESPACE_ANSIBLE = uuid.UUID('361E6D51-FAEC-444A-9079-341386DA8E2E') def to_yaml(a, *args, **kw): '''Make verbose, human readable yaml''' default_flow_style = kw.pop('default_flow_style', None) transformed = yaml.dump(a, Dumper=AnsibleDumper, allow_unicode=True, default_flow_style=default_flow_style, **kw) return to_text(transformed) def to_nice_yaml(a, indent=4, *args, **kw): '''Make verbose, human readable yaml''' transformed = yaml.dump(a, Dumper=AnsibleDumper, indent=indent, allow_unicode=True, default_flow_style=False, **kw) return to_text(transformed) def to_json(a, *args, **kw): ''' Convert the value to JSON ''' return json.dumps(a, cls=AnsibleJSONEncoder, *args, **kw) def to_nice_json(a, indent=4, sort_keys=True, *args, **kw): '''Make verbose, human readable JSON''' return to_json(a, indent=indent, sort_keys=sort_keys, separators=(',', ': '), *args, **kw) def to_bool(a): ''' return a bool for the arg ''' if a is None or isinstance(a, bool): return a if isinstance(a, string_types): a = a.lower() if a in ('yes', 'on', '1', 'true', 1): return True return False def to_datetime(string, format="%Y-%m-%d %H:%M:%S"): return datetime.datetime.strptime(string, format) def strftime(string_format, second=None): ''' return a date string using string. 
See https://docs.python.org/2/library/time.html#time.strftime for format ''' if second is not None: try: second = float(second) except Exception: raise AnsibleFilterError('Invalid value for epoch value (%s)' % second) return time.strftime(string_format, time.localtime(second)) def quote(a): ''' return its argument quoted for shell usage ''' if a is None: a = u'' return shlex_quote(to_text(a)) def fileglob(pathname): ''' return list of matched regular files for glob ''' return [g for g in glob.glob(pathname) if os.path.isfile(g)] def regex_replace(value='', pattern='', replacement='', ignorecase=False, multiline=False): ''' Perform a `re.sub` returning a string ''' value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr') flags = 0 if ignorecase: flags |= re.I if multiline: flags |= re.M _re = re.compile(pattern, flags=flags) return _re.sub(replacement, value) def regex_findall(value, regex, multiline=False, ignorecase=False): ''' Perform re.findall and return the list of matches ''' value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr') flags = 0 if ignorecase: flags |= re.I if multiline: flags |= re.M return re.findall(regex, value, flags) def regex_search(value, regex, *args, **kwargs): ''' Perform re.search and return the list of matches or a backref ''' value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr') groups = list() for arg in args: if arg.startswith('\\g'): match = re.match(r'\\g<(\S+)>', arg).group(1) groups.append(match) elif arg.startswith('\\'): match = int(re.match(r'\\(\d+)', arg).group(1)) groups.append(match) else: raise AnsibleFilterError('Unknown argument') flags = 0 if kwargs.get('ignorecase'): flags |= re.I if kwargs.get('multiline'): flags |= re.M match = re.search(regex, value, flags) if match: if not groups: return match.group() else: items = list() for item in groups: items.append(match.group(item)) return items def ternary(value, true_val, false_val, none_val=None): ''' value ? true_val : false_val ''' if value is None and none_val is not None: return none_val elif bool(value): return true_val else: return false_val def regex_escape(string, re_type='python'): string = to_text(string, errors='surrogate_or_strict', nonstring='simplerepr') '''Escape all regular expressions special characters from STRING.''' if re_type == 'python': return re.escape(string) elif re_type == 'posix_basic': # list of BRE special chars: # https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions return regex_replace(string, r'([].[^$*\\])', r'\\\1') # TODO: implement posix_extended # It's similar to, but different from python regex, which is similar to, # but different from PCRE. It's possible that re.escape would work here. 
# https://remram44.github.io/regex-cheatsheet/regex.html#programs elif re_type == 'posix_extended': raise AnsibleFilterError('Regex type (%s) not yet implemented' % re_type) else: raise AnsibleFilterError('Invalid regex type (%s)' % re_type) def from_yaml(data): if isinstance(data, string_types): # The ``text_type`` call here strips any custom # string wrapper class, so that CSafeLoader can # read the data return yaml_load(text_type(to_text(data, errors='surrogate_or_strict'))) return data def from_yaml_all(data): if isinstance(data, string_types): # The ``text_type`` call here strips any custom # string wrapper class, so that CSafeLoader can # read the data return yaml_load_all(text_type(to_text(data, errors='surrogate_or_strict'))) return data @environmentfilter def rand(environment, end, start=None, step=None, seed=None): if seed is None: r = SystemRandom() else: r = Random(seed) if isinstance(end, integer_types): if not start: start = 0 if not step: step = 1 return r.randrange(start, end, step) elif hasattr(end, '__iter__'): if start or step: raise AnsibleFilterError('start and step can only be used with integer values') return r.choice(end) else: raise AnsibleFilterError('random can only be used on sequences and integers') def randomize_list(mylist, seed=None): try: mylist = list(mylist) if seed: r = Random(seed) r.shuffle(mylist) else: shuffle(mylist) except Exception: pass return mylist def get_hash(data, hashtype='sha1'): try: h = hashlib.new(hashtype) except Exception as e: # hash is not supported? raise AnsibleFilterError(e) h.update(to_bytes(data, errors='surrogate_or_strict')) return h.hexdigest() def get_encrypted_password(password, hashtype='sha512', salt=None, salt_size=None, rounds=None): passlib_mapping = { 'md5': 'md5_crypt', 'blowfish': 'bcrypt', 'sha256': 'sha256_crypt', 'sha512': 'sha512_crypt', } hashtype = passlib_mapping.get(hashtype, hashtype) try: return passlib_or_crypt(password, hashtype, salt=salt, salt_size=salt_size, rounds=rounds) except AnsibleError as e: reraise(AnsibleFilterError, AnsibleFilterError(to_native(e), orig_exc=e), sys.exc_info()[2]) def to_uuid(string, namespace=UUID_NAMESPACE_ANSIBLE): uuid_namespace = namespace if not isinstance(uuid_namespace, uuid.UUID): try: uuid_namespace = uuid.UUID(namespace) except (AttributeError, ValueError) as e: raise AnsibleFilterError("Invalid value '%s' for 'namespace': %s" % (to_native(namespace), to_native(e))) # uuid.uuid5() requires bytes on Python 2 and bytes or text or Python 3 return to_text(uuid.uuid5(uuid_namespace, to_native(string, errors='surrogate_or_strict'))) def mandatory(a, msg=None): from jinja2.runtime import Undefined ''' Make a variable mandatory ''' if isinstance(a, Undefined): if a._undefined_name is not None: name = "'%s' " % to_text(a._undefined_name) else: name = '' if msg is not None: raise AnsibleFilterError(to_native(msg)) else: raise AnsibleFilterError("Mandatory variable %s not defined." % name) return a def combine(*terms, **kwargs): recursive = kwargs.pop('recursive', False) list_merge = kwargs.pop('list_merge', 'replace') if kwargs: raise AnsibleFilterError("'recursive' and 'list_merge' are the only valid keyword arguments") # allow the user to do `[dict1, dict2, ...] 
| combine` dictionaries = flatten(terms, levels=1) # recursively check that every elements are defined (for jinja2) recursive_check_defined(dictionaries) if not dictionaries: return {} if len(dictionaries) == 1: return dictionaries[0] # merge all the dicts so that the dict at the end of the array have precedence # over the dict at the beginning. # we merge the dicts from the highest to the lowest priority because there is # a huge probability that the lowest priority dict will be the biggest in size # (as the low prio dict will hold the "default" values and the others will be "patches") # and merge_hash create a copy of it's first argument. # so high/right -> low/left is more efficient than low/left -> high/right high_to_low_prio_dict_iterator = reversed(dictionaries) result = next(high_to_low_prio_dict_iterator) for dictionary in high_to_low_prio_dict_iterator: result = merge_hash(dictionary, result, recursive, list_merge) return result def comment(text, style='plain', **kw): # Predefined comment types comment_styles = { 'plain': { 'decoration': '# ' }, 'erlang': { 'decoration': '% ' }, 'c': { 'decoration': '// ' }, 'cblock': { 'beginning': '/*', 'decoration': ' * ', 'end': ' */' }, 'xml': { 'beginning': '<!--', 'decoration': ' - ', 'end': '-->' } } # Pointer to the right comment type style_params = comment_styles[style] if 'decoration' in kw: prepostfix = kw['decoration'] else: prepostfix = style_params['decoration'] # Default params p = { 'newline': '\n', 'beginning': '', 'prefix': (prepostfix).rstrip(), 'prefix_count': 1, 'decoration': '', 'postfix': (prepostfix).rstrip(), 'postfix_count': 1, 'end': '' } # Update default params p.update(style_params) p.update(kw) # Compose substrings for the final string str_beginning = '' if p['beginning']: str_beginning = "%s%s" % (p['beginning'], p['newline']) str_prefix = '' if p['prefix']: if p['prefix'] != p['newline']: str_prefix = str( "%s%s" % (p['prefix'], p['newline'])) * int(p['prefix_count']) else: str_prefix = str( "%s" % (p['newline'])) * int(p['prefix_count']) str_text = ("%s%s" % ( p['decoration'], # Prepend each line of the text with the decorator text.replace( p['newline'], "%s%s" % (p['newline'], p['decoration'])))).replace( # Remove trailing spaces when only decorator is on the line "%s%s" % (p['decoration'], p['newline']), "%s%s" % (p['decoration'].rstrip(), p['newline'])) str_postfix = p['newline'].join( [''] + [p['postfix'] for x in range(p['postfix_count'])]) str_end = '' if p['end']: str_end = "%s%s" % (p['newline'], p['end']) # Return the final string return "%s%s%s%s%s" % ( str_beginning, str_prefix, str_text, str_postfix, str_end) @environmentfilter def extract(environment, item, container, morekeys=None): if morekeys is None: keys = [item] elif isinstance(morekeys, list): keys = [item] + morekeys else: keys = [item, morekeys] value = container for key in keys: value = environment.getitem(value, key) return value @environmentfilter def do_groupby(environment, value, attribute): """Overridden groupby filter for jinja2, to address an issue with jinja2>=2.9.0,<2.9.5 where a namedtuple was returned which has repr that prevents ansible.template.safe_eval.safe_eval from being able to parse and eval the data. jinja2<2.9.0,>=2.9.5 is not affected, as <2.9.0 uses a tuple, and >=2.9.5 uses a standard tuple repr on the namedtuple. The adaptation here, is to run the jinja2 `do_groupby` function, and cast all of the namedtuples to a regular tuple. 
See https://github.com/ansible/ansible/issues/20098 We may be able to remove this in the future. """ return [tuple(t) for t in _do_groupby(environment, value, attribute)] def b64encode(string, encoding='utf-8'): return to_text(base64.b64encode(to_bytes(string, encoding=encoding, errors='surrogate_or_strict'))) def b64decode(string, encoding='utf-8'): return to_text(base64.b64decode(to_bytes(string, errors='surrogate_or_strict')), encoding=encoding) def flatten(mylist, levels=None, skip_nulls=True): ret = [] for element in mylist: if skip_nulls and element in (None, 'None', 'null'): # ignore null items continue elif is_sequence(element): if levels is None: ret.extend(flatten(element, skip_nulls=skip_nulls)) elif levels >= 1: # decrement as we go down the stack ret.extend(flatten(element, levels=(int(levels) - 1), skip_nulls=skip_nulls)) else: ret.append(element) else: ret.append(element) return ret def subelements(obj, subelements, skip_missing=False): '''Accepts a dict or list of dicts, and a dotted accessor and produces a product of the element and the results of the dotted accessor >>> obj = [{"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}] >>> subelements(obj, 'groups') [({'name': 'alice', 'groups': ['wheel'], 'authorized': ['/tmp/alice/onekey.pub']}, 'wheel')] ''' if isinstance(obj, dict): element_list = list(obj.values()) elif isinstance(obj, list): element_list = obj[:] else: raise AnsibleFilterError('obj must be a list of dicts or a nested dict') if isinstance(subelements, list): subelement_list = subelements[:] elif isinstance(subelements, string_types): subelement_list = subelements.split('.') else: raise AnsibleFilterTypeError('subelements must be a list or a string') results = [] for element in element_list: values = element for subelement in subelement_list: try: values = values[subelement] except KeyError: if skip_missing: values = [] break raise AnsibleFilterError("could not find %r key in iterated item %r" % (subelement, values)) except TypeError: raise AnsibleFilterTypeError("the key %s should point to a dictionary, got '%s'" % (subelement, values)) if not isinstance(values, list): raise AnsibleFilterTypeError("the key %r should point to a list, got %r" % (subelement, values)) for value in values: results.append((element, value)) return results def dict_to_list_of_dict_key_value_elements(mydict, key_name='key', value_name='value'): ''' takes a dictionary and transforms it into a list of dictionaries, with each having a 'key' and 'value' keys that correspond to the keys and values of the original ''' if not isinstance(mydict, Mapping): raise AnsibleFilterTypeError("dict2items requires a dictionary, got %s instead." % type(mydict)) ret = [] for key in mydict: ret.append({key_name: key, value_name: mydict[key]}) return ret def list_of_dict_key_value_elements_to_dict(mylist, key_name='key', value_name='value'): ''' takes a list of dicts with each having a 'key' and 'value' keys, and transforms the list into a dictionary, effectively as the reverse of dict2items ''' if not is_sequence(mylist): raise AnsibleFilterTypeError("items2dict requires a list, got %s instead." 
% type(mylist)) return dict((item[key_name], item[value_name]) for item in mylist) def path_join(paths): ''' takes a sequence or a string, and return a concatenation of the different members ''' if isinstance(paths, string_types): return os.path.join(paths) elif is_sequence(paths): return os.path.join(*paths) else: raise AnsibleFilterTypeError("|path_join expects string or sequence, got %s instead." % type(paths)) class FilterModule(object): ''' Ansible core jinja2 filters ''' def filters(self): return { # jinja2 overrides 'groupby': do_groupby, # base 64 'b64decode': b64decode, 'b64encode': b64encode, # uuid 'to_uuid': to_uuid, # json 'to_json': to_json, 'to_nice_json': to_nice_json, 'from_json': json.loads, # yaml 'to_yaml': to_yaml, 'to_nice_yaml': to_nice_yaml, 'from_yaml': from_yaml, 'from_yaml_all': from_yaml_all, # path 'basename': partial(unicode_wrap, os.path.basename), 'dirname': partial(unicode_wrap, os.path.dirname), 'expanduser': partial(unicode_wrap, os.path.expanduser), 'expandvars': partial(unicode_wrap, os.path.expandvars), 'path_join': path_join, 'realpath': partial(unicode_wrap, os.path.realpath), 'relpath': partial(unicode_wrap, os.path.relpath), 'splitext': partial(unicode_wrap, os.path.splitext), 'win_basename': partial(unicode_wrap, ntpath.basename), 'win_dirname': partial(unicode_wrap, ntpath.dirname), 'win_splitdrive': partial(unicode_wrap, ntpath.splitdrive), # file glob 'fileglob': fileglob, # types 'bool': to_bool, 'to_datetime': to_datetime, # date formatting 'strftime': strftime, # quote string for shell usage 'quote': quote, # hash filters # md5 hex digest of string 'md5': md5s, # sha1 hex digest of string 'sha1': checksum_s, # checksum of string as used by ansible for checksumming files 'checksum': checksum_s, # generic hashing 'password_hash': get_encrypted_password, 'hash': get_hash, # regex 'regex_replace': regex_replace, 'regex_escape': regex_escape, 'regex_search': regex_search, 'regex_findall': regex_findall, # ? : ; 'ternary': ternary, # random stuff 'random': rand, 'shuffle': randomize_list, # undefined 'mandatory': mandatory, # comment-style decoration 'comment': comment, # debug 'type_debug': lambda o: o.__class__.__name__, # Data structures 'combine': combine, 'extract': extract, 'flatten': flatten, 'dict2items': dict_to_list_of_dict_key_value_elements, 'items2dict': list_of_dict_key_value_elements_to_dict, 'subelements': subelements, 'split': partial(unicode_wrap, text_type.split), }
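# --- Hedged sketch (editor's illustration, not part of the core.py file above) ---
# Issue 74571 in this record asks the password_hash filter (backed by
# get_encrypted_password() above) to forward an `ident` option to passlib's
# bcrypt handler, which already supports it. The underlying passlib call,
# taken from the issue body and assuming passlib is installed, looks like:
#
#     from passlib.hash import bcrypt
#     print(bcrypt.using(rounds=12, ident="2a").hash("secretpassword"))
#
# A minimal version of the requested change would add `ident=None` to the
# get_encrypted_password() signature and pass it through to passlib_or_crypt(),
# assuming that helper grows a matching parameter (the actual fix is in the
# linked pull request).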
closed
ansible/ansible
https://github.com/ansible/ansible
74,571
Support for choosing bcrypt version/ident with password_hash filter
### Summary

While setting up sonarqube and automating setting the admin password directly in the database, I noticed that their bcrypt implementation is outdated and only supports the 2a version, while the password_hash bcrypt filter using passlib defaults to the latest version (2b). I'd like to set the ident parameter of passlib bcrypt function from the password_hash filter (like with rounds). Right now I'm doing this:

```yaml
- name: encrypt password
  command:
    cmd: python3 -
    stdin: |
      from passlib.hash import bcrypt
      print(bcrypt.using(rounds=12,ident="2a").hash("{{ admin_password }}"))
  register: tmp
```

as a workaround. Obviously, sonarqube should upgrade their bcrypt version, but ansible should also be able to handle this.

Passlib doc: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#

### Issue Type

Feature Idea

### Component Name

password_hash

### Additional Information

```yaml
password: "{{ admin_password | password_hash('bcrypt', rounds=12, ident='2a') }}"
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
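The reporter's workaround boils down to passlib's `using()` customization hook. A minimal standalone sketch of the same idea (placeholder secret, not taken from the report; requires the third-party `passlib` package plus a working bcrypt backend):

```python
# Minimal sketch of the workaround above: pin the bcrypt ident to '2a' via
# passlib so older consumers (like the reporter's sonarqube) accept the hash.
from passlib.hash import bcrypt

legacy_bcrypt = bcrypt.using(rounds=12, ident="2a")

hashed = legacy_bcrypt.hash("s3cret")  # placeholder secret
assert hashed.startswith("$2a$12$")   # '2a' ident, cost factor 12
assert legacy_bcrypt.verify("s3cret", hashed)
```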
https://github.com/ansible/ansible/issues/74571
https://github.com/ansible/ansible/pull/74595
20ef733ee02ba688757998404c1926381356b031
1bd7dcf339dd8b6c50bc16670be2448a206f4fdb
2021-05-05T10:40:54Z
python
2021-05-24T15:46:37Z
lib/ansible/plugins/lookup/password.py
# (c) 2012, Daniel Hokka Zakrisson <[email protected]> # (c) 2013, Javier Candeira <[email protected]> # (c) 2013, Maykel Moya <[email protected]> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ name: password version_added: "1.1" author: - Daniel Hokka Zakrisson (!UNKNOWN) <[email protected]> - Javier Candeira (!UNKNOWN) <[email protected]> - Maykel Moya (!UNKNOWN) <[email protected]> short_description: retrieve or generate a random password, stored in a file description: - Generates a random plaintext password and stores it in a file at a given filepath. - If the file exists previously, it will retrieve its contents, behaving just like with_file. - 'Usage of variables like C("{{ inventory_hostname }}") in the filepath can be used to set up random passwords per host, which simplifies password management in C("host_vars") variables.' - A special case is using /dev/null as a path. The password lookup will generate a new random password each time, but will not write it to /dev/null. This can be used when you need a password without storing it on the controller. options: _terms: description: - path to the file that stores/will store the passwords required: True encrypt: description: - Which hash scheme to encrypt the returning password, should be one hash scheme from C(passlib.hash; md5_crypt, bcrypt, sha256_crypt, sha512_crypt). - If not provided, the password will be returned in plain text. - Note that the password is always stored as plain text, only the returning password is encrypted. - Encrypt also forces saving the salt value for idempotence. - Note that before 2.6 this option was incorrectly labeled as a boolean for a long time. chars: version_added: "1.4" description: - Define comma separated list of names that compose a custom character set in the generated passwords. - 'By default generated passwords contain a random mix of upper and lowercase ASCII letters, the numbers 0-9, and punctuation (". , : - _").' - "They can be either parts of Python's string module attributes or represented literally ( :, -)." - "Though string modules can vary by Python version, valid values for both major releases include: 'ascii_lowercase', 'ascii_uppercase', 'digits', 'hexdigits', 'octdigits', 'printable', 'punctuation' and 'whitespace'." - Be aware that Python's 'hexdigits' includes lower and upper case versions of a-f, so it is not a good choice as it doubles the chances of those values for systems that won't distinguish case, distorting the expected entropy. - "To enter comma use two commas ',,' somewhere - preferably at the end. Quotes and double quotes are not supported." type: string length: description: The length of the generated password. default: 20 type: integer notes: - A great alternative to the password lookup plugin, if you don't need to generate random passwords on a per-host basis, would be to use Vault in playbooks. Read the documentation there and consider using it first, it will be more desirable for most applications. - If the file already exists, no data will be written to it. If the file has contents, those contents will be read in as the password. Empty files cause the password to return as an empty string. 
- 'As all lookups, this runs on the Ansible host as the user running the playbook, and "become" does not apply, the target file must be readable by the playbook user, or, if it does not exist, the playbook user must have sufficient privileges to create it. (So, for example, attempts to write into areas such as /etc will fail unless the entire playbook is being run as root).' """ EXAMPLES = """ - name: create a mysql user with a random password mysql_user: name: "{{ client }}" password: "{{ lookup('password', 'credentials/' + client + '/' + tier + '/' + role + '/mysqlpassword length=15') }}" priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL" - name: create a mysql user with a random password using only ascii letters mysql_user: name: "{{ client }}" password: "{{ lookup('password', '/tmp/passwordfile chars=ascii_letters') }}" priv: '{{ client }}_{{ tier }}_{{ role }}.*:ALL' - name: create a mysql user with an 8 character random password using only digits mysql_user: name: "{{ client }}" password: "{{ lookup('password', '/tmp/passwordfile length=8 chars=digits') }}" priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL" - name: create a mysql user with a random password using many different char sets mysql_user: name: "{{ client }}" password: "{{ lookup('password', '/tmp/passwordfile chars=ascii_letters,digits,punctuation') }}" priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL" - name: create lowercase 8 character name for Kubernetes pod name set_fact: random_pod_name: "web-{{ lookup('password', '/dev/null chars=ascii_lowercase,digits length=8') }}" """ RETURN = """ _raw: description: - a password type: list elements: str """ import os import string import time import shutil import hashlib from ansible.errors import AnsibleError, AnsibleAssertionError from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.parsing.splitter import parse_kv from ansible.plugins.lookup import LookupBase from ansible.utils.encrypt import BaseHash, do_encrypt, random_password, random_salt from ansible.utils.path import makedirs_safe DEFAULT_LENGTH = 20 VALID_PARAMS = frozenset(('length', 'encrypt', 'chars')) def _parse_parameters(term): """Hacky parsing of params See https://github.com/ansible/ansible-modules-core/issues/1968#issuecomment-136842156 and the first_found lookup For how we want to fix this later """ first_split = term.split(' ', 1) if len(first_split) <= 1: # Only a single argument given, therefore it's a path relpath = term params = dict() else: relpath = first_split[0] params = parse_kv(first_split[1]) if '_raw_params' in params: # Spaces in the path? relpath = u' '.join((relpath, params['_raw_params'])) del params['_raw_params'] # Check that we parsed the params correctly if not term.startswith(relpath): # Likely, the user had a non parameter following a parameter. # Reject this as a user typo raise AnsibleError('Unrecognized value after key=value parameters given to password lookup') # No _raw_params means we already found the complete path when # we split it initially # Check for invalid parameters. 
Probably a user typo invalid_params = frozenset(params.keys()).difference(VALID_PARAMS) if invalid_params: raise AnsibleError('Unrecognized parameter(s) given to password lookup: %s' % ', '.join(invalid_params)) # Set defaults params['length'] = int(params.get('length', DEFAULT_LENGTH)) params['encrypt'] = params.get('encrypt', None) params['chars'] = params.get('chars', None) if params['chars']: tmp_chars = [] if u',,' in params['chars']: tmp_chars.append(u',') tmp_chars.extend(c for c in params['chars'].replace(u',,', u',').split(u',') if c) params['chars'] = tmp_chars else: # Default chars for password params['chars'] = [u'ascii_letters', u'digits', u".,:-_"] return relpath, params def _read_password_file(b_path): """Read the contents of a password file and return it :arg b_path: A byte string containing the path to the password file :returns: a text string containing the contents of the password file or None if no password file was present. """ content = None if os.path.exists(b_path): with open(b_path, 'rb') as f: b_content = f.read().rstrip() content = to_text(b_content, errors='surrogate_or_strict') return content def _gen_candidate_chars(characters): '''Generate a string containing all valid chars as defined by ``characters`` :arg characters: A list of character specs. The character specs are shorthand names for sets of characters like 'digits', 'ascii_letters', or 'punctuation' or a string to be included verbatim. The values of each char spec can be: * a name of an attribute in the 'strings' module ('digits' for example). The value of the attribute will be added to the candidate chars. * a string of characters. If the string isn't an attribute in 'string' module, the string will be directly added to the candidate chars. For example:: characters=['digits', '?|']`` will match ``string.digits`` and add all ascii digits. ``'?|'`` will add the question mark and pipe characters directly. Return will be the string:: u'0123456789?|' ''' chars = [] for chars_spec in characters: # getattr from string expands things like "ascii_letters" and "digits" # into a set of characters. chars.append(to_text(getattr(string, to_native(chars_spec), chars_spec), errors='strict')) chars = u''.join(chars).replace(u'"', u'').replace(u"'", u'') return chars def _parse_content(content): '''parse our password data format into password and salt :arg content: The data read from the file :returns: password and salt ''' password = content salt = None salt_slug = u' salt=' try: sep = content.rindex(salt_slug) except ValueError: # No salt pass else: salt = password[sep + len(salt_slug):] password = content[:sep] return password, salt def _format_content(password, salt, encrypt=None): """Format the password and salt for saving :arg password: the plaintext password to save :arg salt: the salt to use when encrypting a password :arg encrypt: Which method the user requests that this password is encrypted. Note that the password is saved in clear. Encrypt just tells us if we must save the salt value for idempotence. Defaults to None. :returns: a text string containing the formatted information .. warning:: Passwords are saved in clear. This is because the playbooks expect to get cleartext passwords from this lookup. """ if not encrypt and not salt: return password # At this point, the calling code should have assured us that there is a salt value. 
if not salt: raise AnsibleAssertionError('_format_content was called with encryption requested but no salt value') return u'%s salt=%s' % (password, salt) def _write_password_file(b_path, content): b_pathdir = os.path.dirname(b_path) makedirs_safe(b_pathdir, mode=0o700) with open(b_path, 'wb') as f: os.chmod(b_path, 0o600) b_content = to_bytes(content, errors='surrogate_or_strict') + b'\n' f.write(b_content) def _get_lock(b_path): """Get the lock for writing password file.""" first_process = False b_pathdir = os.path.dirname(b_path) lockfile_name = to_bytes("%s.ansible_lockfile" % hashlib.sha1(b_path).hexdigest()) lockfile = os.path.join(b_pathdir, lockfile_name) if not os.path.exists(lockfile) and b_path != to_bytes('/dev/null'): try: makedirs_safe(b_pathdir, mode=0o700) fd = os.open(lockfile, os.O_CREAT | os.O_EXCL) os.close(fd) first_process = True except OSError as e: if e.strerror != 'File exists': raise counter = 0 # if the lock is got by other process, wait until it's released while os.path.exists(lockfile) and not first_process: time.sleep(2 ** counter) if counter >= 2: raise AnsibleError("Password lookup cannot get the lock in 7 seconds, abort..." "This may caused by un-removed lockfile" "you can manually remove it from controller machine at %s and try again" % lockfile) counter += 1 return first_process, lockfile def _release_lock(lockfile): """Release the lock so other processes can read the password file.""" if os.path.exists(lockfile): os.remove(lockfile) class LookupModule(LookupBase): def run(self, terms, variables, **kwargs): ret = [] for term in terms: relpath, params = _parse_parameters(term) path = self._loader.path_dwim(relpath) b_path = to_bytes(path, errors='surrogate_or_strict') chars = _gen_candidate_chars(params['chars']) changed = None # make sure only one process finishes all the job first first_process, lockfile = _get_lock(b_path) content = _read_password_file(b_path) if content is None or b_path == to_bytes('/dev/null'): plaintext_password = random_password(params['length'], chars) salt = None changed = True else: plaintext_password, salt = _parse_content(content) encrypt = params['encrypt'] if encrypt and not salt: changed = True try: salt = random_salt(BaseHash.algorithms[encrypt].salt_size) except KeyError: salt = random_salt() if changed and b_path != to_bytes('/dev/null'): content = _format_content(plaintext_password, salt, encrypt=encrypt) _write_password_file(b_path, content) if first_process: # let other processes continue _release_lock(lockfile) if encrypt: password = do_encrypt(plaintext_password, encrypt, salt=salt) ret.append(password) else: ret.append(plaintext_password) return ret
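For readers tracing how a lookup term such as `/tmp/passwordfile length=8 chars=digits` gets split, a hedged sketch that exercises the private `_parse_parameters` and `_gen_candidate_chars` helpers shown above directly (private API of this version, so subject to change):

```python
# Hedged sketch: calling the private helpers above to show term parsing.
from ansible.plugins.lookup.password import (
    _gen_candidate_chars,
    _parse_parameters,
)

relpath, params = _parse_parameters(u'/tmp/passwordfile length=8 chars=ascii_letters,digits')
assert relpath == u'/tmp/passwordfile'
assert params['length'] == 8
assert params['chars'] == [u'ascii_letters', u'digits']

# Named specs are expanded via attributes of Python's string module
print(_gen_candidate_chars(params['chars']))
# -> abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789
```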
closed
ansible/ansible
https://github.com/ansible/ansible
74,571
Support for choosing bcrypt version/ident with password_hash filter
### Summary

While setting up sonarqube and automating setting the admin password directly in the database, I noticed that their bcrypt implementation is outdated and only supports the 2a version, while the password_hash bcrypt filter using passlib defaults to the latest version (2b). I'd like to set the ident parameter of passlib bcrypt function from the password_hash filter (like with rounds). Right now I'm doing this:

```yaml
- name: encrypt password
  command:
    cmd: python3 -
    stdin: |
      from passlib.hash import bcrypt
      print(bcrypt.using(rounds=12,ident="2a").hash("{{ admin_password }}"))
  register: tmp
```

as a workaround. Obviously, sonarqube should upgrade their bcrypt version, but ansible should also be able to handle this.

Passlib doc: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#

### Issue Type

Feature Idea

### Component Name

password_hash

### Additional Information

```yaml
password: "{{ admin_password | password_hash('bcrypt', rounds=12, ident='2a') }}"
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74571
https://github.com/ansible/ansible/pull/74595
20ef733ee02ba688757998404c1926381356b031
1bd7dcf339dd8b6c50bc16670be2448a206f4fdb
2021-05-05T10:40:54Z
python
2021-05-24T15:46:37Z
lib/ansible/utils/encrypt.py
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import multiprocessing
import random
import re
import string
import sys

from collections import namedtuple

from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils.six import text_type
from ansible.module_utils._text import to_text, to_bytes
from ansible.utils.display import Display

PASSLIB_E = CRYPT_E = None
HAS_CRYPT = PASSLIB_AVAILABLE = False
try:
    import passlib
    import passlib.hash
    from passlib.utils.handlers import HasRawSalt
    try:
        from passlib.utils.binary import bcrypt64
    except ImportError:
        from passlib.utils import bcrypt64
    PASSLIB_AVAILABLE = True
except Exception as e:
    PASSLIB_E = e

try:
    import crypt
    HAS_CRYPT = True
except Exception as e:
    CRYPT_E = e

display = Display()

__all__ = ['do_encrypt']

_LOCK = multiprocessing.Lock()

DEFAULT_PASSWORD_LENGTH = 20


def random_password(length=DEFAULT_PASSWORD_LENGTH, chars=C.DEFAULT_PASSWORD_CHARS):
    '''Return a random password string of length containing only chars

    :kwarg length: The number of characters in the new password.  Defaults to 20.
    :kwarg chars: The characters to choose from.  The default is all ascii
        letters, ascii digits, and these symbols ``.,:-_``
    '''
    if not isinstance(chars, text_type):
        raise AnsibleAssertionError('%s (%s) is not a text_type' % (chars, type(chars)))

    random_generator = random.SystemRandom()
    return u''.join(random_generator.choice(chars) for dummy in range(length))


def random_salt(length=8):
    """Return a text string suitable for use as a salt for the hash
    functions we use to encrypt passwords.
    """
    # Note passlib salt values must be pure ascii so we can't let the user
    # configure this
    salt_chars = string.ascii_letters + string.digits + u'./'
    return random_password(length=length, chars=salt_chars)


class BaseHash(object):
    algo = namedtuple('algo', ['crypt_id', 'salt_size', 'implicit_rounds', 'salt_exact'])
    algorithms = {
        'md5_crypt': algo(crypt_id='1', salt_size=8, implicit_rounds=None, salt_exact=False),
        'bcrypt': algo(crypt_id='2a', salt_size=22, implicit_rounds=None, salt_exact=True),
        'sha256_crypt': algo(crypt_id='5', salt_size=16, implicit_rounds=5000, salt_exact=False),
        'sha512_crypt': algo(crypt_id='6', salt_size=16, implicit_rounds=5000, salt_exact=False),
    }

    def __init__(self, algorithm):
        self.algorithm = algorithm


class CryptHash(BaseHash):
    def __init__(self, algorithm):
        super(CryptHash, self).__init__(algorithm)

        if not HAS_CRYPT:
            raise AnsibleError("crypt.crypt cannot be used as the 'crypt' python library is not installed or is unusable.", orig_exc=CRYPT_E)

        if sys.platform.startswith('darwin'):
            raise AnsibleError("crypt.crypt not supported on Mac OS X/Darwin, install passlib python module")

        if algorithm not in self.algorithms:
            raise AnsibleError("crypt.crypt does not support '%s' algorithm" % self.algorithm)
        self.algo_data = self.algorithms[algorithm]

    def hash(self, secret, salt=None, salt_size=None, rounds=None):
        salt = self._salt(salt, salt_size)
        rounds = self._rounds(rounds)
        return self._hash(secret, salt, rounds)

    def _salt(self, salt, salt_size):
        salt_size = salt_size or self.algo_data.salt_size
        ret = salt or random_salt(salt_size)
        if re.search(r'[^./0-9A-Za-z]', ret):
            raise AnsibleError("invalid characters in salt")
        if self.algo_data.salt_exact and len(ret) != self.algo_data.salt_size:
            raise AnsibleError("invalid salt size")
        elif not self.algo_data.salt_exact and len(ret) > self.algo_data.salt_size:
            raise AnsibleError("invalid salt size")
        return ret

    def _rounds(self, rounds):
        if rounds == self.algo_data.implicit_rounds:
            # Passlib does not include the rounds if it is the same as implicit_rounds.
            # Make crypt lib behave the same, by not explicitly specifying the rounds in that case.
            return None
        else:
            return rounds

    def _hash(self, secret, salt, rounds):
        if rounds is None:
            saltstring = "$%s$%s" % (self.algo_data.crypt_id, salt)
        else:
            saltstring = "$%s$rounds=%d$%s" % (self.algo_data.crypt_id, rounds, salt)

        # crypt.crypt on Python < 3.9 returns None if it cannot parse saltstring
        # On Python >= 3.9, it throws OSError.
        try:
            result = crypt.crypt(secret, saltstring)
            orig_exc = None
        except OSError as e:
            result = None
            orig_exc = e

        # None as result would be interpreted by the some modules (user module)
        # as no password at all.
        if not result:
            raise AnsibleError(
                "crypt.crypt does not support '%s' algorithm" % self.algorithm,
                orig_exc=orig_exc,
            )

        return result


class PasslibHash(BaseHash):
    def __init__(self, algorithm):
        super(PasslibHash, self).__init__(algorithm)

        if not PASSLIB_AVAILABLE:
            raise AnsibleError("passlib must be installed and usable to hash with '%s'" % algorithm, orig_exc=PASSLIB_E)

        try:
            self.crypt_algo = getattr(passlib.hash, algorithm)
        except Exception:
            raise AnsibleError("passlib does not support '%s' algorithm" % algorithm)

    def hash(self, secret, salt=None, salt_size=None, rounds=None):
        salt = self._clean_salt(salt)
        rounds = self._clean_rounds(rounds)
        return self._hash(secret, salt=salt, salt_size=salt_size, rounds=rounds)

    def _clean_salt(self, salt):
        if not salt:
            return None
        elif issubclass(self.crypt_algo, HasRawSalt):
            ret = to_bytes(salt, encoding='ascii', errors='strict')
        else:
            ret = to_text(salt, encoding='ascii', errors='strict')

        # Ensure the salt has the correct padding
        if self.algorithm == 'bcrypt':
            ret = bcrypt64.repair_unused(ret)

        return ret

    def _clean_rounds(self, rounds):
        algo_data = self.algorithms.get(self.algorithm)
        if rounds:
            return rounds
        elif algo_data and algo_data.implicit_rounds:
            # The default rounds used by passlib depend on the passlib version.
            # For consistency ensure that passlib behaves the same as crypt in case no rounds were specified.
            # Thus use the crypt defaults.
            return algo_data.implicit_rounds
        else:
            return None

    def _hash(self, secret, salt, salt_size, rounds):
        # Not every hash algorithm supports every parameter.
        # Thus create the settings dict only with set parameters.
        settings = {}
        if salt:
            settings['salt'] = salt
        if salt_size:
            settings['salt_size'] = salt_size
        if rounds:
            settings['rounds'] = rounds

        # starting with passlib 1.7 'using' and 'hash' should be used instead of 'encrypt'
        if hasattr(self.crypt_algo, 'hash'):
            result = self.crypt_algo.using(**settings).hash(secret)
        elif hasattr(self.crypt_algo, 'encrypt'):
            result = self.crypt_algo.encrypt(secret, **settings)
        else:
            raise AnsibleError("installed passlib version %s not supported" % passlib.__version__)

        # passlib.hash should always return something or raise an exception.
        # Still ensure that there is always a result.
        # Otherwise an empty password might be assumed by some modules, like the user module.
        if not result:
            raise AnsibleError("failed to hash with algorithm '%s'" % self.algorithm)

        # Hashes from passlib.hash should be represented as ascii strings of hex
        # digits so this should not traceback.  If it's not representable as such
        # we need to traceback and then blacklist such algorithms because it may
        # impact calling code.
        return to_text(result, errors='strict')


def passlib_or_crypt(secret, algorithm, salt=None, salt_size=None, rounds=None):
    if PASSLIB_AVAILABLE:
        return PasslibHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds)
    elif HAS_CRYPT:
        return CryptHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds)
    else:
        raise AnsibleError("Unable to encrypt nor hash, either crypt or passlib must be installed.", orig_exc=CRYPT_E)


def do_encrypt(result, encrypt, salt_size=None, salt=None):
    return passlib_or_crypt(result, encrypt, salt_size=salt_size, salt=salt)
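Given the settings-dict pattern in `PasslibHash._hash` above, the `ident` pass-through this issue asks for could plausibly ride the same path that `rounds` already does. A hedged, self-contained sketch of the idea, not the merged patch from the linked PR:

```python
# Hedged sketch, not the merged patch: an 'ident' option threaded through
# the same settings-dict pattern PasslibHash._hash uses above.
import passlib.hash

def passlib_hash_with_ident(secret, algorithm, rounds=None, ident=None):
    # Only pass parameters that were actually set; not every passlib hash
    # supports every parameter (ident is bcrypt-specific, e.g. '2a'/'2b').
    settings = {}
    if rounds:
        settings['rounds'] = rounds
    if ident:
        settings['ident'] = ident
    return getattr(passlib.hash, algorithm).using(**settings).hash(secret)

print(passlib_hash_with_ident("s3cret", "bcrypt", rounds=12, ident="2a"))
```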
closed
ansible/ansible
https://github.com/ansible/ansible
74,571
Support for choosing bcrypt version/ident with password_hash filter
### Summary

While setting up sonarqube and automating setting the admin password directly in the database, I noticed that their bcrypt implementation is outdated and only supports the 2a version, while the password_hash bcrypt filter using passlib defaults to the latest version (2b). I'd like to set the ident parameter of passlib bcrypt function from the password_hash filter (like with rounds). Right now I'm doing this:

```yaml
- name: encrypt password
  command:
    cmd: python3 -
    stdin: |
      from passlib.hash import bcrypt
      print(bcrypt.using(rounds=12,ident="2a").hash("{{ admin_password }}"))
  register: tmp
```

as a workaround. Obviously, sonarqube should upgrade their bcrypt version, but ansible should also be able to handle this.

Passlib doc: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#

### Issue Type

Feature Idea

### Component Name

password_hash

### Additional Information

```yaml
password: "{{ admin_password | password_hash('bcrypt', rounds=12, ident='2a') }}"
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74571
https://github.com/ansible/ansible/pull/74595
20ef733ee02ba688757998404c1926381356b031
1bd7dcf339dd8b6c50bc16670be2448a206f4fdb
2021-05-05T10:40:54Z
python
2021-05-24T15:46:37Z
test/units/plugins/lookup/test_password.py
# -*- coding: utf-8 -*- # (c) 2015, Toshio Kuratomi <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type try: import passlib from passlib.handlers import pbkdf2 except ImportError: passlib = None pbkdf2 = None import pytest from units.mock.loader import DictDataLoader from units.compat import unittest from units.compat.mock import mock_open, patch from ansible.errors import AnsibleError from ansible.module_utils.six import text_type from ansible.module_utils.six.moves import builtins from ansible.module_utils._text import to_bytes from ansible.plugins.loader import PluginLoader from ansible.plugins.lookup import password DEFAULT_CHARS = sorted([u'ascii_letters', u'digits', u".,:-_"]) DEFAULT_CANDIDATE_CHARS = u'.,:-_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789' # Currently there isn't a new-style old_style_params_data = ( # Simple case dict( term=u'/path/to/file', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), # Special characters in path dict( term=u'/path/with/embedded spaces and/file', filename=u'/path/with/embedded spaces and/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), dict( term=u'/path/with/equals/cn=com.ansible', filename=u'/path/with/equals/cn=com.ansible', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), dict( term=u'/path/with/unicode/くらとみ/file', filename=u'/path/with/unicode/くらとみ/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), # Mix several special chars dict( term=u'/path/with/utf 8 and spaces/くらとみ/file', filename=u'/path/with/utf 8 and spaces/くらとみ/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), dict( term=u'/path/with/encoding=unicode/くらとみ/file', filename=u'/path/with/encoding=unicode/くらとみ/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), dict( term=u'/path/with/encoding=unicode/くらとみ/and spaces file', filename=u'/path/with/encoding=unicode/くらとみ/and spaces file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), # Simple parameters dict( term=u'/path/to/file length=42', filename=u'/path/to/file', params=dict(length=42, encrypt=None, chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), dict( term=u'/path/to/file encrypt=pbkdf2_sha256', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt='pbkdf2_sha256', 
chars=DEFAULT_CHARS), candidate_chars=DEFAULT_CANDIDATE_CHARS, ), dict( term=u'/path/to/file chars=abcdefghijklmnop', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=[u'abcdefghijklmnop']), candidate_chars=u'abcdefghijklmnop', ), dict( term=u'/path/to/file chars=digits,abc,def', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=sorted([u'digits', u'abc', u'def'])), candidate_chars=u'abcdef0123456789', ), # Including comma in chars dict( term=u'/path/to/file chars=abcdefghijklmnop,,digits', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=sorted([u'abcdefghijklmnop', u',', u'digits'])), candidate_chars=u',abcdefghijklmnop0123456789', ), dict( term=u'/path/to/file chars=,,', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=[u',']), candidate_chars=u',', ), # Including = in chars dict( term=u'/path/to/file chars=digits,=,,', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=sorted([u'digits', u'=', u','])), candidate_chars=u',=0123456789', ), dict( term=u'/path/to/file chars=digits,abc=def', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=sorted([u'digits', u'abc=def'])), candidate_chars=u'abc=def0123456789', ), # Including unicode in chars dict( term=u'/path/to/file chars=digits,くらとみ,,', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=sorted([u'digits', u'くらとみ', u','])), candidate_chars=u',0123456789くらとみ', ), # Including only unicode in chars dict( term=u'/path/to/file chars=くらとみ', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=sorted([u'くらとみ'])), candidate_chars=u'くらとみ', ), # Include ':' in path dict( term=u'/path/to/file_with:colon chars=ascii_letters,digits', filename=u'/path/to/file_with:colon', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=sorted([u'ascii_letters', u'digits'])), candidate_chars=u'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', ), # Including special chars in both path and chars # Special characters in path dict( term=u'/path/with/embedded spaces and/file chars=abc=def', filename=u'/path/with/embedded spaces and/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=[u'abc=def']), candidate_chars=u'abc=def', ), dict( term=u'/path/with/equals/cn=com.ansible chars=abc=def', filename=u'/path/with/equals/cn=com.ansible', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=[u'abc=def']), candidate_chars=u'abc=def', ), dict( term=u'/path/with/unicode/くらとみ/file chars=くらとみ', filename=u'/path/with/unicode/くらとみ/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=[u'くらとみ']), candidate_chars=u'くらとみ', ), ) class TestParseParameters(unittest.TestCase): def test(self): for testcase in old_style_params_data: filename, params = password._parse_parameters(testcase['term']) params['chars'].sort() self.assertEqual(filename, testcase['filename']) self.assertEqual(params, testcase['params']) def test_unrecognized_value(self): testcase = dict(term=u'/path/to/file chars=くらとみi sdfsdf', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=[u'くらとみ']), candidate_chars=u'くらとみ') self.assertRaises(AnsibleError, password._parse_parameters, testcase['term']) def test_invalid_params(self): testcase = dict(term=u'/path/to/file chars=くらとみi 
somethign_invalid=123', filename=u'/path/to/file', params=dict(length=password.DEFAULT_LENGTH, encrypt=None, chars=[u'くらとみ']), candidate_chars=u'くらとみ') self.assertRaises(AnsibleError, password._parse_parameters, testcase['term']) class TestReadPasswordFile(unittest.TestCase): def setUp(self): self.os_path_exists = password.os.path.exists def tearDown(self): password.os.path.exists = self.os_path_exists def test_no_password_file(self): password.os.path.exists = lambda x: False self.assertEqual(password._read_password_file(b'/nonexistent'), None) def test_with_password_file(self): password.os.path.exists = lambda x: True with patch.object(builtins, 'open', mock_open(read_data=b'Testing\n')) as m: self.assertEqual(password._read_password_file(b'/etc/motd'), u'Testing') class TestGenCandidateChars(unittest.TestCase): def _assert_gen_candidate_chars(self, testcase): expected_candidate_chars = testcase['candidate_chars'] params = testcase['params'] chars_spec = params['chars'] res = password._gen_candidate_chars(chars_spec) self.assertEqual(res, expected_candidate_chars) def test_gen_candidate_chars(self): for testcase in old_style_params_data: self._assert_gen_candidate_chars(testcase) class TestRandomPassword(unittest.TestCase): def _assert_valid_chars(self, res, chars): for res_char in res: self.assertIn(res_char, chars) def test_default(self): res = password.random_password() self.assertEqual(len(res), password.DEFAULT_LENGTH) self.assertTrue(isinstance(res, text_type)) self._assert_valid_chars(res, DEFAULT_CANDIDATE_CHARS) def test_zero_length(self): res = password.random_password(length=0) self.assertEqual(len(res), 0) self.assertTrue(isinstance(res, text_type)) self._assert_valid_chars(res, u',') def test_just_a_common(self): res = password.random_password(length=1, chars=u',') self.assertEqual(len(res), 1) self.assertEqual(res, u',') def test_free_will(self): # A Rush and Spinal Tap reference twofer res = password.random_password(length=11, chars=u'a') self.assertEqual(len(res), 11) self.assertEqual(res, 'aaaaaaaaaaa') self._assert_valid_chars(res, u'a') def test_unicode(self): res = password.random_password(length=11, chars=u'くらとみ') self._assert_valid_chars(res, u'くらとみ') self.assertEqual(len(res), 11) def test_gen_password(self): for testcase in old_style_params_data: params = testcase['params'] candidate_chars = testcase['candidate_chars'] params_chars_spec = password._gen_candidate_chars(params['chars']) password_string = password.random_password(length=params['length'], chars=params_chars_spec) self.assertEqual(len(password_string), params['length'], msg='generated password=%s has length (%s) instead of expected length (%s)' % (password_string, len(password_string), params['length'])) for char in password_string: self.assertIn(char, candidate_chars, msg='%s not found in %s from chars spect %s' % (char, candidate_chars, params['chars'])) class TestParseContent(unittest.TestCase): def test_empty_password_file(self): plaintext_password, salt = password._parse_content(u'') self.assertEqual(plaintext_password, u'') self.assertEqual(salt, None) def test(self): expected_content = u'12345678' file_content = expected_content plaintext_password, salt = password._parse_content(file_content) self.assertEqual(plaintext_password, expected_content) self.assertEqual(salt, None) def test_with_salt(self): expected_content = u'12345678 salt=87654321' file_content = expected_content plaintext_password, salt = password._parse_content(file_content) self.assertEqual(plaintext_password, u'12345678') 
self.assertEqual(salt, u'87654321') class TestFormatContent(unittest.TestCase): def test_no_encrypt(self): self.assertEqual( password._format_content(password=u'hunter42', salt=u'87654321', encrypt=False), u'hunter42 salt=87654321') def test_no_encrypt_no_salt(self): self.assertEqual( password._format_content(password=u'hunter42', salt=None, encrypt=None), u'hunter42') def test_encrypt(self): self.assertEqual( password._format_content(password=u'hunter42', salt=u'87654321', encrypt='pbkdf2_sha256'), u'hunter42 salt=87654321') def test_encrypt_no_salt(self): self.assertRaises(AssertionError, password._format_content, u'hunter42', None, 'pbkdf2_sha256') class TestWritePasswordFile(unittest.TestCase): def setUp(self): self.makedirs_safe = password.makedirs_safe self.os_chmod = password.os.chmod password.makedirs_safe = lambda path, mode: None password.os.chmod = lambda path, mode: None def tearDown(self): password.makedirs_safe = self.makedirs_safe password.os.chmod = self.os_chmod def test_content_written(self): with patch.object(builtins, 'open', mock_open()) as m: password._write_password_file(b'/this/is/a/test/caf\xc3\xa9', u'Testing Café') m.assert_called_once_with(b'/this/is/a/test/caf\xc3\xa9', 'wb') m().write.assert_called_once_with(u'Testing Café\n'.encode('utf-8')) class BaseTestLookupModule(unittest.TestCase): def setUp(self): self.fake_loader = DictDataLoader({'/path/to/somewhere': 'sdfsdf'}) self.password_lookup = password.LookupModule(loader=self.fake_loader) self.os_path_exists = password.os.path.exists self.os_open = password.os.open password.os.open = lambda path, flag: None self.os_close = password.os.close password.os.close = lambda fd: None self.os_remove = password.os.remove password.os.remove = lambda path: None self.makedirs_safe = password.makedirs_safe password.makedirs_safe = lambda path, mode: None def tearDown(self): password.os.path.exists = self.os_path_exists password.os.open = self.os_open password.os.close = self.os_close password.os.remove = self.os_remove password.makedirs_safe = self.makedirs_safe class TestLookupModuleWithoutPasslib(BaseTestLookupModule): @patch.object(PluginLoader, '_get_paths') @patch('ansible.plugins.lookup.password._write_password_file') def test_no_encrypt(self, mock_get_paths, mock_write_file): mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three'] results = self.password_lookup.run([u'/path/to/somewhere'], None) # FIXME: assert something useful for result in results: assert len(result) == password.DEFAULT_LENGTH assert isinstance(result, text_type) @patch.object(PluginLoader, '_get_paths') @patch('ansible.plugins.lookup.password._write_password_file') def test_password_already_created_no_encrypt(self, mock_get_paths, mock_write_file): mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three'] password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere') with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m: results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None) for result in results: self.assertEqual(result, u'hunter42') @patch.object(PluginLoader, '_get_paths') @patch('ansible.plugins.lookup.password._write_password_file') def test_only_a(self, mock_get_paths, mock_write_file): mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three'] results = self.password_lookup.run([u'/path/to/somewhere chars=a'], None) for result in results: self.assertEqual(result, u'a' * password.DEFAULT_LENGTH) @patch('time.sleep') def 
test_lock_been_held(self, mock_sleep): # pretend the lock file is here password.os.path.exists = lambda x: True try: with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m: # should timeout here results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None) self.fail("Lookup didn't timeout when lock already been held") except AnsibleError: pass def test_lock_not_been_held(self): # pretend now there is password file but no lock password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere') try: with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m: # should not timeout here results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None) except AnsibleError: self.fail('Lookup timeouts when lock is free') for result in results: self.assertEqual(result, u'hunter42') @pytest.mark.skipif(passlib is None, reason='passlib must be installed to run these tests') class TestLookupModuleWithPasslib(BaseTestLookupModule): def setUp(self): super(TestLookupModuleWithPasslib, self).setUp() # Different releases of passlib default to a different number of rounds self.sha256 = passlib.registry.get_crypt_handler('pbkdf2_sha256') sha256_for_tests = pbkdf2.create_pbkdf2_hash("sha256", 32, 20000) passlib.registry.register_crypt_handler(sha256_for_tests, force=True) def tearDown(self): super(TestLookupModuleWithPasslib, self).tearDown() passlib.registry.register_crypt_handler(self.sha256, force=True) @patch.object(PluginLoader, '_get_paths') @patch('ansible.plugins.lookup.password._write_password_file') def test_encrypt(self, mock_get_paths, mock_write_file): mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three'] results = self.password_lookup.run([u'/path/to/somewhere encrypt=pbkdf2_sha256'], None) # pbkdf2 format plus hash expected_password_length = 76 for result in results: self.assertEqual(len(result), expected_password_length) # result should have 5 parts split by '$' str_parts = result.split('$', 5) # verify the result is parseable by the passlib crypt_parts = passlib.hash.pbkdf2_sha256.parsehash(result) # verify it used the right algo type self.assertEqual(str_parts[1], 'pbkdf2-sha256') self.assertEqual(len(str_parts), 5) # verify the string and parsehash agree on the number of rounds self.assertEqual(int(str_parts[2]), crypt_parts['rounds']) self.assertIsInstance(result, text_type) @patch.object(PluginLoader, '_get_paths') @patch('ansible.plugins.lookup.password._write_password_file') def test_password_already_created_encrypt(self, mock_get_paths, mock_write_file): mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three'] password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere') with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m: results = self.password_lookup.run([u'/path/to/somewhere chars=anything encrypt=pbkdf2_sha256'], None) for result in results: self.assertEqual(result, u'$pbkdf2-sha256$20000$ODc2NTQzMjE$Uikde0cv0BKaRaAXMrUQB.zvG4GmnjClwjghwIRf2gU')
closed
ansible/ansible
https://github.com/ansible/ansible
74,571
Support for choosing bcrypt version/ident with password_hash filter
### Summary

While setting up sonarqube and automating setting the admin password directly in the database, I noticed that their bcrypt implementation is outdated and only supports the 2a version, while the password_hash bcrypt filter using passlib defaults to the latest version (2b). I'd like to set the ident parameter of passlib bcrypt function from the password_hash filter (like with rounds). Right now I'm doing this:

```yaml
- name: encrypt password
  command:
    cmd: python3 -
    stdin: |
      from passlib.hash import bcrypt
      print(bcrypt.using(rounds=12,ident="2a").hash("{{ admin_password }}"))
  register: tmp
```

as a workaround. Obviously, sonarqube should upgrade their bcrypt version, but ansible should also be able to handle this.

Passlib doc: https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#

### Issue Type

Feature Idea

### Component Name

password_hash

### Additional Information

```yaml
password: "{{ admin_password | password_hash('bcrypt', rounds=12, ident='2a') }}"
```

### Code of Conduct

- [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74571
https://github.com/ansible/ansible/pull/74595
20ef733ee02ba688757998404c1926381356b031
1bd7dcf339dd8b6c50bc16670be2448a206f4fdb
2021-05-05T10:40:54Z
python
2021-05-24T15:46:37Z
test/units/utils/test_encrypt.py
# (c) 2018, Matthias Fuchs <[email protected]> # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. from __future__ import (absolute_import, division, print_function) __metaclass__ = type import sys import pytest from ansible.errors import AnsibleError, AnsibleFilterError from ansible.plugins.filter.core import get_encrypted_password from ansible.utils import encrypt class passlib_off(object): def __init__(self): self.orig = encrypt.PASSLIB_AVAILABLE def __enter__(self): encrypt.PASSLIB_AVAILABLE = False return self def __exit__(self, exception_type, exception_value, traceback): encrypt.PASSLIB_AVAILABLE = self.orig def assert_hash(expected, secret, algorithm, **settings): if encrypt.PASSLIB_AVAILABLE: assert encrypt.passlib_or_crypt(secret, algorithm, **settings) == expected assert encrypt.PasslibHash(algorithm).hash(secret, **settings) == expected else: assert encrypt.passlib_or_crypt(secret, algorithm, **settings) == expected with pytest.raises(AnsibleError) as excinfo: encrypt.PasslibHash(algorithm).hash(secret, **settings) assert excinfo.value.args[0] == "passlib must be installed and usable to hash with '%s'" % algorithm @pytest.mark.skipif(sys.platform.startswith('darwin'), reason='macOS requires passlib') def test_encrypt_with_rounds_no_passlib(): with passlib_off(): assert_hash("$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7", secret="123", algorithm="sha256_crypt", salt="12345678", rounds=5000) assert_hash("$5$rounds=10000$12345678$JBinliYMFEcBeAXKZnLjenhgEhTmJBvZn3aR8l70Oy/", secret="123", algorithm="sha256_crypt", salt="12345678", rounds=10000) assert_hash("$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.", secret="123", algorithm="sha512_crypt", salt="12345678", rounds=5000) # If passlib is not installed. 
this is identical to the test_encrypt_with_rounds_no_passlib() test @pytest.mark.skipif(not encrypt.PASSLIB_AVAILABLE, reason='passlib must be installed to run this test') def test_encrypt_with_rounds(): assert_hash("$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7", secret="123", algorithm="sha256_crypt", salt="12345678", rounds=5000) assert_hash("$5$rounds=10000$12345678$JBinliYMFEcBeAXKZnLjenhgEhTmJBvZn3aR8l70Oy/", secret="123", algorithm="sha256_crypt", salt="12345678", rounds=10000) assert_hash("$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.", secret="123", algorithm="sha512_crypt", salt="12345678", rounds=5000) @pytest.mark.skipif(sys.platform.startswith('darwin'), reason='macOS requires passlib') def test_encrypt_default_rounds_no_passlib(): with passlib_off(): assert_hash("$1$12345678$tRy4cXc3kmcfRZVj4iFXr/", secret="123", algorithm="md5_crypt", salt="12345678") assert_hash("$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7", secret="123", algorithm="sha256_crypt", salt="12345678") assert_hash("$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.", secret="123", algorithm="sha512_crypt", salt="12345678") assert encrypt.CryptHash("md5_crypt").hash("123") # If passlib is not installed. this is identical to the test_encrypt_default_rounds_no_passlib() test @pytest.mark.skipif(not encrypt.PASSLIB_AVAILABLE, reason='passlib must be installed to run this test') def test_encrypt_default_rounds(): assert_hash("$1$12345678$tRy4cXc3kmcfRZVj4iFXr/", secret="123", algorithm="md5_crypt", salt="12345678") assert_hash("$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7", secret="123", algorithm="sha256_crypt", salt="12345678") assert_hash("$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.", secret="123", algorithm="sha512_crypt", salt="12345678") assert encrypt.PasslibHash("md5_crypt").hash("123") @pytest.mark.skipif(sys.platform.startswith('darwin'), reason='macOS requires passlib') def test_password_hash_filter_no_passlib(): with passlib_off(): assert not encrypt.PASSLIB_AVAILABLE assert get_encrypted_password("123", "md5", salt="12345678") == "$1$12345678$tRy4cXc3kmcfRZVj4iFXr/" with pytest.raises(AnsibleFilterError): get_encrypted_password("123", "crypt16", salt="12") def test_password_hash_filter_passlib(): if not encrypt.PASSLIB_AVAILABLE: pytest.skip("passlib not available") with pytest.raises(AnsibleFilterError): get_encrypted_password("123", "sha257", salt="12345678") # Uses 5000 rounds by default for sha256 matching crypt behaviour assert get_encrypted_password("123", "sha256", salt="12345678") == "$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7" assert get_encrypted_password("123", "sha256", salt="12345678", rounds=5000) == "$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7" assert (get_encrypted_password("123", "sha256", salt="12345678", rounds=10000) == "$5$rounds=10000$12345678$JBinliYMFEcBeAXKZnLjenhgEhTmJBvZn3aR8l70Oy/") assert (get_encrypted_password("123", "sha512", salt="12345678", rounds=6000) == "$6$rounds=6000$12345678$l/fC67BdJwZrJ7qneKGP1b6PcatfBr0dI7W6JLBrsv8P1wnv/0pu4WJsWq5p6WiXgZ2gt9Aoir3MeORJxg4.Z/") assert (get_encrypted_password("123", "sha512", salt="12345678", rounds=5000) == "$6$12345678$LcV9LQiaPekQxZ.OfkMADjFdSO2k9zfbDQrHPVcYjSLqSdjLYpsgqviYvTEP/R41yPmhH3CCeEDqVhW1VHr3L.") assert get_encrypted_password("123", "crypt16", salt="12") == "12pELHK2ME3McUFlHxel6uMM" # Try algorithm that uses 
a raw salt assert get_encrypted_password("123", "pbkdf2_sha256") @pytest.mark.skipif(sys.platform.startswith('darwin'), reason='macOS requires passlib') def test_do_encrypt_no_passlib(): with passlib_off(): assert not encrypt.PASSLIB_AVAILABLE assert encrypt.do_encrypt("123", "md5_crypt", salt="12345678") == "$1$12345678$tRy4cXc3kmcfRZVj4iFXr/" with pytest.raises(AnsibleError): encrypt.do_encrypt("123", "crypt16", salt="12") def test_do_encrypt_passlib(): if not encrypt.PASSLIB_AVAILABLE: pytest.skip("passlib not available") with pytest.raises(AnsibleError): encrypt.do_encrypt("123", "sha257_crypt", salt="12345678") # Uses 5000 rounds by default for sha256 matching crypt behaviour. assert encrypt.do_encrypt("123", "sha256_crypt", salt="12345678") == "$5$12345678$uAZsE3BenI2G.nA8DpTl.9Dc8JiqacI53pEqRr5ppT7" assert encrypt.do_encrypt("123", "md5_crypt", salt="12345678") == "$1$12345678$tRy4cXc3kmcfRZVj4iFXr/" assert encrypt.do_encrypt("123", "crypt16", salt="12") == "12pELHK2ME3McUFlHxel6uMM" def test_random_salt(): res = encrypt.random_salt() expected_salt_candidate_chars = u'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789./' assert len(res) == 8 for res_char in res: assert res_char in expected_salt_candidate_chars def test_invalid_crypt_salt(): pytest.raises( AnsibleError, encrypt.CryptHash('bcrypt')._salt, '_', None ) encrypt.CryptHash('bcrypt')._salt('1234567890123456789012', None) pytest.raises( AnsibleError, encrypt.CryptHash('bcrypt')._salt, 'kljsdf', None ) encrypt.CryptHash('sha256_crypt')._salt('123456', None) pytest.raises( AnsibleError, encrypt.CryptHash('sha256_crypt')._salt, '1234567890123456789012', None ) def test_passlib_bcrypt_salt(recwarn): passlib_exc = pytest.importorskip("passlib.exc") secret = 'foo' salt = '1234567890123456789012' repaired_salt = '123456789012345678901u' expected = '$2b$12$123456789012345678901uMv44x.2qmQeefEGb3bcIRc1mLuO7bqa' p = encrypt.PasslibHash('bcrypt') result = p.hash(secret, salt=salt) passlib_warnings = [w.message for w in recwarn if isinstance(w.message, passlib_exc.PasslibHashWarning)] assert len(passlib_warnings) == 0 assert result == expected recwarn.clear() result = p.hash(secret, salt=repaired_salt) assert result == expected
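If the `ident` pass-through sketched earlier were added, a regression test in the style of `test_passlib_bcrypt_salt` above might look like the following. Hedged: it assumes `PasslibHash.hash` grew an `ident` keyword, which the code shown in this record does not yet accept, and it relies on passlib's default bcrypt cost of 12:

```python
# Hedged sketch of a companion test, modeled on test_passlib_bcrypt_salt.
# Assumes an 'ident' kwarg on PasslibHash.hash (not present in the code above).
def test_passlib_bcrypt_ident():
    pytest.importorskip("passlib")
    salt = '1234567890123456789012'
    result = encrypt.PasslibHash('bcrypt').hash('foo', salt=salt, ident='2a')
    # passlib repairs the salt's unused trailing bits, so only check the prefix
    assert result.startswith('$2a$12$')
```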
closed
ansible/ansible
https://github.com/ansible/ansible
73,503
ansible.builtin.dnf: installing by filename errors with "No group ... available"
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
When asked to install `name: /usr/bin/cowsay`, the DNF plugin errors with `No group usr/bin/cowsay available.`. The error message is wrong (there are no groups in sight). It would be quite useful if Ansible installed a package with `/usr/bin/cowsay`.

##### ISSUE TYPE
- ~Bug Report~
- Feature Idea

##### COMPONENT NAME
ansible.builtin.dnf

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.17
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/pviktori/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.9.1 (default, Jan 20 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
(it's empty)

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 33 (Workstation Edition)

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
playbook.yml:
```yaml
- hosts: localhost
  become_user: root
  tasks:
  - name: Install cowsay
    become: yes
    dnf:
      state: latest
      name: /usr/bin/cowsay
```
Without having `cowsay` installed, run `ansible-playbook playbook.yml -K`

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
At least the error message is wrong; it shouldn't mention groups. But from [a comment in the source](https://github.com/ansible/ansible/blob/44ee04bd1f7d683fce246c16e752ace04d244b4c/lib/ansible/modules/dnf.py#L829), it looks like the package with `/usr/bin/cowsay` should be installed. (I don't care what the exact package name is.)

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```
...
TASK [Install cowsay] ************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "No group usr/bin/cowsay available.", "results": []}
...
``` <!--- Paste verbatim command output between quotes --> <details> <summary>Full output with extra verbosity</summary> ```paste below $ ansible-playbook playbook.yml -K -vvvv ansible-playbook 2.9.17 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/pviktori/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible executable location = /usr/bin/ansible-playbook python version = 3.9.1 (default, Jan 20 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] Using /etc/ansible/ansible.cfg as config file BECOME password: setting up inventory plugins host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Parsed /etc/ansible/hosts inventory source with ini plugin [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.9/site-packages/ansible/plugins/callback/default.py Skipping callback 'actionable', as we already have a stdout callback. Skipping callback 'counter_enabled', as we already have a stdout callback. Skipping callback 'debug', as we already have a stdout callback. Skipping callback 'dense', as we already have a stdout callback. Skipping callback 'dense', as we already have a stdout callback. Skipping callback 'full_skip', as we already have a stdout callback. Skipping callback 'json', as we already have a stdout callback. Skipping callback 'minimal', as we already have a stdout callback. Skipping callback 'null', as we already have a stdout callback. Skipping callback 'oneline', as we already have a stdout callback. Skipping callback 'selective', as we already have a stdout callback. Skipping callback 'skippy', as we already have a stdout callback. Skipping callback 'stderr', as we already have a stdout callback. Skipping callback 'unixy', as we already have a stdout callback. Skipping callback 'yaml', as we already have a stdout callback. 
PLAYBOOK: playbook.yml *********************************************************************************** Positional arguments: playbook.yml verbosity: 4 connection: smart timeout: 10 become_method: sudo become_ask_pass: True tags: ('all',) inventory: ('/etc/ansible/hosts',) forks: 5 1 plays in playbook.yml PLAY [localhost] ***************************************************************************************** TASK [Gathering Facts] *********************************************************************************** task path: /home/pviktori/dev/one-offs/reproducer/ansible/playbook.yml:1 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: pviktori <127.0.0.1> EXEC /bin/sh -c 'echo ~pviktori && sleep 0' <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/pviktori/.ansible/tmp `"&& mkdir "` echo /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968 `" && echo ansible-tmp-1612533793.5579023-739319-43501536054968="` echo /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968 `" ) && sleep 0' Using module file /usr/lib/python3.9/site-packages/ansible/modules/system/setup.py <127.0.0.1> PUT /home/pviktori/.ansible/tmp/ansible-local-739308mmzkf2pf/tmpeuskv_a5 TO /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/AnsiballZ_setup.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/ /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/AnsiballZ_setup.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/AnsiballZ_setup.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/ > /dev/null 2>&1 && sleep 0' ok: [localhost] META: ran handlers TASK [Install cowsay] ************************************************************************************ task path: /home/pviktori/dev/one-offs/reproducer/ansible/playbook.yml:4 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: pviktori <127.0.0.1> EXEC /bin/sh -c 'echo ~pviktori && sleep 0' <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/pviktori/.ansible/tmp `"&& mkdir "` echo /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201 `" && echo ansible-tmp-1612533794.5429337-739396-272813355272201="` echo /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201 `" ) && sleep 0' Using module file /usr/lib/python3.9/site-packages/ansible/modules/packaging/os/dnf.py <127.0.0.1> PUT /home/pviktori/.ansible/tmp/ansible-local-739308mmzkf2pf/tmphh11llf1 TO /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/AnsiballZ_dnf.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/ /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/AnsiballZ_dnf.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=jthmjasmjwuvaloyjtzuddwvrhdiohqn] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-jthmjasmjwuvaloyjtzuddwvrhdiohqn ; /usr/bin/python3 /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/AnsiballZ_dnf.py'"'"' && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/ > /dev/null 2>&1 && sleep 0' 
fatal: [localhost]: FAILED! => { "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "autoremove": false, "bugfix": false, "conf_file": null, "disable_excludes": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_dir": null, "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "install_repoquery": true, "install_weak_deps": true, "installroot": "/", "list": null, "lock_timeout": 30, "name": [ "/usr/bin/cowsay" ], "releasever": null, "security": false, "skip_broken": false, "state": "latest", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "No group usr/bin/cowsay available.", "results": [] } PLAY RECAP *********************************************************************************************** localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` </details>
https://github.com/ansible/ansible/issues/73503
https://github.com/ansible/ansible/pull/74764
12f4b0db041869e2d96b07b3d6b99ac84934a96a
52430d42285735d6cdc45d7abed6bc99b2391dd5
2021-02-05T14:12:59Z
python
2021-05-24T17:02:28Z
changelogs/fragments/73503_dnf_whatprovides.yml
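A note on the fix referenced by the changelog fragment above: dnf resolves a file path to its owning package via a "provides" lookup, and the same lookup can be done by hand with the dnf CLI. The package named in the comment is illustrative; exact output varies by distribution and release:

```console
$ dnf provides /usr/bin/cowsay
# Prints the package(s) whose file list contains /usr/bin/cowsay
# (the "cowsay" package on Fedora). This is the same resolution the
# fixed module performs internally before installing.
```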
closed
ansible/ansible
https://github.com/ansible/ansible
73,503
ansible.builtin.dnf: installing by filename errors with "No group ... available"
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY When asked to install `name: /usr/bin/cowsay`, the DNF plugin errors with `No group usr/bin/cowsay available.`. The error message is wrong (there are no groups in sight). It would be quite useful if Ansible installed a package with `/usr/bin/cowsay`. ##### ISSUE TYPE - ~Bug Report~ - Feature Idea ##### COMPONENT NAME ansible.builtin.dnf ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.17 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/pviktori/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible executable location = /usr/bin/ansible python version = 3.9.1 (default, Jan 20 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` (it's empty) ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Fedora 33 (Workstation Edition) ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> playbook.yml: ```yaml - hosts: localhost become_user: root tasks: - name: Install cowsay become: yes dnf: state: latest name: /usr/bin/cowsay ``` Without having `cowsay` installed, run `ansible-playbook playbook.yml -K` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS At least the error message is wrong; it shouldn't mention groups. But from [a comment in the source](https://github.com/ansible/ansible/blob/44ee04bd1f7d683fce246c16e752ace04d244b4c/lib/ansible/modules/dnf.py#L829), it looks like the package with `/usr/bin/cowsay` should be installed. (I don't care what the exact package name is.) ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> ``` ... TASK [Install cowsay] ************************************************************************************ fatal: [localhost]: FAILED! => {"changed": false, "msg": "No group usr/bin/cowsay available.", "results": []} ... 
``` <!--- Paste verbatim command output between quotes --> <details> <summary>Full output with extra verbosity</summary> ```paste below $ ansible-playbook playbook.yml -K -vvvv ansible-playbook 2.9.17 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/pviktori/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible executable location = /usr/bin/ansible-playbook python version = 3.9.1 (default, Jan 20 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] Using /etc/ansible/ansible.cfg as config file BECOME password: setting up inventory plugins host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method Parsed /etc/ansible/hosts inventory source with ini plugin [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.9/site-packages/ansible/plugins/callback/default.py Skipping callback 'actionable', as we already have a stdout callback. Skipping callback 'counter_enabled', as we already have a stdout callback. Skipping callback 'debug', as we already have a stdout callback. Skipping callback 'dense', as we already have a stdout callback. Skipping callback 'dense', as we already have a stdout callback. Skipping callback 'full_skip', as we already have a stdout callback. Skipping callback 'json', as we already have a stdout callback. Skipping callback 'minimal', as we already have a stdout callback. Skipping callback 'null', as we already have a stdout callback. Skipping callback 'oneline', as we already have a stdout callback. Skipping callback 'selective', as we already have a stdout callback. Skipping callback 'skippy', as we already have a stdout callback. Skipping callback 'stderr', as we already have a stdout callback. Skipping callback 'unixy', as we already have a stdout callback. Skipping callback 'yaml', as we already have a stdout callback. 
PLAYBOOK: playbook.yml *********************************************************************************** Positional arguments: playbook.yml verbosity: 4 connection: smart timeout: 10 become_method: sudo become_ask_pass: True tags: ('all',) inventory: ('/etc/ansible/hosts',) forks: 5 1 plays in playbook.yml PLAY [localhost] ***************************************************************************************** TASK [Gathering Facts] *********************************************************************************** task path: /home/pviktori/dev/one-offs/reproducer/ansible/playbook.yml:1 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: pviktori <127.0.0.1> EXEC /bin/sh -c 'echo ~pviktori && sleep 0' <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/pviktori/.ansible/tmp `"&& mkdir "` echo /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968 `" && echo ansible-tmp-1612533793.5579023-739319-43501536054968="` echo /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968 `" ) && sleep 0' Using module file /usr/lib/python3.9/site-packages/ansible/modules/system/setup.py <127.0.0.1> PUT /home/pviktori/.ansible/tmp/ansible-local-739308mmzkf2pf/tmpeuskv_a5 TO /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/AnsiballZ_setup.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/ /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/AnsiballZ_setup.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/AnsiballZ_setup.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/pviktori/.ansible/tmp/ansible-tmp-1612533793.5579023-739319-43501536054968/ > /dev/null 2>&1 && sleep 0' ok: [localhost] META: ran handlers TASK [Install cowsay] ************************************************************************************ task path: /home/pviktori/dev/one-offs/reproducer/ansible/playbook.yml:4 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: pviktori <127.0.0.1> EXEC /bin/sh -c 'echo ~pviktori && sleep 0' <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/pviktori/.ansible/tmp `"&& mkdir "` echo /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201 `" && echo ansible-tmp-1612533794.5429337-739396-272813355272201="` echo /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201 `" ) && sleep 0' Using module file /usr/lib/python3.9/site-packages/ansible/modules/packaging/os/dnf.py <127.0.0.1> PUT /home/pviktori/.ansible/tmp/ansible-local-739308mmzkf2pf/tmphh11llf1 TO /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/AnsiballZ_dnf.py <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/ /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/AnsiballZ_dnf.py && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=jthmjasmjwuvaloyjtzuddwvrhdiohqn] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-jthmjasmjwuvaloyjtzuddwvrhdiohqn ; /usr/bin/python3 /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/AnsiballZ_dnf.py'"'"' && sleep 0' <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/pviktori/.ansible/tmp/ansible-tmp-1612533794.5429337-739396-272813355272201/ > /dev/null 2>&1 && sleep 0' 
fatal: [localhost]: FAILED! => { "changed": false, "invocation": { "module_args": { "allow_downgrade": false, "autoremove": false, "bugfix": false, "conf_file": null, "disable_excludes": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_dir": null, "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "install_repoquery": true, "install_weak_deps": true, "installroot": "/", "list": null, "lock_timeout": 30, "name": [ "/usr/bin/cowsay" ], "releasever": null, "security": false, "skip_broken": false, "state": "latest", "update_cache": false, "update_only": false, "validate_certs": true } }, "msg": "No group usr/bin/cowsay available.", "results": [] } PLAY RECAP *********************************************************************************************** localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 ``` </details>
https://github.com/ansible/ansible/issues/73503
https://github.com/ansible/ansible/pull/74764
12f4b0db041869e2d96b07b3d6b99ac84934a96a
52430d42285735d6cdc45d7abed6bc99b2391dd5
2021-02-05T14:12:59Z
python
2021-05-24T17:02:28Z
lib/ansible/modules/dnf.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright 2015 Cristian van Ee <cristian at cvee.org> # Copyright 2015 Igor Gnatenko <[email protected]> # Copyright 2018 Adam Miller <[email protected]> # # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = ''' --- module: dnf version_added: 1.9 short_description: Manages packages with the I(dnf) package manager description: - Installs, upgrade, removes, and lists packages and groups with the I(dnf) package manager. options: name: description: - "A package name or package specifier with version, like C(name-1.0). When using state=latest, this can be '*' which means run: dnf -y update. You can also pass a url or a local path to a rpm file. To operate on several packages this can accept a comma separated string of packages or a list of packages." - Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name>=1.0) required: true aliases: - pkg type: list elements: str list: description: - Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples. type: str state: description: - Whether to install (C(present), C(latest)), or remove (C(absent)) a package. - Default is C(None), however in effect the default action is C(present) unless the C(autoremove) option is enabled for this module, then C(absent) is inferred. choices: ['absent', 'present', 'installed', 'removed', 'latest'] type: str enablerepo: description: - I(Repoid) of repositories to enable for the install/update operation. These repos will not persist beyond the transaction. When specifying multiple repos, separate them with a ",". type: list elements: str disablerepo: description: - I(Repoid) of repositories to disable for the install/update operation. These repos will not persist beyond the transaction. When specifying multiple repos, separate them with a ",". type: list elements: str conf_file: description: - The remote dnf configuration file to use for the transaction. type: str disable_gpg_check: description: - Whether to disable the GPG checking of signatures of packages being installed. Has an effect only if state is I(present) or I(latest). - This setting affects packages installed from a repository as well as "local" packages installed from the filesystem or a URL. type: bool default: 'no' installroot: description: - Specifies an alternative installroot, relative to which all packages will be installed. version_added: "2.3" default: "/" type: str releasever: description: - Specifies an alternative release from which all packages will be installed. version_added: "2.6" type: str autoremove: description: - If C(yes), removes all "leaf" packages from the system that were originally installed as dependencies of user-installed packages but which are no longer required by any such package. Should be used alone or when state is I(absent) type: bool default: "no" version_added: "2.4" exclude: description: - Package name(s) to exclude when state=present, or latest. This can be a list or a comma separated string. version_added: "2.7" type: list elements: str skip_broken: description: - Skip packages with broken dependencies(devsolve) and are causing problems. type: bool default: "no" version_added: "2.7" update_cache: description: - Force dnf to check if cache is out of date and redownload if needed. Has an effect only if state is I(present) or I(latest). 
type: bool default: "no" aliases: [ expire-cache ] version_added: "2.7" update_only: description: - When using latest, only update installed packages. Do not install packages. - Has an effect only if state is I(latest) default: "no" type: bool version_added: "2.7" security: description: - If set to C(yes), and C(state=latest) then only installs updates that have been marked security related. - Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well. type: bool default: "no" version_added: "2.7" bugfix: description: - If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related. - Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well. default: "no" type: bool version_added: "2.7" enable_plugin: description: - I(Plugin) name to enable for the install/update operation. The enabled plugin will not persist beyond the transaction. version_added: "2.7" type: list elements: str disable_plugin: description: - I(Plugin) name to disable for the install/update operation. The disabled plugins will not persist beyond the transaction. version_added: "2.7" type: list elements: str disable_excludes: description: - Disable the excludes defined in DNF config files. - If set to C(all), disables all excludes. - If set to C(main), disable excludes defined in [main] in dnf.conf. - If set to C(repoid), disable excludes defined for given repo id. version_added: "2.7" type: str validate_certs: description: - This only applies if using a https url as the source of the rpm. e.g. for localinstall. If set to C(no), the SSL certificates will not be validated. - This should only set to C(no) used on personally controlled sites using self-signed certificates as it avoids verifying the source site. type: bool default: "yes" version_added: "2.7" allow_downgrade: description: - Specify if the named package and version is allowed to downgrade a maybe already installed higher version of that package. Note that setting allow_downgrade=True can make this module behave in a non-idempotent way. The task could end up with a set of packages that does not match the complete list of specified packages to install (because dependencies between the downgraded package and others can cause changes to the packages which were in the earlier transaction). type: bool default: "no" version_added: "2.7" install_repoquery: description: - This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature parity/compatibility with the I(yum) module. type: bool default: "yes" version_added: "2.7" download_only: description: - Only download the packages, do not install them. default: "no" type: bool version_added: "2.7" lock_timeout: description: - Amount of time to wait for the dnf lockfile to be freed. required: false default: 30 type: int version_added: "2.8" install_weak_deps: description: - Will also install all packages linked by a weak dependency relation. type: bool default: "yes" version_added: "2.8" download_dir: description: - Specifies an alternate directory to store packages. - Has an effect only if I(download_only) is specified. type: str version_added: "2.8" allowerasing: description: - If C(yes) it allows erasing of installed packages to resolve dependencies. required: false type: bool default: "no" version_added: "2.10" nobest: description: - Set best option to False, so that transactions are not limited to best candidates only. 
required: false type: bool default: "no" version_added: "2.11" cacheonly: description: - Tells dnf to run entirely from system cache; does not download or update metadata. type: bool default: "no" version_added: "2.12" notes: - When used with a `loop:` each package will be processed individually, it is much more efficient to pass the list directly to the `name` option. - Group removal doesn't work if the group was installed with Ansible because upstream dnf's API doesn't properly mark groups as installed, therefore upon removal the module is unable to detect that the group is installed (https://bugzilla.redhat.com/show_bug.cgi?id=1620324) requirements: - "python >= 2.6" - python-dnf - for the autoremove option you need dnf >= 2.0.1" author: - Igor Gnatenko (@ignatenkobrain) <[email protected]> - Cristian van Ee (@DJMuggs) <cristian at cvee.org> - Berend De Schouwer (@berenddeschouwer) - Adam Miller (@maxamillion) <[email protected]> ''' EXAMPLES = ''' - name: Install the latest version of Apache dnf: name: httpd state: latest - name: Install Apache >= 2.4 dnf: name: httpd>=2.4 state: present - name: Install the latest version of Apache and MariaDB dnf: name: - httpd - mariadb-server state: latest - name: Remove the Apache package dnf: name: httpd state: absent - name: Install the latest version of Apache from the testing repo dnf: name: httpd enablerepo: testing state: present - name: Upgrade all packages dnf: name: "*" state: latest - name: Install the nginx rpm from a remote repo dnf: name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm' state: present - name: Install nginx rpm from a local file dnf: name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm state: present - name: Install the 'Development tools' package group dnf: name: '@Development tools' state: present - name: Autoremove unneeded packages installed as dependencies dnf: autoremove: yes - name: Uninstall httpd but keep its dependencies dnf: name: httpd state: absent autoremove: no - name: Install a modularity appstream with defined stream and profile dnf: name: '@postgresql:9.6/client' state: present - name: Install a modularity appstream with defined stream dnf: name: '@postgresql:9.6' state: present - name: Install a modularity appstream with defined profile dnf: name: '@postgresql/client' state: present ''' import os import re import sys from ansible.module_utils._text import to_native, to_text from ansible.module_utils.urls import fetch_file from ansible.module_utils.six import PY2, text_type from ansible.module_utils.compat.version import LooseVersion from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec try: import dnf import dnf.cli import dnf.const import dnf.exceptions import dnf.subject import dnf.util HAS_DNF = True except ImportError: HAS_DNF = False class DnfModule(YumDnf): """ DNF Ansible module back-end implementation """ def __init__(self, module): # This populates instance vars for all argument spec params super(DnfModule, self).__init__(module) self._ensure_dnf() self.lockfile = "/var/cache/dnf/*_lock.pid" self.pkg_mgr_name = "dnf" try: self.with_modules = dnf.base.WITH_MODULES except AttributeError: self.with_modules = False # DNF specific args that are not part of YumDnf self.allowerasing = self.module.params['allowerasing'] self.nobest = self.module.params['nobest'] def 
is_lockfile_pid_valid(self): # FIXME? it looks like DNF takes care of invalid lock files itself? # https://github.com/ansible/ansible/issues/57189 return True def _sanitize_dnf_error_msg_install(self, spec, error): """ For unhandled dnf.exceptions.Error scenarios, there are certain error messages we want to filter in an install scenario. Do that here. """ if ( to_text("no package matched") in to_text(error) or to_text("No match for argument:") in to_text(error) ): return "No package {0} available.".format(spec) return error def _sanitize_dnf_error_msg_remove(self, spec, error): """ For unhandled dnf.exceptions.Error scenarios, there are certain error messages we want to ignore in a removal scenario as known benign failures. Do that here. """ if ( 'no package matched' in to_native(error) or 'No match for argument:' in to_native(error) ): return (False, "{0} is not installed".format(spec)) # Return value is tuple of: # ("Is this actually a failure?", "Error Message") return (True, error) def _package_dict(self, package): """Return a dictionary of information for the package.""" # NOTE: This no longer contains the 'dnfstate' field because it is # already known based on the query type. result = { 'name': package.name, 'arch': package.arch, 'epoch': str(package.epoch), 'release': package.release, 'version': package.version, 'repo': package.repoid} result['nevra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format( **result) if package.installtime == 0: result['yumstate'] = 'available' else: result['yumstate'] = 'installed' return result def _packagename_dict(self, packagename): """ Return a dictionary of information for a package name string or None if the package name doesn't contain at least all NVR elements """ if packagename[-4:] == '.rpm': packagename = packagename[:-4] # This list was auto generated on a Fedora 28 system with the following one-liner # printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n' redhat_rpm_arches = [ "aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha", "alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel", "armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon", "geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el", "mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6", "noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64", "ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries", "riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v", "sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64" ] rpm_arch_re = re.compile(r'(.*)\.(.*)') rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)') try: arch = None rpm_arch_match = rpm_arch_re.match(packagename) if rpm_arch_match: nevr, arch = rpm_arch_match.groups() if arch in redhat_rpm_arches: packagename = nevr rpm_nevr_match = rpm_nevr_re.match(packagename) if rpm_nevr_match: name, epoch, version, release = rpm_nevr_re.match(packagename).groups() if not version or not version.split('.')[0].isdigit(): return None else: return None except AttributeError as e: self.module.fail_json( msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)), rc=1, results=[] ) if not epoch: epoch = "0" if ':' in name: epoch_name = name.split(":") epoch = epoch_name[0] name = ''.join(epoch_name[1:]) result = { 'name': name, 'epoch': epoch, 'release': release, 'version': version, } 
return result # Original implementation from yum.rpmUtils.miscutils (GPLv2+) # http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py def _compare_evr(self, e1, v1, r1, e2, v2, r2): # return 1: a is newer than b # 0: a and b are the same version # -1: b is newer than a if e1 is None: e1 = '0' else: e1 = str(e1) v1 = str(v1) r1 = str(r1) if e2 is None: e2 = '0' else: e2 = str(e2) v2 = str(v2) r2 = str(r2) # print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2) rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2)) # print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc) return rc def _ensure_dnf(self): if HAS_DNF: return system_interpreters = ['/usr/libexec/platform-python', '/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python'] if not has_respawned(): # probe well-known system Python locations for accessible bindings, favoring py3 interpreter = probe_interpreters_for_module(system_interpreters, 'dnf') if interpreter: # respawn under the interpreter where the bindings should be found respawn_module(interpreter) # end of the line for this module, the process will exit here once the respawned module completes # done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed) self.module.fail_json( msg="Could not import the dnf python module using {0} ({1}). " "Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the " "correct ansible_python_interpreter. (attempted {2})" .format(sys.executable, sys.version.replace('\n', ''), system_interpreters), results=[] ) def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/'): """Configure the dnf Base object.""" conf = base.conf # Change the configuration file path if provided, this must be done before conf.read() is called if conf_file: # Fail if we can't read the configuration file. 
if not os.access(conf_file, os.R_OK): self.module.fail_json( msg="cannot read configuration file", conf_file=conf_file, results=[], ) else: conf.config_file_path = conf_file # Read the configuration file conf.read() # Turn off debug messages in the output conf.debuglevel = 0 # Set whether to check gpg signatures conf.gpgcheck = not disable_gpg_check conf.localpkg_gpgcheck = not disable_gpg_check # Don't prompt for user confirmations conf.assumeyes = True # Set installroot conf.installroot = installroot # Load substitutions from the filesystem conf.substitutions.update_from_etc(installroot) # Handle different DNF versions immutable mutable datatypes and # dnf v1/v2/v3 # # In DNF < 3.0 are lists, and modifying them works # In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work # In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work # # https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/ # # Set excludes if self.exclude: _excludes = list(conf.exclude) _excludes.extend(self.exclude) conf.exclude = _excludes # Set disable_excludes if self.disable_excludes: _disable_excludes = list(conf.disable_excludes) if self.disable_excludes not in _disable_excludes: _disable_excludes.append(self.disable_excludes) conf.disable_excludes = _disable_excludes # Set releasever if self.releasever is not None: conf.substitutions['releasever'] = self.releasever # Set skip_broken (in dnf this is strict=0) if self.skip_broken: conf.strict = 0 # Set best if self.nobest: conf.best = 0 if self.download_only: conf.downloadonly = True if self.download_dir: conf.destdir = self.download_dir if self.cacheonly: conf.cacheonly = True # Default in dnf upstream is true conf.clean_requirements_on_remove = self.autoremove # Default in dnf (and module default) is True conf.install_weak_deps = self.install_weak_deps def _specify_repositories(self, base, disablerepo, enablerepo): """Enable and disable repositories matching the provided patterns.""" base.read_all_repos() repos = base.repos # Disable repositories for repo_pattern in disablerepo: if repo_pattern: for repo in repos.get_matching(repo_pattern): repo.disable() # Enable repositories for repo_pattern in enablerepo: if repo_pattern: for repo in repos.get_matching(repo_pattern): repo.enable() def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot): """Return a fully configured dnf Base object.""" base = dnf.Base() self._configure_base(base, conf_file, disable_gpg_check, installroot) try: # this method has been supported in dnf-4.2.17-6 or later # https://bugzilla.redhat.com/show_bug.cgi?id=1788212 base.setup_loggers() except AttributeError: pass try: base.init_plugins(set(self.disable_plugin), set(self.enable_plugin)) base.pre_configure_plugins() except AttributeError: pass # older versions of dnf didn't require this and don't have these methods self._specify_repositories(base, disablerepo, enablerepo) try: base.configure_plugins() except AttributeError: pass # older versions of dnf didn't require this and don't have these methods try: if self.update_cache: try: base.update_cache() except dnf.exceptions.RepoError as e: self.module.fail_json( msg="{0}".format(to_text(e)), results=[], rc=1 ) base.fill_sack(load_system_repo='auto') except dnf.exceptions.RepoError as e: self.module.fail_json( msg="{0}".format(to_text(e)), results=[], rc=1 ) filters = [] if self.bugfix: key = {'advisory_type__eq': 'bugfix'} filters.append(base.sack.query().upgrades().filter(**key)) 
if self.security: key = {'advisory_type__eq': 'security'} filters.append(base.sack.query().upgrades().filter(**key)) if filters: base._update_security_filters = filters return base def list_items(self, command): """List package info based on the command.""" # Rename updates to upgrades if command == 'updates': command = 'upgrades' # Return the corresponding packages if command in ['installed', 'upgrades', 'available']: results = [ self._package_dict(package) for package in getattr(self.base.sack.query(), command)()] # Return the enabled repository ids elif command in ['repos', 'repositories']: results = [ {'repoid': repo.id, 'state': 'enabled'} for repo in self.base.repos.iter_enabled()] # Return any matching packages else: packages = dnf.subject.Subject(command).get_best_query(self.base.sack) results = [self._package_dict(package) for package in packages] self.module.exit_json(msg="", results=results) def _is_installed(self, pkg): installed = self.base.sack.query().installed() if installed.filter(name=pkg): return True else: return False def _is_newer_version_installed(self, pkg_name): candidate_pkg = self._packagename_dict(pkg_name) if not candidate_pkg: # The user didn't provide a versioned rpm, so version checking is # not required return False installed = self.base.sack.query().installed() installed_pkg = installed.filter(name=candidate_pkg['name']).run() if installed_pkg: installed_pkg = installed_pkg[0] # this looks weird but one is a dict and the other is a dnf.Package evr_cmp = self._compare_evr( installed_pkg.epoch, installed_pkg.version, installed_pkg.release, candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'], ) if evr_cmp == 1: return True else: return False else: return False def _mark_package_install(self, pkg_spec, upgrade=False): """Mark the package for install.""" is_newer_version_installed = self._is_newer_version_installed(pkg_spec) is_installed = self._is_installed(pkg_spec) try: if is_newer_version_installed: if self.allow_downgrade: # dnf only does allow_downgrade, we have to handle this ourselves # because it allows a possibility for non-idempotent transactions # on a system's package set (pending the yum repo has many old # NVRs indexed) if upgrade: if is_installed: self.base.upgrade(pkg_spec) else: self.base.install(pkg_spec) else: self.base.install(pkg_spec) else: # Nothing to do, report back pass elif is_installed: # An potentially older (or same) version is installed if upgrade: self.base.upgrade(pkg_spec) else: # Nothing to do, report back pass else: # The package is not installed, simply install it self.base.install(pkg_spec) return {'failed': False, 'msg': '', 'failure': '', 'rc': 0} except dnf.exceptions.MarkingError as e: return { 'failed': True, 'msg': "No package {0} available.".format(pkg_spec), 'failure': " ".join((pkg_spec, to_native(e))), 'rc': 1, "results": [] } except dnf.exceptions.DepsolveError as e: return { 'failed': True, 'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec), 'failure': " ".join((pkg_spec, to_native(e))), 'rc': 1, "results": [] } except dnf.exceptions.Error as e: if to_text("already installed") in to_text(e): return {'failed': False, 'msg': '', 'failure': ''} else: return { 'failed': True, 'msg': "Unknown Error occurred for package {0}.".format(pkg_spec), 'failure': " ".join((pkg_spec, to_native(e))), 'rc': 1, "results": [] } def _whatprovides(self, filepath): available = self.base.sack.query().available() pkg_spec = available.filter(provides=filepath).run() if pkg_spec: return 
pkg_spec[0].name def _parse_spec_group_file(self): pkg_specs, grp_specs, module_specs, filenames = [], [], [], [] already_loaded_comps = False # Only load this if necessary, it's slow for name in self.names: if '://' in name: name = fetch_file(self.module, name) filenames.append(name) elif name.endswith(".rpm"): filenames.append(name) elif name.startswith("@") or ('/' in name): # like "dnf install /usr/bin/vi" if '/' in name: pkg_spec = self._whatprovides(name) if pkg_spec: pkg_specs.append(pkg_spec) continue if not already_loaded_comps: self.base.read_comps() already_loaded_comps = True grp_env_mdl_candidate = name[1:].strip() if self.with_modules: mdl = self.module_base._get_modules(grp_env_mdl_candidate) if mdl[0]: module_specs.append(grp_env_mdl_candidate) else: grp_specs.append(grp_env_mdl_candidate) else: grp_specs.append(grp_env_mdl_candidate) else: pkg_specs.append(name) return pkg_specs, grp_specs, module_specs, filenames def _update_only(self, pkgs): not_installed = [] for pkg in pkgs: if self._is_installed(pkg): try: if isinstance(to_text(pkg), text_type): self.base.upgrade(pkg) else: self.base.package_upgrade(pkg) except Exception as e: self.module.fail_json( msg="Error occurred attempting update_only operation: {0}".format(to_native(e)), results=[], rc=1, ) else: not_installed.append(pkg) return not_installed def _install_remote_rpms(self, filenames): if int(dnf.__version__.split(".")[0]) >= 2: pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True)) else: pkgs = [] try: for filename in filenames: pkgs.append(self.base.add_remote_rpm(filename)) except IOError as e: if to_text("Can not load RPM file") in to_text(e): self.module.fail_json( msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)), results=[], rc=1, ) if self.update_only: self._update_only(pkgs) else: for pkg in pkgs: try: if self._is_newer_version_installed(self._package_dict(pkg)['nevra']): if self.allow_downgrade: self.base.package_install(pkg) else: self.base.package_install(pkg) except Exception as e: self.module.fail_json( msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)), results=[], rc=1, ) def _is_module_installed(self, module_spec): if self.with_modules: module_spec = module_spec.strip() module_list, nsv = self.module_base._get_modules(module_spec) enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name) if enabled_streams: if nsv.stream: if nsv.stream in enabled_streams: return True # The provided stream was found else: return False # The provided stream was not found else: return True # No stream provided, but module found return False # seems like a sane default def ensure(self): response = { 'msg': "", 'changed': False, 'results': [], 'rc': 0 } # Accumulate failures. Package management modules install what they can # and fail with a message about what they can't. 
failure_response = { 'msg': "", 'failures': [], 'results': [], 'rc': 1 } # Autoremove is called alone # Jump to remove path where base.autoremove() is run if not self.names and self.autoremove: self.names = [] self.state = 'absent' if self.names == ['*'] and self.state == 'latest': try: self.base.upgrade_all() except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages" self.module.fail_json(**failure_response) else: pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file() pkg_specs = [p.strip() for p in pkg_specs] filenames = [f.strip() for f in filenames] groups = [] environments = [] for group_spec in (g.strip() for g in group_specs): group = self.base.comps.group_by_pattern(group_spec) if group: groups.append(group.id) else: environment = self.base.comps.environment_by_pattern(group_spec) if environment: environments.append(environment.id) else: self.module.fail_json( msg="No group {0} available.".format(group_spec), results=[], ) if self.state in ['installed', 'present']: # Install files. self._install_remote_rpms(filenames) for filename in filenames: response['results'].append("Installed {0}".format(filename)) # Install modules if module_specs and self.with_modules: for module in module_specs: try: if not self._is_module_installed(module): response['results'].append("Module {0} installed.".format(module)) self.module_base.install([module]) self.module_base.enable([module]) except dnf.exceptions.MarkingErrors as e: failure_response['failures'].append(' '.join((module, to_native(e)))) # Install groups. for group in groups: try: group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES) if group_pkg_count_installed == 0: response['results'].append("Group {0} already installed.".format(group)) else: response['results'].append("Group {0} installed.".format(group)) except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group) self.module.fail_json(**failure_response) except dnf.exceptions.Error as e: # In dnf 2.0 if all the mandatory packages in a group do # not install, an error is raised. We want to capture # this but still install as much as possible. failure_response['failures'].append(" ".join((group, to_native(e)))) for environment in environments: try: self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES) except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment) self.module.fail_json(**failure_response) except dnf.exceptions.Error as e: failure_response['failures'].append(" ".join((environment, to_native(e)))) if module_specs and not self.with_modules: # This means that the group or env wasn't found in comps self.module.fail_json( msg="No group {0} available.".format(module_specs[0]), results=[], ) # Install packages. 
if self.update_only: not_installed = self._update_only(pkg_specs) for spec in not_installed: response['results'].append("Packages providing %s not installed due to update_only specified" % spec) else: for pkg_spec in pkg_specs: install_result = self._mark_package_install(pkg_spec) if install_result['failed']: if install_result['msg']: failure_response['msg'] += install_result['msg'] failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure'])) else: if install_result['msg']: response['results'].append(install_result['msg']) elif self.state == 'latest': # "latest" is same as "installed" for filenames. self._install_remote_rpms(filenames) for filename in filenames: response['results'].append("Installed {0}".format(filename)) # Upgrade modules if module_specs and self.with_modules: for module in module_specs: try: if self._is_module_installed(module): response['results'].append("Module {0} upgraded.".format(module)) self.module_base.upgrade([module]) except dnf.exceptions.MarkingErrors as e: failure_response['failures'].append(' '.join((module, to_native(e)))) for group in groups: try: try: self.base.group_upgrade(group) response['results'].append("Group {0} upgraded.".format(group)) except dnf.exceptions.CompsError: if not self.update_only: # If not already installed, try to install. group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES) if group_pkg_count_installed == 0: response['results'].append("Group {0} already installed.".format(group)) else: response['results'].append("Group {0} installed.".format(group)) except dnf.exceptions.Error as e: failure_response['failures'].append(" ".join((group, to_native(e)))) for environment in environments: try: try: self.base.environment_upgrade(environment) except dnf.exceptions.CompsError: # If not already installed, try to install. 
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES) except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment) except dnf.exceptions.Error as e: failure_response['failures'].append(" ".join((environment, to_native(e)))) if self.update_only: not_installed = self._update_only(pkg_specs) for spec in not_installed: response['results'].append("Packages providing %s not installed due to update_only specified" % spec) else: for pkg_spec in pkg_specs: # best effort causes to install the latest package # even if not previously installed self.base.conf.best = True install_result = self._mark_package_install(pkg_spec, upgrade=True) if install_result['failed']: if install_result['msg']: failure_response['msg'] += install_result['msg'] failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure'])) else: if install_result['msg']: response['results'].append(install_result['msg']) else: # state == absent if filenames: self.module.fail_json( msg="Cannot remove paths -- please specify package name.", results=[], ) # Remove modules if module_specs and self.with_modules: for module in module_specs: try: if self._is_module_installed(module): response['results'].append("Module {0} removed.".format(module)) self.module_base.remove([module]) self.module_base.disable([module]) self.module_base.reset([module]) except dnf.exceptions.MarkingErrors as e: failure_response['failures'].append(' '.join((module, to_native(e)))) for group in groups: try: self.base.group_remove(group) except dnf.exceptions.CompsError: # Group is already uninstalled. pass except AttributeError: # Group either isn't installed or wasn't marked installed at install time # because of DNF bug # # This is necessary until the upstream dnf API bug is fixed where installing # a group via the dnf API doesn't actually mark the group as installed # https://bugzilla.redhat.com/show_bug.cgi?id=1620324 pass for environment in environments: try: self.base.environment_remove(environment) except dnf.exceptions.CompsError: # Environment is already uninstalled. pass installed = self.base.sack.query().installed() for pkg_spec in pkg_specs: # short-circuit installed check for wildcard matching if '*' in pkg_spec: try: self.base.remove(pkg_spec) except dnf.exceptions.MarkingError as e: is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e)) if is_failure: failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e))) else: response['results'].append(handled_remove_error) continue installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query( sack=self.base.sack).installed().run() for pkg in installed_pkg: self.base.remove(str(pkg)) # Like the dnf CLI we want to allow recursive removal of dependent # packages self.allowerasing = True if self.autoremove: self.base.autoremove() try: if not self.base.resolve(allow_erasing=self.allowerasing): if failure_response['failures']: failure_response['msg'] = 'Failed to install some of the specified packages' self.module.fail_json(**failure_response) response['msg'] = "Nothing to do" self.module.exit_json(**response) else: response['changed'] = True # If packages got installed/removed, add them to the results. # We do this early so we can use it for both check_mode and not. 
if self.download_only: install_action = 'Downloaded' else: install_action = 'Installed' for package in self.base.transaction.install_set: response['results'].append("{0}: {1}".format(install_action, package)) for package in self.base.transaction.remove_set: response['results'].append("Removed: {0}".format(package)) if failure_response['failures']: failure_response['msg'] = 'Failed to install some of the specified packages' self.module.fail_json(**failure_response) if self.module.check_mode: response['msg'] = "Check mode: No changes made, but would have if not in check mode" self.module.exit_json(**response) try: if self.download_only and self.download_dir and self.base.conf.destdir: dnf.util.ensure_dir(self.base.conf.destdir) self.base.repos.all().pkgdir = self.base.conf.destdir self.base.download_packages(self.base.transaction.install_set) except dnf.exceptions.DownloadError as e: self.module.fail_json( msg="Failed to download packages: {0}".format(to_text(e)), results=[], ) # Validate GPG. This is NOT done in dnf.Base (it's done in the # upstream CLI subclass of dnf.Base) if not self.disable_gpg_check: for package in self.base.transaction.install_set: fail = False gpgres, gpgerr = self.base._sig_check_pkg(package) if gpgres == 0: # validated successfully continue elif gpgres == 1: # validation failed, install cert? try: self.base._get_key_for_package(package) except dnf.exceptions.Error as e: fail = True else: # fatal error fail = True if fail: msg = 'Failed to validate GPG signature for {0}'.format(package) self.module.fail_json(msg) if self.download_only: # No further work left to do, and the results were already updated above. # Just return them. self.module.exit_json(**response) else: self.base.do_transaction() if failure_response['failures']: failure_response['msg'] = 'Failed to install some of the specified packages' self.module.exit_json(**response) self.module.exit_json(**response) except dnf.exceptions.DepsolveError as e: failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e)) self.module.fail_json(**failure_response) except dnf.exceptions.Error as e: if to_text("already installed") in to_text(e): response['changed'] = False response['results'].append("Package already installed: {0}".format(to_native(e))) self.module.exit_json(**response) else: failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e)) self.module.fail_json(**failure_response) @staticmethod def has_dnf(): return HAS_DNF def run(self): """The main function.""" # Check if autoremove is called correctly if self.autoremove: if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'): self.module.fail_json( msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__, results=[], ) # Check if download_dir is called correctly if self.download_dir: if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'): self.module.fail_json( msg="download_dir requires dnf>=2.6.2. 
Current dnf version is %s" % dnf.__version__, results=[], ) if self.update_cache and not self.names and not self.list: self.base = self._base( self.conf_file, self.disable_gpg_check, self.disablerepo, self.enablerepo, self.installroot ) self.module.exit_json( msg="Cache updated", changed=False, results=[], rc=0 ) # Set state as installed by default # This is not set in AnsibleModule() because the following shouldn't happen # - dnf: autoremove=yes state=installed if self.state is None: self.state = 'installed' if self.list: self.base = self._base( self.conf_file, self.disable_gpg_check, self.disablerepo, self.enablerepo, self.installroot ) self.list_items(self.list) else: # Note: base takes a long time to run so we want to check for failure # before running it. if not dnf.util.am_i_root(): self.module.fail_json( msg="This command has to be run under the root user.", results=[], ) self.base = self._base( self.conf_file, self.disable_gpg_check, self.disablerepo, self.enablerepo, self.installroot ) if self.with_modules: self.module_base = dnf.module.module_base.ModuleBase(self.base) self.ensure() def main(): # state=installed name=pkgspec # state=removed name=pkgspec # state=latest name=pkgspec # # informational commands: # list=installed # list=updates # list=available # list=repos # list=pkgspec # Extend yumdnf_argument_spec with dnf-specific features that will never be # backported to yum because yum is now in "maintenance mode" upstream yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool') yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool') module = AnsibleModule( **yumdnf_argument_spec ) module_implementation = DnfModule(module) try: module_implementation.run() except dnf.exceptions.RepoError as de: module.fail_json( msg="Failed to synchronize repodata: {0}".format(to_native(de)), rc=1, results=[], changed=False ) if __name__ == '__main__': main()
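The `_whatprovides` helper visible in the module source above is the heart of the fix: it turns a file path into a package name with a sack query before the spec reaches the group-handling branch of `_parse_spec_group_file` (which would explain why the pre-fix error read "No group usr/bin/cowsay available." with the leading slash stripped by `name[1:]`). Below is a minimal standalone sketch of the same query, assuming the `python3-dnf` bindings are importable; it mirrors the calls the module itself makes and is not a verbatim excerpt:

```python
import dnf

# Resolve a file path to the package that provides it, mirroring the
# _whatprovides() helper in the module source above.
base = dnf.Base()
base.read_all_repos()                    # load enabled repository definitions
base.fill_sack(load_system_repo='auto')  # build the package sack
available = base.sack.query().available()
matches = available.filter(provides='/usr/bin/cowsay').run()
if matches:
    print(matches[0].name)  # e.g. "cowsay" on Fedora
else:
    print("nothing provides that path")
```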
closed
ansible/ansible
https://github.com/ansible/ansible
74,507
service: daemon-reload not documented
### Summary `daemon-reload` in `service` is not documented but apparently works and has an impact, at least on Ubuntu (tested: 20.04), where it operates on `systemd` internally. ### Issue Type Documentation Report ### Component Name lib/ansible/modules/service.py ### Ansible Version ```console $ ansible --version ansible 2.10.8 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/peter/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/peter/test/lib/python3.9/site-packages/ansible executable location = /home/peter/test/bin/ansible python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0] $ ``` ### Configuration ```console $ ansible-config dump --only-changed $ ``` ### OS / Environment Controller: ``` $ uname -a Linux ws-arch-tux 5.10.29-1-lts #1 SMP Sat, 10 Apr 2021 14:40:41 +0000 x86_64 GNU/Linux $ ``` Target: ``` $ uname -a Linux instance 5.4.0-70-generic #78-Ubuntu SMP Fri Mar 19 13:29:52 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.2 LTS Release: 20.04 Codename: focal $ ``` ### Additional Information If `daemon-reload` is not specified, its default behaviour seems to be `false`. In this state `service` does not trigger a daemon-reload, which is needed when the unit file changed and the service is still running from the old one and needs to be restarted. If `daemon-reload` is set to `true`, a daemon-reload seems to be executed only if it is needed for the service and state specified. TODO: - verify impact of `daemon-reload` on systems using systemd - impact on systems using other subsystems - document the property in `ansible.builtin.service` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74507
https://github.com/ansible/ansible/pull/74726
9c01a7e1776cfe486c40b945414f788112fe0df8
27f61db86b69743181529dd6ee34951b244e075e
2021-04-30T09:48:16Z
python
2021-05-25T15:25:21Z
changelogs/fragments/74507_service.yml
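For orientation, the undocumented behaviour this report describes corresponds to a task like the one below. This is an illustration, not the documentation the issue asks for: `myapp` is a hypothetical unit name, and the underscore spelling `daemon_reload` is the usual Ansible parameter form (the report writes it with a hyphen):

```yaml
# Illustrative only - exercises the undocumented option described above.
- name: Restart myapp, asking systemd to re-read unit files first
  ansible.builtin.service:
    name: myapp         # hypothetical service/unit name
    state: restarted
    daemon_reload: yes  # per the report, honoured on systemd-managed hosts
```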
closed
ansible/ansible
https://github.com/ansible/ansible
74,507
service: daemon-reload not documented
### Summary `daemon-reload` in `service` is not documented but apparently works and has an impact, at least on Ubuntu (tested: 20.04), where it operates on `systemd` internally. ### Issue Type Documentation Report ### Component Name lib/ansible/modules/service.py ### Ansible Version ```console $ ansible --version ansible 2.10.8 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/peter/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/peter/test/lib/python3.9/site-packages/ansible executable location = /home/peter/test/bin/ansible python version = 3.9.4 (default, Apr 20 2021, 15:51:38) [GCC 10.2.0] $ ``` ### Configuration ```console $ ansible-config dump --only-changed $ ``` ### OS / Environment Controller: ``` $ uname -a Linux ws-arch-tux 5.10.29-1-lts #1 SMP Sat, 10 Apr 2021 14:40:41 +0000 x86_64 GNU/Linux $ ``` Target: ``` $ uname -a Linux instance 5.4.0-70-generic #78-Ubuntu SMP Fri Mar 19 13:29:52 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.2 LTS Release: 20.04 Codename: focal $ ``` ### Additional Information If `daemon-reload` is not specified, its default behaviour seems to be `false`. In this state `service` does not trigger a daemon-reload, which is needed when the unit file changed and the service is still running from the old one and needs to be restarted. If `daemon-reload` is set to `true`, a daemon-reload seems to be executed only if it is needed for the service and state specified. TODO: - verify impact of `daemon-reload` on systems using systemd - impact on systems using other subsystems - document the property in `ansible.builtin.service` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
https://github.com/ansible/ansible/issues/74507
https://github.com/ansible/ansible/pull/74726
9c01a7e1776cfe486c40b945414f788112fe0df8
27f61db86b69743181529dd6ee34951b244e075e
2021-04-30T09:48:16Z
python
2021-05-25T15:25:21Z
lib/ansible/modules/service.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2012, Michael DeHaan <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type DOCUMENTATION = r''' --- module: service version_added: "0.1" short_description: Manage services description: - Controls services on remote hosts. Supported init systems include BSD init, OpenRC, SysV, Solaris SMF, systemd, upstart. - For Windows targets, use the M(ansible.windows.win_service) module instead. options: name: description: - Name of the service. type: str required: true state: description: - C(started)/C(stopped) are idempotent actions that will not run commands unless necessary. - C(restarted) will always bounce the service. - C(reloaded) will always reload. - B(At least one of state and enabled are required.) - Note that reloaded will start the service if it is not already started, even if your chosen init system wouldn't normally. type: str choices: [ reloaded, restarted, started, stopped ] sleep: description: - If the service is being C(restarted) then sleep this many seconds between the stop and start command. - This helps to work around badly-behaving init scripts that exit immediately after signaling a process to stop. - Not all service managers support sleep, i.e when using systemd this setting will be ignored. type: int version_added: "1.3" pattern: description: - If the service does not respond to the status command, name a substring to look for as would be found in the output of the I(ps) command as a stand-in for a status result. - If the string is found, the service will be assumed to be started. - While using remote hosts with systemd this setting will be ignored. type: str version_added: "0.7" enabled: description: - Whether the service should start on boot. - B(At least one of state and enabled are required.) type: bool runlevel: description: - For OpenRC init scripts (e.g. Gentoo) only. - The runlevel that this service belongs to. - While using remote hosts with systemd this setting will be ignored. type: str default: default arguments: description: - Additional arguments provided on the command line. - While using remote hosts with systemd this setting will be ignored. type: str aliases: [ args ] use: description: - The service module actually uses system specific modules, normally through auto detection, this setting can force a specific module. - Normally it uses the value of the 'ansible_service_mgr' fact and falls back to the old 'service' module when none matching is found. type: str default: auto version_added: 2.2 notes: - For AIX, group subsystem names can be used. - Supports C(check_mode). 
seealso: - module: ansible.windows.win_service author: - Ansible Core Team - Michael DeHaan ''' EXAMPLES = r''' - name: Start service httpd, if not started ansible.builtin.service: name: httpd state: started - name: Stop service httpd, if started ansible.builtin.service: name: httpd state: stopped - name: Restart service httpd, in all cases ansible.builtin.service: name: httpd state: restarted - name: Reload service httpd, in all cases ansible.builtin.service: name: httpd state: reloaded - name: Enable service httpd, and not touch the state ansible.builtin.service: name: httpd enabled: yes - name: Start service foo, based on running process /usr/bin/foo ansible.builtin.service: name: foo pattern: /usr/bin/foo state: started - name: Restart network service for interface eth0 ansible.builtin.service: name: network state: restarted args: eth0 ''' RETURN = r'''#''' import glob import json import os import platform import re import select import shlex import subprocess import tempfile import time # The distutils module is not shipped with SUNWPython on Solaris. # It's in the SUNWPython-devel package which also contains development files # that don't belong on production boxes. Since our Solaris code doesn't # depend on LooseVersion, do not import it on Solaris. if platform.system() != 'SunOS': from ansible.module_utils.compat.version import LooseVersion from ansible.module_utils._text import to_bytes, to_text from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.common.sys_info import get_platform_subclass from ansible.module_utils.service import fail_if_missing from ansible.module_utils.six import PY2, b class Service(object): """ This is the generic Service manipulation class that is subclassed based on platform. A subclass should override the following action methods:- - get_service_tools - service_enable - get_service_status - service_control All subclasses MUST define platform and distribution (which may be None). """ platform = 'Generic' distribution = None def __new__(cls, *args, **kwargs): new_cls = get_platform_subclass(Service) return super(cls, new_cls).__new__(new_cls) def __init__(self, module): self.module = module self.name = module.params['name'] self.state = module.params['state'] self.sleep = module.params['sleep'] self.pattern = module.params['pattern'] self.enable = module.params['enabled'] self.runlevel = module.params['runlevel'] self.changed = False self.running = None self.crashed = None self.action = None self.svc_cmd = None self.svc_initscript = None self.svc_initctl = None self.enable_cmd = None self.arguments = module.params.get('arguments', '') self.rcconf_file = None self.rcconf_key = None self.rcconf_value = None self.svc_change = False # =========================================== # Platform specific methods (must be replaced by subclass). def get_service_tools(self): self.module.fail_json(msg="get_service_tools not implemented on target platform") def service_enable(self): self.module.fail_json(msg="service_enable not implemented on target platform") def get_service_status(self): self.module.fail_json(msg="get_service_status not implemented on target platform") def service_control(self): self.module.fail_json(msg="service_control not implemented on target platform") # =========================================== # Generic methods that should be used on all platforms. 
def execute_command(self, cmd, daemonize=False): # Most things don't need to be daemonized if not daemonize: # chkconfig localizes messages and we're screen scraping so make # sure we use the C locale lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C') return self.module.run_command(cmd, environ_update=lang_env) # This is complex because daemonization is hard for people. # What we do is daemonize a part of this module, the daemon runs the # command, picks up the return code and output, and returns it to the # main process. pipe = os.pipe() pid = os.fork() if pid == 0: os.close(pipe[0]) # Set stdin/stdout/stderr to /dev/null fd = os.open(os.devnull, os.O_RDWR) if fd != 0: os.dup2(fd, 0) if fd != 1: os.dup2(fd, 1) if fd != 2: os.dup2(fd, 2) if fd not in (0, 1, 2): os.close(fd) # Make us a daemon. Yes, that's all it takes. pid = os.fork() if pid > 0: os._exit(0) os.setsid() os.chdir("/") pid = os.fork() if pid > 0: os._exit(0) # Start the command if PY2: # Python 2.6's shlex.split can't handle text strings correctly cmd = to_bytes(cmd, errors='surrogate_or_strict') cmd = shlex.split(cmd) else: # Python3.x shex.split text strings. cmd = to_text(cmd, errors='surrogate_or_strict') cmd = [to_bytes(c, errors='surrogate_or_strict') for c in shlex.split(cmd)] # In either of the above cases, pass a list of byte strings to Popen # chkconfig localizes messages and we're screen scraping so make # sure we use the C locale lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C') p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=lang_env, preexec_fn=lambda: os.close(pipe[1])) stdout = b("") stderr = b("") fds = [p.stdout, p.stderr] # Wait for all output, or until the main process is dead and its output is done. while fds: rfd, wfd, efd = select.select(fds, [], fds, 1) if not (rfd + wfd + efd) and p.poll() is not None: break if p.stdout in rfd: dat = os.read(p.stdout.fileno(), 4096) if not dat: fds.remove(p.stdout) stdout += dat if p.stderr in rfd: dat = os.read(p.stderr.fileno(), 4096) if not dat: fds.remove(p.stderr) stderr += dat p.wait() # Return a JSON blob to parent blob = json.dumps([p.returncode, to_text(stdout), to_text(stderr)]) os.write(pipe[1], to_bytes(blob, errors='surrogate_or_strict')) os.close(pipe[1]) os._exit(0) elif pid == -1: self.module.fail_json(msg="unable to fork") else: os.close(pipe[1]) os.waitpid(pid, 0) # Wait for data from daemon process and process it. 
data = b("") while True: rfd, wfd, efd = select.select([pipe[0]], [], [pipe[0]]) if pipe[0] in rfd: dat = os.read(pipe[0], 4096) if not dat: break data += dat return json.loads(to_text(data, errors='surrogate_or_strict')) def check_ps(self): # Set ps flags if platform.system() == 'SunOS': psflags = '-ef' else: psflags = 'auxww' # Find ps binary psbin = self.module.get_bin_path('ps', True) (rc, psout, pserr) = self.execute_command('%s %s' % (psbin, psflags)) # If rc is 0, set running as appropriate if rc == 0: self.running = False lines = psout.split("\n") for line in lines: if self.pattern in line and "pattern=" not in line: # so as to not confuse ./hacking/test-module.py self.running = True break def check_service_changed(self): if self.state and self.running is None: self.module.fail_json(msg="failed determining service state, possible typo of service name?") # Find out if state has changed if not self.running and self.state in ["reloaded", "started"]: self.svc_change = True elif self.running and self.state in ["reloaded", "stopped"]: self.svc_change = True elif self.state == "restarted": self.svc_change = True if self.module.check_mode and self.svc_change: self.module.exit_json(changed=True, msg='service state changed') def modify_service_state(self): # Only do something if state will change if self.svc_change: # Control service if self.state in ['started']: self.action = "start" elif not self.running and self.state == 'reloaded': self.action = "start" elif self.state == 'stopped': self.action = "stop" elif self.state == 'reloaded': self.action = "reload" elif self.state == 'restarted': self.action = "restart" if self.module.check_mode: self.module.exit_json(changed=True, msg='changing service state') return self.service_control() else: # If nothing needs to change just say all is well rc = 0 err = '' out = '' return rc, out, err def service_enable_rcconf(self): if self.rcconf_file is None or self.rcconf_key is None or self.rcconf_value is None: self.module.fail_json(msg="service_enable_rcconf() requires rcconf_file, rcconf_key and rcconf_value") self.changed = None entry = '%s="%s"\n' % (self.rcconf_key, self.rcconf_value) RCFILE = open(self.rcconf_file, "r") new_rc_conf = [] # Build a list containing the possibly modified file. for rcline in RCFILE: # Parse line removing whitespaces, quotes, etc. rcarray = shlex.split(rcline, comments=True) if len(rcarray) >= 1 and '=' in rcarray[0]: (key, value) = rcarray[0].split("=", 1) if key == self.rcconf_key: if value.upper() == self.rcconf_value: # Since the proper entry already exists we can stop iterating. self.changed = False break else: # We found the key but the value is wrong, replace with new entry. rcline = entry self.changed = True # Add line to the list. new_rc_conf.append(rcline.strip() + '\n') # We are done with reading the current rc.conf, close it. RCFILE.close() # If we did not see any trace of our entry we need to add it. if self.changed is None: new_rc_conf.append(entry) self.changed = True if self.changed is True: if self.module.check_mode: self.module.exit_json(changed=True, msg="changing service enablement") # Create a temporary file next to the current rc.conf (so we stay on the same filesystem). # This way the replacement operation is atomic. rcconf_dir = os.path.dirname(self.rcconf_file) rcconf_base = os.path.basename(self.rcconf_file) (TMP_RCCONF, tmp_rcconf_file) = tempfile.mkstemp(dir=rcconf_dir, prefix="%s-" % rcconf_base) # Write out the contents of the list into our temporary file. 
for rcline in new_rc_conf: os.write(TMP_RCCONF, rcline.encode()) # Close temporary file. os.close(TMP_RCCONF) # Replace previous rc.conf. self.module.atomic_move(tmp_rcconf_file, self.rcconf_file) class LinuxService(Service): """ This is the Linux Service manipulation class - it is currently supporting a mixture of binaries and init scripts for controlling services started at boot, as well as for controlling the current state. """ platform = 'Linux' distribution = None def get_service_tools(self): paths = ['/sbin', '/usr/sbin', '/bin', '/usr/bin'] binaries = ['service', 'chkconfig', 'update-rc.d', 'rc-service', 'rc-update', 'initctl', 'systemctl', 'start', 'stop', 'restart', 'insserv'] initpaths = ['/etc/init.d'] location = dict() for binary in binaries: location[binary] = self.module.get_bin_path(binary, opt_dirs=paths) for initdir in initpaths: initscript = "%s/%s" % (initdir, self.name) if os.path.isfile(initscript): self.svc_initscript = initscript def check_systemd(): # tools must be installed if location.get('systemctl', False): # this should show if systemd is the boot init system # these mirror systemd's own sd_boot test http://www.freedesktop.org/software/systemd/man/sd_booted.html for canary in ["/run/systemd/system/", "/dev/.run/systemd/", "/dev/.systemd/"]: if os.path.exists(canary): return True # If all else fails, check if init is the systemd command, using comm as cmdline could be symlink try: f = open('/proc/1/comm', 'r') except IOError: # If comm doesn't exist, old kernel, no systemd return False for line in f: if 'systemd' in line: return True return False # Locate a tool to enable/disable a service if check_systemd(): # service is managed by systemd self.__systemd_unit = self.name self.svc_cmd = location['systemctl'] self.enable_cmd = location['systemctl'] elif location.get('initctl', False) and os.path.exists("/etc/init/%s.conf" % self.name): # service is managed by upstart self.enable_cmd = location['initctl'] # set the upstart version based on the output of 'initctl version' self.upstart_version = LooseVersion('0.0.0') try: version_re = re.compile(r'\(upstart (.*)\)') rc, stdout, stderr = self.module.run_command('%s version' % location['initctl']) if rc == 0: res = version_re.search(stdout) if res: self.upstart_version = LooseVersion(res.groups()[0]) except Exception: pass # we'll use the default of 0.0.0 self.svc_cmd = location['initctl'] elif location.get('rc-service', False): # service is managed by OpenRC self.svc_cmd = location['rc-service'] self.enable_cmd = location['rc-update'] return # already have service start/stop tool too! 
elif self.svc_initscript: # service is managed by with SysV init scripts if location.get('update-rc.d', False): # and uses update-rc.d self.enable_cmd = location['update-rc.d'] elif location.get('insserv', None): # and uses insserv self.enable_cmd = location['insserv'] elif location.get('chkconfig', False): # and uses chkconfig self.enable_cmd = location['chkconfig'] if self.enable_cmd is None: fail_if_missing(self.module, False, self.name, msg='host') # If no service control tool selected yet, try to see if 'service' is available if self.svc_cmd is None and location.get('service', False): self.svc_cmd = location['service'] # couldn't find anything yet if self.svc_cmd is None and not self.svc_initscript: self.module.fail_json(msg='cannot find \'service\' binary or init script for service, possible typo in service name?, aborting') if location.get('initctl', False): self.svc_initctl = location['initctl'] def get_systemd_service_enabled(self): def sysv_exists(name): script = '/etc/init.d/' + name return os.access(script, os.X_OK) def sysv_is_enabled(name): return bool(glob.glob('/etc/rc?.d/S??' + name)) service_name = self.__systemd_unit (rc, out, err) = self.execute_command("%s is-enabled %s" % (self.enable_cmd, service_name,)) if rc == 0: return True elif out.startswith('disabled'): return False elif sysv_exists(service_name): return sysv_is_enabled(service_name) else: return False def get_systemd_status_dict(self): # Check status first as show will not fail if service does not exist (rc, out, err) = self.execute_command("%s show '%s'" % (self.enable_cmd, self.__systemd_unit,)) if rc != 0: self.module.fail_json(msg='failure %d running systemctl show for %r: %s' % (rc, self.__systemd_unit, err)) elif 'LoadState=not-found' in out: self.module.fail_json(msg='systemd could not find the requested service "%r": %s' % (self.__systemd_unit, err)) key = None value_buffer = [] status_dict = {} for line in out.splitlines(): if '=' in line: if not key: key, value = line.split('=', 1) # systemd fields that are shell commands can be multi-line # We take a value that begins with a "{" as the start of # a shell command and a line that ends with "}" as the end of # the command if value.lstrip().startswith('{'): if value.rstrip().endswith('}'): status_dict[key] = value key = None else: value_buffer.append(value) else: status_dict[key] = value key = None else: if line.rstrip().endswith('}'): status_dict[key] = '\n'.join(value_buffer) key = None else: value_buffer.append(value) else: value_buffer.append(value) return status_dict def get_systemd_service_status(self): d = self.get_systemd_status_dict() if d.get('ActiveState') == 'active': # run-once services (for which a single successful exit indicates # that they are running as designed) should not be restarted here. # Thus, we are not checking d['SubState']. self.running = True self.crashed = False elif d.get('ActiveState') == 'failed': self.running = False self.crashed = True elif d.get('ActiveState') is None: self.module.fail_json(msg='No ActiveState value in systemctl show output for %r' % (self.__systemd_unit,)) else: self.running = False self.crashed = False return self.running def get_service_status(self): if self.svc_cmd and self.svc_cmd.endswith('systemctl'): return self.get_systemd_service_status() self.action = "status" rc, status_stdout, status_stderr = self.service_control() # if we have decided the service is managed by upstart, we check for some additional output... 
if self.svc_initctl and self.running is None: # check the job status by upstart response initctl_rc, initctl_status_stdout, initctl_status_stderr = self.execute_command("%s status %s %s" % (self.svc_initctl, self.name, self.arguments)) if "stop/waiting" in initctl_status_stdout: self.running = False elif "start/running" in initctl_status_stdout: self.running = True if self.svc_cmd and self.svc_cmd.endswith("rc-service") and self.running is None: openrc_rc, openrc_status_stdout, openrc_status_stderr = self.execute_command("%s %s status" % (self.svc_cmd, self.name)) self.running = "started" in openrc_status_stdout self.crashed = "crashed" in openrc_status_stderr # Prefer a non-zero return code. For reference, see: # http://refspecs.linuxbase.org/LSB_4.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html if self.running is None and rc in [1, 2, 3, 4, 69]: self.running = False # if the job status is still not known check it by status output keywords # Only check keywords if there's only one line of output (some init # scripts will output verbosely in case of error and those can emit # keywords that are picked up as false positives if self.running is None and status_stdout.count('\n') <= 1: # first transform the status output that could irritate keyword matching cleanout = status_stdout.lower().replace(self.name.lower(), '') if "stop" in cleanout: self.running = False elif "run" in cleanout: self.running = not ("not " in cleanout) elif "start" in cleanout and "not " not in cleanout: self.running = True elif 'could not access pid file' in cleanout: self.running = False elif 'is dead and pid file exists' in cleanout: self.running = False elif 'dead but subsys locked' in cleanout: self.running = False elif 'dead but pid file exists' in cleanout: self.running = False # if the job status is still not known and we got a zero for the # return code, assume here that the service is running if self.running is None and rc == 0: self.running = True # if the job status is still not known check it by special conditions if self.running is None: if self.name == 'iptables' and "ACCEPT" in status_stdout: # iptables status command output is lame # TODO: lookup if we can use a return code for this instead? 
self.running = True return self.running def service_enable(self): if self.enable_cmd is None: self.module.fail_json(msg='cannot detect command to enable service %s, typo or init system potentially unknown' % self.name) self.changed = True action = None # # Upstart's initctl # if self.enable_cmd.endswith("initctl"): def write_to_override_file(file_name, file_contents, ): override_file = open(file_name, 'w') override_file.write(file_contents) override_file.close() initpath = '/etc/init' if self.upstart_version >= LooseVersion('0.6.7'): manreg = re.compile(r'^manual\s*$', re.M | re.I) config_line = 'manual\n' else: manreg = re.compile(r'^start on manual\s*$', re.M | re.I) config_line = 'start on manual\n' conf_file_name = "%s/%s.conf" % (initpath, self.name) override_file_name = "%s/%s.override" % (initpath, self.name) # Check to see if files contain the manual line in .conf and fail if True with open(conf_file_name) as conf_file_fh: conf_file_content = conf_file_fh.read() if manreg.search(conf_file_content): self.module.fail_json(msg="manual stanza not supported in a .conf file") self.changed = False if os.path.exists(override_file_name): with open(override_file_name) as override_fh: override_file_contents = override_fh.read() # Remove manual stanza if present and service enabled if self.enable and manreg.search(override_file_contents): self.changed = True override_state = manreg.sub('', override_file_contents) # Add manual stanza if not present and service disabled elif not (self.enable) and not (manreg.search(override_file_contents)): self.changed = True override_state = '\n'.join((override_file_contents, config_line)) # service already in desired state else: pass # Add file with manual stanza if service disabled elif not (self.enable): self.changed = True override_state = config_line else: # service already in desired state pass if self.module.check_mode: self.module.exit_json(changed=self.changed) # The initctl method of enabling and disabling services is much # different than for the other service methods. 
So actually # committing the change is done in this conditional and then we # skip the boilerplate at the bottom of the method if self.changed: try: write_to_override_file(override_file_name, override_state) except Exception: self.module.fail_json(msg='Could not modify override file') return # # SysV's chkconfig # if self.enable_cmd.endswith("chkconfig"): if self.enable: action = 'on' else: action = 'off' (rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name)) if 'chkconfig --add %s' % self.name in err: self.execute_command("%s --add %s" % (self.enable_cmd, self.name)) (rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name)) if self.name not in out: self.module.fail_json(msg="service %s does not support chkconfig" % self.name) # TODO: look back on why this is here # state = out.split()[-1] # Check if we're already in the correct state if "3:%s" % action in out and "5:%s" % action in out: self.changed = False return # # Systemd's systemctl # if self.enable_cmd.endswith("systemctl"): if self.enable: action = 'enable' else: action = 'disable' # Check if we're already in the correct state service_enabled = self.get_systemd_service_enabled() # self.changed should already be true if self.enable == service_enabled: self.changed = False return # # OpenRC's rc-update # if self.enable_cmd.endswith("rc-update"): if self.enable: action = 'add' else: action = 'delete' (rc, out, err) = self.execute_command("%s show" % self.enable_cmd) for line in out.splitlines(): service_name, runlevels = line.split('|') service_name = service_name.strip() if service_name != self.name: continue runlevels = re.split(r'\s+', runlevels) # service already enabled for the runlevel if self.enable and self.runlevel in runlevels: self.changed = False # service already disabled for the runlevel elif not self.enable and self.runlevel not in runlevels: self.changed = False break else: # service already disabled altogether if not self.enable: self.changed = False if not self.changed: return # # update-rc.d style # if self.enable_cmd.endswith("update-rc.d"): enabled = False slinks = glob.glob('/etc/rc?.d/S??' + self.name) if slinks: enabled = True if self.enable != enabled: self.changed = True if self.enable: action = 'enable' klinks = glob.glob('/etc/rc?.d/K??' 
+ self.name) if not klinks: if not self.module.check_mode: (rc, out, err) = self.execute_command("%s %s defaults" % (self.enable_cmd, self.name)) if rc != 0: if err: self.module.fail_json(msg=err) else: self.module.fail_json(msg=out) % (self.enable_cmd, self.name, action) else: action = 'disable' if not self.module.check_mode: (rc, out, err) = self.execute_command("%s %s %s" % (self.enable_cmd, self.name, action)) if rc != 0: if err: self.module.fail_json(msg=err) else: self.module.fail_json(msg=out) % (self.enable_cmd, self.name, action) else: self.changed = False return # # insserv (Debian <=7, SLES, others) # if self.enable_cmd.endswith("insserv"): if self.enable: (rc, out, err) = self.execute_command("%s -n -v %s" % (self.enable_cmd, self.name)) else: (rc, out, err) = self.execute_command("%s -n -r -v %s" % (self.enable_cmd, self.name)) self.changed = False for line in err.splitlines(): if self.enable and line.find('enable service') != -1: self.changed = True break if not self.enable and line.find('remove service') != -1: self.changed = True break if self.module.check_mode: self.module.exit_json(changed=self.changed) if not self.changed: return if self.enable: (rc, out, err) = self.execute_command("%s %s" % (self.enable_cmd, self.name)) if (rc != 0) or (err != ''): self.module.fail_json(msg=("Failed to install service. rc: %s, out: %s, err: %s" % (rc, out, err))) return (rc, out, err) else: (rc, out, err) = self.execute_command("%s -r %s" % (self.enable_cmd, self.name)) if (rc != 0) or (err != ''): self.module.fail_json(msg=("Failed to remove service. rc: %s, out: %s, err: %s" % (rc, out, err))) return (rc, out, err) # # If we've gotten to the end, the service needs to be updated # self.changed = True # we change argument order depending on real binary used: # rc-update and systemctl need the argument order reversed if self.enable_cmd.endswith("rc-update"): args = (self.enable_cmd, action, self.name + " " + self.runlevel) elif self.enable_cmd.endswith("systemctl"): args = (self.enable_cmd, action, self.__systemd_unit) else: args = (self.enable_cmd, self.name, action) if self.module.check_mode: self.module.exit_json(changed=self.changed) (rc, out, err) = self.execute_command("%s %s %s" % args) if rc != 0: if err: self.module.fail_json(msg="Error when trying to %s %s: rc=%s %s" % (action, self.name, rc, err)) else: self.module.fail_json(msg="Failure for %s %s: rc=%s %s" % (action, self.name, rc, out)) return (rc, out, err) def service_control(self): # Decide what command to run svc_cmd = '' arguments = self.arguments if self.svc_cmd: if not self.svc_cmd.endswith("systemctl"): if self.svc_cmd.endswith("initctl"): # initctl commands take the form <cmd> <action> <name> svc_cmd = self.svc_cmd arguments = "%s %s" % (self.name, arguments) else: # SysV and OpenRC take the form <cmd> <name> <action> svc_cmd = "%s %s" % (self.svc_cmd, self.name) else: # systemd commands take the form <cmd> <action> <name> svc_cmd = self.svc_cmd arguments = "%s %s" % (self.__systemd_unit, arguments) elif self.svc_cmd is None and self.svc_initscript: # upstart svc_cmd = "%s" % self.svc_initscript # In OpenRC, if a service crashed, we need to reset its status to # stopped with the zap command, before we can start it back. 
if self.svc_cmd and self.svc_cmd.endswith('rc-service') and self.action == 'start' and self.crashed: self.execute_command("%s zap" % svc_cmd, daemonize=True) if self.action != "restart": if svc_cmd != '': # upstart or systemd or OpenRC rc_state, stdout, stderr = self.execute_command("%s %s %s" % (svc_cmd, self.action, arguments), daemonize=True) else: # SysV rc_state, stdout, stderr = self.execute_command("%s %s %s" % (self.action, self.name, arguments), daemonize=True) elif self.svc_cmd and self.svc_cmd.endswith('rc-service'): # All services in OpenRC support restart. rc_state, stdout, stderr = self.execute_command("%s %s %s" % (svc_cmd, self.action, arguments), daemonize=True) else: # In other systems, not all services support restart. Do it the hard way. if svc_cmd != '': # upstart or systemd rc1, stdout1, stderr1 = self.execute_command("%s %s %s" % (svc_cmd, 'stop', arguments), daemonize=True) else: # SysV rc1, stdout1, stderr1 = self.execute_command("%s %s %s" % ('stop', self.name, arguments), daemonize=True) if self.sleep: time.sleep(self.sleep) if svc_cmd != '': # upstart or systemd rc2, stdout2, stderr2 = self.execute_command("%s %s %s" % (svc_cmd, 'start', arguments), daemonize=True) else: # SysV rc2, stdout2, stderr2 = self.execute_command("%s %s %s" % ('start', self.name, arguments), daemonize=True) # merge return information if rc1 != 0 and rc2 == 0: rc_state = rc2 stdout = stdout2 stderr = stderr2 else: rc_state = rc1 + rc2 stdout = stdout1 + stdout2 stderr = stderr1 + stderr2 return (rc_state, stdout, stderr) class FreeBsdService(Service): """ This is the FreeBSD Service manipulation class - it uses the /etc/rc.conf file for controlling services started at boot and the 'service' binary to check status and perform direct service manipulation. """ platform = 'FreeBSD' distribution = None def get_service_tools(self): self.svc_cmd = self.module.get_bin_path('service', True) if not self.svc_cmd: self.module.fail_json(msg='unable to find service binary') self.sysrc_cmd = self.module.get_bin_path('sysrc') def get_service_status(self): rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, 'onestatus', self.arguments)) if self.name == "pf": self.running = "Enabled" in stdout else: if rc == 1: self.running = False elif rc == 0: self.running = True def service_enable(self): if self.enable: self.rcconf_value = "YES" else: self.rcconf_value = "NO" rcfiles = ['/etc/rc.conf', '/etc/rc.conf.local', '/usr/local/etc/rc.conf'] for rcfile in rcfiles: if os.path.isfile(rcfile): self.rcconf_file = rcfile rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, 'rcvar', self.arguments)) try: rcvars = shlex.split(stdout, comments=True) except Exception: # TODO: add a warning to the output with the failure pass if not rcvars: self.module.fail_json(msg="unable to determine rcvar", stdout=stdout, stderr=stderr) # In rare cases, i.e. sendmail, rcvar can return several key=value pairs # Usually there is just one, however. In other rare cases, i.e. uwsgi, # rcvar can return extra uncommented data that is not at all related to # the rcvar. We will just take the first key=value pair we come across # and hope for the best. 
for rcvar in rcvars: if '=' in rcvar: self.rcconf_key, default_rcconf_value = rcvar.split('=', 1) break if self.rcconf_key is None: self.module.fail_json(msg="unable to determine rcvar", stdout=stdout, stderr=stderr) if self.sysrc_cmd: # FreeBSD >= 9.2 rc, current_rcconf_value, stderr = self.execute_command("%s -n %s" % (self.sysrc_cmd, self.rcconf_key)) # it can happen that rcvar is not set (case of a system coming from the ports collection) # so we will fallback on the default if rc != 0: current_rcconf_value = default_rcconf_value if current_rcconf_value.strip().upper() != self.rcconf_value: self.changed = True if self.module.check_mode: self.module.exit_json(changed=True, msg="changing service enablement") rc, change_stdout, change_stderr = self.execute_command("%s %s=\"%s\"" % (self.sysrc_cmd, self.rcconf_key, self.rcconf_value)) if rc != 0: self.module.fail_json(msg="unable to set rcvar using sysrc", stdout=change_stdout, stderr=change_stderr) # sysrc does not exit with code 1 on permission error => validate successful change using service(8) rc, check_stdout, check_stderr = self.execute_command("%s %s %s" % (self.svc_cmd, self.name, "enabled")) if self.enable != (rc == 0): # rc = 0 indicates enabled service, rc = 1 indicates disabled service self.module.fail_json(msg="unable to set rcvar: sysrc did not change value", stdout=change_stdout, stderr=change_stderr) else: self.changed = False else: # Legacy (FreeBSD < 9.2) try: return self.service_enable_rcconf() except Exception: self.module.fail_json(msg='unable to set rcvar') def service_control(self): if self.action == "start": self.action = "onestart" if self.action == "stop": self.action = "onestop" if self.action == "reload": self.action = "onereload" ret = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, self.action, self.arguments)) if self.sleep: time.sleep(self.sleep) return ret class DragonFlyBsdService(FreeBsdService): """ This is the DragonFly BSD Service manipulation class - it uses the /etc/rc.conf file for controlling services started at boot and the 'service' binary to check status and perform direct service manipulation. """ platform = 'DragonFly' distribution = None def service_enable(self): if self.enable: self.rcconf_value = "YES" else: self.rcconf_value = "NO" rcfiles = ['/etc/rc.conf'] # Overkill? for rcfile in rcfiles: if os.path.isfile(rcfile): self.rcconf_file = rcfile self.rcconf_key = "%s" % self.name.replace("-", "_") return self.service_enable_rcconf() class OpenBsdService(Service): """ This is the OpenBSD Service manipulation class - it uses rcctl(8) or /etc/rc.d scripts for service control. Enabling a service is only supported if rcctl is present. 
""" platform = 'OpenBSD' distribution = None def get_service_tools(self): self.enable_cmd = self.module.get_bin_path('rcctl') if self.enable_cmd: self.svc_cmd = self.enable_cmd else: rcdir = '/etc/rc.d' rc_script = "%s/%s" % (rcdir, self.name) if os.path.isfile(rc_script): self.svc_cmd = rc_script if not self.svc_cmd: self.module.fail_json(msg='unable to find svc_cmd') def get_service_status(self): if self.enable_cmd: rc, stdout, stderr = self.execute_command("%s %s %s" % (self.svc_cmd, 'check', self.name)) else: rc, stdout, stderr = self.execute_command("%s %s" % (self.svc_cmd, 'check')) if stderr: self.module.fail_json(msg=stderr) if rc == 1: self.running = False elif rc == 0: self.running = True def service_control(self): if self.enable_cmd: return self.execute_command("%s -f %s %s" % (self.svc_cmd, self.action, self.name), daemonize=True) else: return self.execute_command("%s -f %s" % (self.svc_cmd, self.action)) def service_enable(self): if not self.enable_cmd: return super(OpenBsdService, self).service_enable() rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.enable_cmd, 'getdef', self.name, 'flags')) if stderr: self.module.fail_json(msg=stderr) getdef_string = stdout.rstrip() # Depending on the service the string returned from 'getdef' may be # either a set of flags or the boolean YES/NO if getdef_string == "YES" or getdef_string == "NO": default_flags = '' else: default_flags = getdef_string rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.enable_cmd, 'get', self.name, 'flags')) if stderr: self.module.fail_json(msg=stderr) get_string = stdout.rstrip() # Depending on the service the string returned from 'get' may be # either a set of flags or the boolean YES/NO if get_string == "YES" or get_string == "NO": current_flags = '' else: current_flags = get_string # If there are arguments from the user we use these as flags unless # they are already set. if self.arguments and self.arguments != current_flags: changed_flags = self.arguments # If the user has not supplied any arguments and the current flags # differ from the default we reset them. elif not self.arguments and current_flags != default_flags: changed_flags = ' ' # Otherwise there is no need to modify flags. 
else: changed_flags = '' rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.enable_cmd, 'get', self.name, 'status')) if self.enable: if rc == 0 and not changed_flags: return if rc != 0: status_action = "set %s status on" % (self.name) else: status_action = '' if changed_flags: flags_action = "set %s flags %s" % (self.name, changed_flags) else: flags_action = '' else: if rc == 1: return status_action = "set %s status off" % self.name flags_action = '' # Verify state assumption if not status_action and not flags_action: self.module.fail_json(msg="neither status_action or status_flags is set, this should never happen") if self.module.check_mode: self.module.exit_json(changed=True, msg="changing service enablement") status_modified = 0 if status_action: rc, stdout, stderr = self.execute_command("%s %s" % (self.enable_cmd, status_action)) if rc != 0: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg="rcctl failed to modify service status") status_modified = 1 if flags_action: rc, stdout, stderr = self.execute_command("%s %s" % (self.enable_cmd, flags_action)) if rc != 0: if stderr: if status_modified: error_message = "rcctl modified service status but failed to set flags: " + stderr else: error_message = stderr else: if status_modified: error_message = "rcctl modified service status but failed to set flags" else: error_message = "rcctl failed to modify service flags" self.module.fail_json(msg=error_message) self.changed = True class NetBsdService(Service): """ This is the NetBSD Service manipulation class - it uses the /etc/rc.conf file for controlling services started at boot, check status and perform direct service manipulation. Init scripts in /etc/rc.d are used for controlling services (start/stop) as well as for controlling the current state. """ platform = 'NetBSD' distribution = None def get_service_tools(self): initpaths = ['/etc/rc.d'] # better: $rc_directories - how to get in here? Run: sh -c '. /etc/rc.conf ; echo $rc_directories' for initdir in initpaths: initscript = "%s/%s" % (initdir, self.name) if os.path.isfile(initscript): self.svc_initscript = initscript if not self.svc_initscript: self.module.fail_json(msg='unable to find rc.d script') def service_enable(self): if self.enable: self.rcconf_value = "YES" else: self.rcconf_value = "NO" rcfiles = ['/etc/rc.conf'] # Overkill? for rcfile in rcfiles: if os.path.isfile(rcfile): self.rcconf_file = rcfile self.rcconf_key = "%s" % self.name.replace("-", "_") return self.service_enable_rcconf() def get_service_status(self): self.svc_cmd = "%s" % self.svc_initscript rc, stdout, stderr = self.execute_command("%s %s" % (self.svc_cmd, 'onestatus')) if rc == 1: self.running = False elif rc == 0: self.running = True def service_control(self): if self.action == "start": self.action = "onestart" if self.action == "stop": self.action = "onestop" self.svc_cmd = "%s" % self.svc_initscript return self.execute_command("%s %s" % (self.svc_cmd, self.action), daemonize=True) class SunOSService(Service): """ This is the SunOS Service manipulation class - it uses the svcadm command for controlling services, and svcs command for checking status. It also tries to be smart about taking the service out of maintenance state if necessary. 
""" platform = 'SunOS' distribution = None def get_service_tools(self): self.svcs_cmd = self.module.get_bin_path('svcs', True) if not self.svcs_cmd: self.module.fail_json(msg='unable to find svcs binary') self.svcadm_cmd = self.module.get_bin_path('svcadm', True) if not self.svcadm_cmd: self.module.fail_json(msg='unable to find svcadm binary') if self.svcadm_supports_sync(): self.svcadm_sync = '-s' else: self.svcadm_sync = '' def svcadm_supports_sync(self): # Support for synchronous restart/refresh is only supported on # Oracle Solaris >= 11.2 for line in open('/etc/release', 'r').readlines(): m = re.match(r'\s+Oracle Solaris (\d+)\.(\d+).*', line.rstrip()) if m and m.groups() >= ('11', '2'): return True def get_service_status(self): status = self.get_sunos_svcs_status() # Only 'online' is considered properly running. Everything else is off # or has some sort of problem. if status == 'online': self.running = True else: self.running = False def get_sunos_svcs_status(self): rc, stdout, stderr = self.execute_command("%s %s" % (self.svcs_cmd, self.name)) if rc == 1: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) lines = stdout.rstrip("\n").split("\n") status = lines[-1].split(" ")[0] # status is one of: online, offline, degraded, disabled, maintenance, uninitialized # see man svcs(1) return status def service_enable(self): # Get current service enablement status rc, stdout, stderr = self.execute_command("%s -l %s" % (self.svcs_cmd, self.name)) if rc != 0: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) enabled = False temporary = False # look for enabled line, which could be one of: # enabled true (temporary) # enabled false (temporary) # enabled true # enabled false for line in stdout.split("\n"): if line.startswith("enabled"): if "true" in line: enabled = True if "temporary" in line: temporary = True startup_enabled = (enabled and not temporary) or (not enabled and temporary) if self.enable and startup_enabled: return elif (not self.enable) and (not startup_enabled): return if not self.module.check_mode: # Mark service as started or stopped (this will have the side effect of # actually stopping or starting the service) if self.enable: subcmd = "enable -rs" else: subcmd = "disable -s" rc, stdout, stderr = self.execute_command("%s %s %s" % (self.svcadm_cmd, subcmd, self.name)) if rc != 0: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) self.changed = True def service_control(self): status = self.get_sunos_svcs_status() # if starting or reloading, clear maintenance states if self.action in ['start', 'reload', 'restart'] and status in ['maintenance', 'degraded']: rc, stdout, stderr = self.execute_command("%s clear %s" % (self.svcadm_cmd, self.name)) if rc != 0: return rc, stdout, stderr status = self.get_sunos_svcs_status() if status in ['maintenance', 'degraded']: self.module.fail_json(msg="Failed to bring service out of %s status." 
% status) if self.action == 'start': subcmd = "enable -rst" elif self.action == 'stop': subcmd = "disable -st" elif self.action == 'reload': subcmd = "refresh %s" % (self.svcadm_sync) elif self.action == 'restart' and status == 'online': subcmd = "restart %s" % (self.svcadm_sync) elif self.action == 'restart' and status != 'online': subcmd = "enable -rst" return self.execute_command("%s %s %s" % (self.svcadm_cmd, subcmd, self.name)) class AIX(Service): """ This is the AIX Service (SRC) manipulation class - it uses lssrc, startsrc, stopsrc and refresh for service control. Enabling a service is currently not supported. Would require to add an entry in the /etc/inittab file (mkitab, chitab and rmitab commands) """ platform = 'AIX' distribution = None def get_service_tools(self): self.lssrc_cmd = self.module.get_bin_path('lssrc', True) if not self.lssrc_cmd: self.module.fail_json(msg='unable to find lssrc binary') self.startsrc_cmd = self.module.get_bin_path('startsrc', True) if not self.startsrc_cmd: self.module.fail_json(msg='unable to find startsrc binary') self.stopsrc_cmd = self.module.get_bin_path('stopsrc', True) if not self.stopsrc_cmd: self.module.fail_json(msg='unable to find stopsrc binary') self.refresh_cmd = self.module.get_bin_path('refresh', True) if not self.refresh_cmd: self.module.fail_json(msg='unable to find refresh binary') def get_service_status(self): status = self.get_aix_src_status() # Only 'active' is considered properly running. Everything else is off # or has some sort of problem. if status == 'active': self.running = True else: self.running = False def get_aix_src_status(self): # Check subsystem status rc, stdout, stderr = self.execute_command("%s -s %s" % (self.lssrc_cmd, self.name)) if rc == 1: # If check for subsystem is not ok, check if service name is a # group subsystem rc, stdout, stderr = self.execute_command("%s -g %s" % (self.lssrc_cmd, self.name)) if rc == 1: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) else: # Check all subsystem status, if one subsystem is not active # the group is considered not active. 
lines = stdout.splitlines() for state in lines[1:]: if state.split()[-1].strip() != "active": status = state.split()[-1].strip() break else: status = "active" # status is one of: active, inoperative return status else: lines = stdout.rstrip("\n").split("\n") status = lines[-1].split(" ")[-1] # status is one of: active, inoperative return status def service_control(self): # Check if service name is a subsystem of a group subsystem rc, stdout, stderr = self.execute_command("%s -a" % (self.lssrc_cmd)) if rc == 1: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) else: lines = stdout.splitlines() subsystems = [] groups = [] for line in lines[1:]: subsystem = line.split()[0].strip() group = line.split()[1].strip() subsystems.append(subsystem) if group: groups.append(group) # Define if service name parameter: # -s subsystem or -g group subsystem if self.name in subsystems: srccmd_parameter = "-s" elif self.name in groups: srccmd_parameter = "-g" if self.action == 'start': srccmd = self.startsrc_cmd elif self.action == 'stop': srccmd = self.stopsrc_cmd elif self.action == 'reload': srccmd = self.refresh_cmd elif self.action == 'restart': self.execute_command("%s %s %s" % (self.stopsrc_cmd, srccmd_parameter, self.name)) srccmd = self.startsrc_cmd if self.arguments and self.action == 'start': return self.execute_command("%s -a \"%s\" %s %s" % (srccmd, self.arguments, srccmd_parameter, self.name)) else: return self.execute_command("%s %s %s" % (srccmd, srccmd_parameter, self.name)) # =========================================== # Main control flow def main(): module = AnsibleModule( argument_spec=dict( name=dict(type='str', required=True), state=dict(type='str', choices=['started', 'stopped', 'reloaded', 'restarted']), sleep=dict(type='int'), pattern=dict(type='str'), enabled=dict(type='bool'), runlevel=dict(type='str', default='default'), arguments=dict(type='str', default='', aliases=['args']), ), supports_check_mode=True, required_one_of=[['state', 'enabled']], ) service = Service(module) module.debug('Service instantiated - platform %s' % service.platform) if service.distribution: module.debug('Service instantiated - distribution %s' % service.distribution) rc = 0 out = '' err = '' result = {} result['name'] = service.name # Find service management tools service.get_service_tools() # Enable/disable service startup at boot if requested if service.module.params['enabled'] is not None: # FIXME: ideally this should detect if we need to toggle the enablement state, though # it's unlikely the changed handler would need to fire in this case so it's a minor thing. service.service_enable() result['enabled'] = service.enable if module.params['state'] is None: # Not changing the running state, so bail out now. 
result['changed'] = service.changed module.exit_json(**result) result['state'] = service.state # Collect service status if service.pattern: service.check_ps() else: service.get_service_status() # Calculate if request will change service state service.check_service_changed() # Modify service state if necessary (rc, out, err) = service.modify_service_state() if rc != 0: if err and "Job is already running" in err: # upstart got confused, one such possibility is MySQL on Ubuntu 12.04 # where status may report it has no start/stop links and we could # not get accurate status pass else: if err: module.fail_json(msg=err) else: module.fail_json(msg=out) result['changed'] = service.changed | service.svc_change if service.module.params['enabled'] is not None: result['enabled'] = service.module.params['enabled'] if not service.module.params['state']: status = service.get_service_status() if status is None: result['state'] = 'absent' elif status is False: result['state'] = 'started' else: result['state'] = 'stopped' else: # as we may have just bounced the service the service command may not # report accurate state at this moment so just show what we ran if service.module.params['state'] in ['reloaded', 'restarted', 'started']: result['state'] = 'started' else: result['state'] = 'stopped' module.exit_json(**result) if __name__ == '__main__': main()
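The `execute_command()` method above daemonizes the service command with the classic double-fork idiom, so the spawned service survives the module process and its result still comes back over a pipe. Below is a minimal standalone sketch of that idiom, simplified from the pattern the module uses; the function name and the `echo` command are placeholders, not part of the module:

```python
# Minimal sketch of the double-fork daemonization pattern used by
# execute_command() above (assumes a POSIX system).
import json
import os
import subprocess


def run_daemonized(cmd):
    """Run cmd detached from our session; return [rc, stdout, stderr]."""
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:  # first child
        os.close(read_fd)
        os.setsid()  # become a session leader, detached from the controlling tty
        if os.fork() > 0:
            # second fork: the intermediate process exits so the grandchild is
            # re-parented to init and can never reacquire a controlling terminal
            os._exit(0)
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        blob = json.dumps([proc.returncode, out.decode(), err.decode()])
        os.write(write_fd, blob.encode())  # hand the result back over the pipe
        os._exit(0)
    os.close(write_fd)
    os.waitpid(pid, 0)  # reap the intermediate child immediately
    data = b""
    while True:  # read until every write end of the pipe is closed
        chunk = os.read(read_fd, 4096)
        if not chunk:
            break
        data += chunk
    os.close(read_fd)
    return json.loads(data.decode())


if __name__ == "__main__":
    print(run_daemonized(["echo", "hello"]))  # e.g. [0, 'hello\n', '']
```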
closed
ansible/ansible
https://github.com/ansible/ansible
74135
helpers contains deprecated call to be removed in 2.12
##### SUMMARY
helpers contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
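For context, the pattern that the `ansible-deprecated-version` sanity test flags looks roughly like this; a sketch with an illustrative message, not the actual call sites, which are the four lines of `lib/ansible/playbook/helpers.py` listed above:

```python
# Sketch of the flagged pattern: once the devel branch version reaches the
# one named in `version=`, the sanity test fails and the whole deprecated
# code path has to be deleted. The message text here is illustrative.
from ansible.utils.display import Display

display = Display()
display.deprecated(
    "this behaviour is going away",  # illustrative message, not the real one
    version='2.12',
)
```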
https://github.com/ansible/ansible/issues/74135
https://github.com/ansible/ansible/pull/74809
27f61db86b69743181529dd6ee34951b244e075e
d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
2021-04-05T20:33:57Z
python
2021-05-25T15:35:17Z
changelogs/fragments/74135-remove-include-deprecations.yml
closed
ansible/ansible
https://github.com/ansible/ansible
74135
helpers contains deprecated call to be removed in 2.12
##### SUMMARY
helpers contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
https://github.com/ansible/ansible/issues/74135
https://github.com/ansible/ansible/pull/74809
27f61db86b69743181529dd6ee34951b244e075e
d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
2021-04-05T20:33:57Z
python
2021-05-25T15:35:17Z
lib/ansible/config/base.yml
# Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) --- ALLOW_WORLD_READABLE_TMPFILES: name: Allow world-readable temporary files description: - This setting has been moved to the individual shell plugins as a plugin option :ref:`shell_plugins`. - The existing configuration settings are still accepted with the shell plugin adding additional options, like variables. - This message will be removed in 2.14. type: boolean default: False deprecated: # (kept for autodetection and removal, deprecation is irrelevant since w/o settings this can never show runtime msg) why: moved to shell plugins version: "2.14" alternatives: 'world_readable_tmp' ANSIBLE_CONNECTION_PATH: name: Path of ansible-connection script default: null description: - Specify where to look for the ansible-connection script. This location will be checked before searching $PATH. - If null, ansible will start with the same directory as the ansible script. type: path env: [{name: ANSIBLE_CONNECTION_PATH}] ini: - {key: ansible_connection_path, section: persistent_connection} yaml: {key: persistent_connection.ansible_connection_path} version_added: "2.8" ANSIBLE_COW_SELECTION: name: Cowsay filter selection default: default description: This allows you to chose a specific cowsay stencil for the banners or use 'random' to cycle through them. env: [{name: ANSIBLE_COW_SELECTION}] ini: - {key: cow_selection, section: defaults} ANSIBLE_COW_ACCEPTLIST: name: Cowsay filter acceptance list default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www'] description: White list of cowsay templates that are 'safe' to use, set to empty list if you want to enable all installed templates. env: - name: ANSIBLE_COW_WHITELIST deprecated: why: normalizing names to new standard version: "2.15" alternatives: 'ANSIBLE_COW_ACCEPTLIST' - name: ANSIBLE_COW_ACCEPTLIST version_added: '2.11' ini: - key: cow_whitelist section: defaults deprecated: why: normalizing names to new standard version: "2.15" alternatives: 'cowsay_enabled_stencils' - key: cowsay_enabled_stencils section: defaults version_added: '2.11' type: list ANSIBLE_FORCE_COLOR: name: Force color output default: False description: This option forces color mode even when running without a TTY or the "nocolor" setting is True. env: [{name: ANSIBLE_FORCE_COLOR}] ini: - {key: force_color, section: defaults} type: boolean yaml: {key: display.force_color} ANSIBLE_NOCOLOR: name: Suppress color output default: False description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information. env: - name: ANSIBLE_NOCOLOR # this is generic convention for CLI programs - name: NO_COLOR version_added: '2.11' ini: - {key: nocolor, section: defaults} type: boolean yaml: {key: display.nocolor} ANSIBLE_NOCOWS: name: Suppress cowsay output default: False description: If you have cowsay installed but want to avoid the 'cows' (why????), use this. 
env: [{name: ANSIBLE_NOCOWS}] ini: - {key: nocows, section: defaults} type: boolean yaml: {key: display.i_am_no_fun} ANSIBLE_COW_PATH: name: Set path to cowsay command default: null description: Specify a custom cowsay path or swap in your cowsay implementation of choice env: [{name: ANSIBLE_COW_PATH}] ini: - {key: cowpath, section: defaults} type: string yaml: {key: display.cowpath} ANSIBLE_PIPELINING: name: Connection pipelining default: False description: - Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server, by executing many Ansible modules without actual file transfer. - This can result in a very significant performance improvement when enabled. - "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default." - This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled. - This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all. env: - name: ANSIBLE_PIPELINING ini: - section: defaults key: pipelining - section: connection key: pipelining type: boolean ANY_ERRORS_FATAL: name: Make Task failures fatal default: False description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors. env: - name: ANSIBLE_ANY_ERRORS_FATAL ini: - section: defaults key: any_errors_fatal type: boolean yaml: {key: errors.any_task_errors_fatal} version_added: "2.4" BECOME_ALLOW_SAME_USER: name: Allow becoming the same user default: False description: This setting controls if become is skipped when remote user and become user are the same. I.E root sudo to root. env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}] ini: - {key: become_allow_same_user, section: privilege_escalation} type: boolean yaml: {key: privilege_escalation.become_allow_same_user} AGNOSTIC_BECOME_PROMPT: name: Display an agnostic become prompt default: True type: boolean description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}] ini: - {key: agnostic_become_prompt, section: privilege_escalation} yaml: {key: privilege_escalation.agnostic_become_prompt} version_added: "2.5" CACHE_PLUGIN: name: Persistent Cache plugin default: memory description: Chooses which cache plugin to use, the default 'memory' is ephemeral. 
env: [{name: ANSIBLE_CACHE_PLUGIN}] ini: - {key: fact_caching, section: defaults} yaml: {key: facts.cache.plugin} CACHE_PLUGIN_CONNECTION: name: Cache Plugin URI default: ~ description: Defines connection or path information for the cache plugin env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}] ini: - {key: fact_caching_connection, section: defaults} yaml: {key: facts.cache.uri} CACHE_PLUGIN_PREFIX: name: Cache Plugin table prefix default: ansible_facts description: Prefix to use for cache plugin files/tables env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}] ini: - {key: fact_caching_prefix, section: defaults} yaml: {key: facts.cache.prefix} CACHE_PLUGIN_TIMEOUT: name: Cache Plugin expiration timeout default: 86400 description: Expiration timeout for the cache plugin data env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}] ini: - {key: fact_caching_timeout, section: defaults} type: integer yaml: {key: facts.cache.timeout} COLLECTIONS_SCAN_SYS_PATH: name: Scan PYTHONPATH for installed collections description: A boolean to enable or disable scanning the sys.path for installed collections default: true type: boolean env: - {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH} ini: - {key: collections_scan_sys_path, section: defaults} COLLECTIONS_PATHS: name: ordered list of root paths for loading installed Ansible collections content description: > Colon separated paths in which Ansible will search for collections content. Collections must be in nested *subdirectories*, not directly in these directories. For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``, and you want to add ``my.collection`` to that directory, it must be saved as ``~/.ansible/collections/ansible_collections/my/collection``. default: ~/.ansible/collections:/usr/share/ansible/collections type: pathspec env: - name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases. - name: ANSIBLE_COLLECTIONS_PATH version_added: '2.10' ini: - key: collections_paths section: defaults - key: collections_path section: defaults version_added: '2.10' COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH: name: Defines behavior when loading a collection that does not support the current Ansible version description: - When a collection is loaded that does not support the running Ansible version (via the collection metadata key `requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore` skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution. 
  env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
  ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
  choices: [error, warning, ignore]
  default: warning
_COLOR_DEFAULTS: &color
  name: placeholder for color settings' defaults
  choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan',
            'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
  <<: *color
  name: Color for 'changed' task status
  default: yellow
  description: Defines the color to use on 'Changed' task status
  env: [{name: ANSIBLE_COLOR_CHANGED}]
  ini:
    - {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
  <<: *color
  name: "Color for ansible-console's prompt task status"
  default: white
  description: Defines the default color to use for ansible-console
  env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
  ini:
    - {key: console_prompt, section: colors}
  version_added: "2.7"
COLOR_DEBUG:
  <<: *color
  name: Color for debug statements
  default: dark gray
  description: Defines the color to use when emitting debug messages
  env: [{name: ANSIBLE_COLOR_DEBUG}]
  ini:
    - {key: debug, section: colors}
COLOR_DEPRECATE:
  <<: *color
  name: Color for deprecation messages
  default: purple
  description: Defines the color to use when emitting deprecation messages
  env: [{name: ANSIBLE_COLOR_DEPRECATE}]
  ini:
    - {key: deprecate, section: colors}
COLOR_DIFF_ADD:
  <<: *color
  name: Color for diff added display
  default: green
  description: Defines the color to use when showing added lines in diffs
  env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
  ini:
    - {key: diff_add, section: colors}
  yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
  <<: *color
  name: Color for diff lines display
  default: cyan
  description: Defines the color to use when showing diffs
  env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
  ini:
    - {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
  <<: *color
  name: Color for diff removed display
  default: red
  description: Defines the color to use when showing removed lines in diffs
  env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
  ini:
    - {key: diff_remove, section: colors}
COLOR_ERROR:
  <<: *color
  name: Color for error messages
  default: red
  description: Defines the color to use when emitting error messages
  env: [{name: ANSIBLE_COLOR_ERROR}]
  ini:
    - {key: error, section: colors}
  yaml: {key: colors.error}
COLOR_HIGHLIGHT:
  <<: *color
  name: Color for highlighting
  default: white
  description: Defines the color to use for highlighting
  env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
  ini:
    - {key: highlight, section: colors}
COLOR_OK:
  <<: *color
  name: Color for 'ok' task status
  default: green
  description: Defines the color to use when showing 'OK' task status
  env: [{name: ANSIBLE_COLOR_OK}]
  ini:
    - {key: ok, section: colors}
COLOR_SKIP:
  <<: *color
  name: Color for 'skip' task status
  default: cyan
  description: Defines the color to use when showing 'Skipped' task status
  env: [{name: ANSIBLE_COLOR_SKIP}]
  ini:
    - {key: skip, section: colors}
COLOR_UNREACHABLE:
  <<: *color
  name: Color for 'unreachable' host state
  default: bright red
  description: Defines the color to use on 'Unreachable' status
  env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
  ini:
    - {key: unreachable, section: colors}
COLOR_VERBOSE:
  <<: *color
  name: Color for verbose messages
  default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
  env: [{name: ANSIBLE_COLOR_VERBOSE}]
  ini:
    - {key: verbose, section: colors}
COLOR_WARN:
  <<: *color
  name: Color for warning messages
  default: bright purple
  description: Defines the color to use when emitting warning messages
  env: [{name: ANSIBLE_COLOR_WARN}]
  ini:
    - {key: warn, section: colors}
COVERAGE_REMOTE_OUTPUT:
  name: Sets the output directory and filename prefix to generate coverage run info.
  description:
    - Sets the output directory on the remote host to generate coverage reports to.
    - Currently only used for remote coverage on PowerShell modules.
    - This is for internal use only.
  env:
    - {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
  vars:
    - {name: _ansible_coverage_remote_output}
  type: str
  version_added: '2.9'
COVERAGE_REMOTE_PATHS:
  name: Sets the list of paths to run coverage for.
  description:
    - A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
    - Only files that match the path glob will have their coverage collected.
    - Multiple path globs can be specified and are separated by ``:``.
    - Currently only used for remote coverage on PowerShell modules.
    - This is for internal use only.
  default: '*'
  env:
    - {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
  type: str
  version_added: '2.9'
ACTION_WARNINGS:
  name: Toggle action warnings
  default: True
  description:
    - By default Ansible will issue a warning when one is received from a task action (module or action plugin)
    - These warnings can be silenced by adjusting this setting to False.
  env: [{name: ANSIBLE_ACTION_WARNINGS}]
  ini:
    - {key: action_warnings, section: defaults}
  type: boolean
  version_added: "2.5"
COMMAND_WARNINGS:
  name: Command module warnings
  default: False
  description:
    - Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
    - These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
    - As of version 2.11, this is disabled by default.
  env: [{name: ANSIBLE_COMMAND_WARNINGS}]
  ini:
    - {key: command_warnings, section: defaults}
  type: boolean
  version_added: "1.8"
  deprecated:
    why: the command warnings feature is being removed
    version: "2.14"
LOCALHOST_WARNING:
  name: Warning when using implicit inventory with only localhost
  default: True
  description:
    - By default Ansible will issue a warning when there are no hosts in the inventory.
    - These warnings can be silenced by adjusting this setting to False.
  env: [{name: ANSIBLE_LOCALHOST_WARNING}]
  ini:
    - {key: localhost_warning, section: defaults}
  type: boolean
  version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
  name: documentation fragment plugins path
  default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
  description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
  env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
  ini:
    - {key: doc_fragment_plugins, section: defaults}
  type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
  name: Action plugins path
  default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
  description: Colon separated paths in which Ansible will search for Action Plugins.
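  # Illustrative sketch only: 'pathspec' options such as this one take colon separated
  # paths, e.g. prepending a project-local plugin directory (paths are assumptions):
  #   ANSIBLE_ACTION_PLUGINS=./plugins/action:~/.ansible/plugins/action ansible-playbook site.yml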
  env: [{name: ANSIBLE_ACTION_PLUGINS}]
  ini:
    - {key: action_plugins, section: defaults}
  type: pathspec
  yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
  name: Allow unsafe lookups
  default: False
  description:
    - "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
      to return data that is not marked 'unsafe'."
    - By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
      as this could represent a security risk. This option is provided to allow for backwards-compatibility,
      however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data
      which may be run through the templating engine late.
  env: []
  ini:
    - {key: allow_unsafe_lookups, section: defaults}
  type: boolean
  version_added: "2.2.3"
DEFAULT_ASK_PASS:
  name: Ask for the login password
  default: False
  description:
    - This controls whether an Ansible playbook should prompt for a login password.
      If using SSH keys for authentication, you probably do not need to change this setting.
  env: [{name: ANSIBLE_ASK_PASS}]
  ini:
    - {key: ask_pass, section: defaults}
  type: boolean
  yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
  name: Ask for the vault password(s)
  default: False
  description:
    - This controls whether an Ansible playbook should prompt for a vault password.
  env: [{name: ANSIBLE_ASK_VAULT_PASS}]
  ini:
    - {key: ask_vault_pass, section: defaults}
  type: boolean
DEFAULT_BECOME:
  name: Enable privilege escalation (become)
  default: False
  description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
  env: [{name: ANSIBLE_BECOME}]
  ini:
    - {key: become, section: privilege_escalation}
  type: boolean
DEFAULT_BECOME_ASK_PASS:
  name: Ask for the privilege escalation (become) password
  default: False
  description: Toggle to prompt for privilege escalation password.
  env: [{name: ANSIBLE_BECOME_ASK_PASS}]
  ini:
    - {key: become_ask_pass, section: privilege_escalation}
  type: boolean
DEFAULT_BECOME_METHOD:
  name: Choose privilege escalation method
  default: 'sudo'
  description: Privilege escalation method to use when `become` is enabled.
  env: [{name: ANSIBLE_BECOME_METHOD}]
  ini:
    - {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
  name: Choose 'become' executable
  default: ~
  description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
  env: [{name: ANSIBLE_BECOME_EXE}]
  ini:
    - {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
  name: Set 'become' executable options
  default: ~
  description: Flags to pass to the privilege escalation executable.
  env: [{name: ANSIBLE_BECOME_FLAGS}]
  ini:
    - {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
  name: Become plugins path
  default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
  description: Colon separated paths in which Ansible will search for Become Plugins.
  env: [{name: ANSIBLE_BECOME_PLUGINS}]
  ini:
    - {key: become_plugins, section: defaults}
  type: pathspec
  version_added: "2.8"
DEFAULT_BECOME_USER:
  # FIXME: should really be blank and make -u passing optional depending on it
  name: Set the user you 'become' via privilege escalation
  default: root
  description: The user your login/remote user 'becomes' when using privilege escalation,
    most systems will use 'root' when no user is specified.
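  # Illustrative sketch only: the become settings above are typically configured
  # together in ansible.cfg (values are examples, not recommendations):
  #   [privilege_escalation]
  #   become = True
  #   become_method = sudo
  #   become_user = root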
  env: [{name: ANSIBLE_BECOME_USER}]
  ini:
    - {key: become_user, section: privilege_escalation}
  yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
  name: Cache Plugins Path
  default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
  description: Colon separated paths in which Ansible will search for Cache Plugins.
  env: [{name: ANSIBLE_CACHE_PLUGINS}]
  ini:
    - {key: cache_plugins, section: defaults}
  type: pathspec
CALLABLE_ACCEPT_LIST:
  name: Template 'callable' accept list
  default: []
  description: Whitelist of callable methods to be made available to template evaluation
  env:
    - name: ANSIBLE_CALLABLE_WHITELIST
      deprecated:
        why: normalizing names to new standard
        version: "2.15"
        alternatives: 'ANSIBLE_CALLABLE_ENABLED'
    - name: ANSIBLE_CALLABLE_ENABLED
      version_added: '2.11'
  ini:
    - key: callable_whitelist
      section: defaults
      deprecated:
        why: normalizing names to new standard
        version: "2.15"
        alternatives: 'callable_enabled'
    - key: callable_enabled
      section: defaults
      version_added: '2.11'
  type: list
CONTROLLER_PYTHON_WARNING:
  name: Running Older than Python 3.8 Warning
  default: True
  description: Toggle to control showing warnings related to running a Python version older than Python 3.8 on the controller
  env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}]
  ini:
    - {key: controller_python_warning, section: defaults}
  type: boolean
DEFAULT_CALLBACK_PLUGIN_PATH:
  name: Callback Plugins Path
  default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
  description: Colon separated paths in which Ansible will search for Callback Plugins.
  env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
  ini:
    - {key: callback_plugins, section: defaults}
  type: pathspec
  yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
  name: Enable callback plugins that require it.
  default: []
  description:
    - "List of enabled callbacks, not all callbacks need enabling,
      but many of those shipped with Ansible do as we don't want them activated by default."
  env:
    - name: ANSIBLE_CALLBACK_WHITELIST
      deprecated:
        why: normalizing names to new standard
        version: "2.15"
        alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
    - name: ANSIBLE_CALLBACKS_ENABLED
      version_added: '2.11'
  ini:
    - key: callback_whitelist
      section: defaults
      deprecated:
        why: normalizing names to new standard
        version: "2.15"
        alternatives: 'callbacks_enabled'
    - key: callbacks_enabled
      section: defaults
      version_added: '2.11'
  type: list
DEFAULT_CLICONF_PLUGIN_PATH:
  name: Cliconf Plugins Path
  default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
  description: Colon separated paths in which Ansible will search for Cliconf Plugins.
  env: [{name: ANSIBLE_CLICONF_PLUGINS}]
  ini:
    - {key: cliconf_plugins, section: defaults}
  type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
  name: Connection Plugins Path
  default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
  description: Colon separated paths in which Ansible will search for Connection Plugins.
  env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
  ini:
    - {key: connection_plugins, section: defaults}
  type: pathspec
  yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
  name: Debug mode
  default: False
  description:
    - "Toggles debug output in Ansible. This is *very* verbose and can hinder multiprocessing.
      Debug output can also include secret information despite no_log settings being enabled,
      which means debug mode should not be used in production."
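  # Illustrative sketch only: extra callbacks from CALLBACKS_ENABLED above can be
  # switched on via the new-style key (plugin names are examples of commonly
  # shipped callbacks):
  #   [defaults]
  #   callbacks_enabled = timer, profile_tasks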
  env: [{name: ANSIBLE_DEBUG}]
  ini:
    - {key: debug, section: defaults}
  type: boolean
DEFAULT_EXECUTABLE:
  name: Target shell executable
  default: /bin/sh
  description:
    - "This indicates the command to use to spawn a shell under, for Ansible's execution needs on a target.
      Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
  env: [{name: ANSIBLE_EXECUTABLE}]
  ini:
    - {key: executable, section: defaults}
DEFAULT_FACT_PATH:
  name: local fact path
  default: ~
  description:
    - "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering."
    - "If not set, it will fallback to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``."
    - "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module."
  env: [{name: ANSIBLE_FACT_PATH}]
  ini:
    - {key: fact_path, section: defaults}
  type: string
  yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
  name: Jinja2 Filter Plugins Path
  default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
  description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
  env: [{name: ANSIBLE_FILTER_PLUGINS}]
  ini:
    - {key: filter_plugins, section: defaults}
  type: pathspec
DEFAULT_FORCE_HANDLERS:
  name: Force handlers to run after failure
  default: False
  description:
    - This option controls if notified handlers run on a host even if a failure occurs on that host.
    - When false, the handlers will not run if a failure has occurred on a host.
    - This can also be set per play or on the command line. See Handlers and Failure for more details.
  env: [{name: ANSIBLE_FORCE_HANDLERS}]
  ini:
    - {key: force_handlers, section: defaults}
  type: boolean
  version_added: "1.9.1"
DEFAULT_FORKS:
  name: Number of task forks
  default: 5
  description: Maximum number of forks Ansible will use to execute tasks on target hosts.
  env: [{name: ANSIBLE_FORKS}]
  ini:
    - {key: forks, section: defaults}
  type: integer
DEFAULT_GATHERING:
  name: Gathering behaviour
  default: 'implicit'
  description:
    - This setting controls the default policy of fact gathering (facts discovered about remote systems).
    - "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
    - "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
    - "The 'smart' value means each new host that has no facts discovered will be scanned,
      but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
    - "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
  env: [{name: ANSIBLE_GATHERING}]
  ini:
    - key: gathering
      section: defaults
  version_added: "1.6"
  choices: ['smart', 'explicit', 'implicit']
DEFAULT_GATHER_SUBSET:
  name: Gather facts subset
  default: ['all']
  description:
    - Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering.
      See the module documentation for specifics.
    - "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
  env: [{name: ANSIBLE_GATHER_SUBSET}]
  ini:
    - key: gather_subset
      section: defaults
  version_added: "2.1"
  type: list
DEFAULT_GATHER_TIMEOUT:
  name: Gather facts timeout
  default: 10
  description:
    - Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks." env: [{name: ANSIBLE_GATHER_TIMEOUT}] ini: - {key: gather_timeout, section: defaults} type: integer yaml: {key: defaults.gather_timeout} DEFAULT_HANDLER_INCLUDES_STATIC: name: Make handler M(ansible.builtin.include) static default: False description: - "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'." env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}] ini: - {key: handler_includes_static, section: defaults} type: boolean deprecated: why: include itself is deprecated and this setting will not matter in the future version: "2.12" alternatives: none as its already built into the decision between include_tasks and import_tasks DEFAULT_HASH_BEHAVIOUR: name: Hash merge behaviour default: replace type: string choices: replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins). merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources. description: - This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible. - This does not affect variables whose values are scalars (integers, strings) or arrays. - "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable, leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it." - We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much complexity has been introduced into the data structures and plays. - For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars`` that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope, but the setting itself affects all sources and makes debugging even harder. - All playbooks and roles in the official examples repos assume the default for this setting. - Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables. For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file. - The Ansible project recommends you **avoid ``merge`` for new projects.** - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it. New projects should **avoid 'merge'**. env: [{name: ANSIBLE_HASH_BEHAVIOUR}] ini: - {key: hash_behaviour, section: defaults} DEFAULT_HOST_LIST: name: Inventory Source default: /etc/ansible/hosts description: Comma separated list of Ansible inventory sources env: - name: ANSIBLE_INVENTORY expand_relative_paths: True ini: - key: inventory section: defaults type: pathlist yaml: {key: defaults.inventory} DEFAULT_HTTPAPI_PLUGIN_PATH: name: HttpApi Plugins Path default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi description: Colon separated paths in which Ansible will search for HttpApi Plugins. 
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}] ini: - {key: httpapi_plugins, section: defaults} type: pathspec DEFAULT_INTERNAL_POLL_INTERVAL: name: Internal poll interval default: 0.001 env: [] ini: - {key: internal_poll_interval, section: defaults} type: float version_added: "2.2" description: - This sets the interval (in seconds) of Ansible internal processes polling each other. Lower values improve performance with large playbooks at the expense of extra CPU load. Higher values are more suitable for Ansible usage in automation scenarios, when UI responsiveness is not required but CPU usage might be a concern. - "The default corresponds to the value hardcoded in Ansible <= 2.1" DEFAULT_INVENTORY_PLUGIN_PATH: name: Inventory Plugins Path default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory description: Colon separated paths in which Ansible will search for Inventory Plugins. env: [{name: ANSIBLE_INVENTORY_PLUGINS}] ini: - {key: inventory_plugins, section: defaults} type: pathspec DEFAULT_JINJA2_EXTENSIONS: name: Enabled Jinja2 extensions default: [] description: - This is a developer-specific feature that allows enabling additional Jinja2 extensions. - "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)" env: [{name: ANSIBLE_JINJA2_EXTENSIONS}] ini: - {key: jinja2_extensions, section: defaults} DEFAULT_JINJA2_NATIVE: name: Use Jinja2's NativeEnvironment for templating default: False description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10. env: [{name: ANSIBLE_JINJA2_NATIVE}] ini: - {key: jinja2_native, section: defaults} type: boolean yaml: {key: jinja2_native} version_added: 2.7 DEFAULT_KEEP_REMOTE_FILES: name: Keep remote files default: False description: - Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote. - If this option is enabled it will disable ``ANSIBLE_PIPELINING``. env: [{name: ANSIBLE_KEEP_REMOTE_FILES}] ini: - {key: keep_remote_files, section: defaults} type: boolean DEFAULT_LIBVIRT_LXC_NOSECLABEL: # TODO: move to plugin name: No security label on Lxc default: False description: - "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh. This is necessary when running on systems which do not have SELinux." env: - name: LIBVIRT_LXC_NOSECLABEL deprecated: why: environment variables without ``ANSIBLE_`` prefix are deprecated version: "2.12" alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable - name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL ini: - {key: libvirt_lxc_noseclabel, section: selinux} type: boolean version_added: "2.1" DEFAULT_LOAD_CALLBACK_PLUGINS: name: Load callbacks for adhoc default: False description: - Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for ``ansible-playbook``. env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}] ini: - {key: bin_ansible_callbacks, section: defaults} type: boolean version_added: "1.8" DEFAULT_LOCAL_TMP: name: Controller temporary directory default: ~/.ansible/tmp description: Temporary directory for Ansible to use on the controller. env: [{name: ANSIBLE_LOCAL_TEMP}] ini: - {key: local_tmp, section: defaults} type: tmppath DEFAULT_LOG_PATH: name: Ansible log file path default: ~ description: File to which Ansible will log on the controller. 
    When empty logging is disabled.
  env: [{name: ANSIBLE_LOG_PATH}]
  ini:
    - {key: log_path, section: defaults}
  type: path
DEFAULT_LOG_FILTER:
  name: Name filters for python logger
  default: []
  description: List of logger names to filter out of the log file
  env: [{name: ANSIBLE_LOG_FILTER}]
  ini:
    - {key: log_filter, section: defaults}
  type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
  name: Lookup Plugins Path
  description: Colon separated paths in which Ansible will search for Lookup Plugins.
  default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
  env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
  ini:
    - {key: lookup_plugins, section: defaults}
  type: pathspec
  yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
  name: Ansible managed
  default: 'Ansible managed'
  description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules.
    This is only relevant for those two modules.
  env: []
  ini:
    - {key: ansible_managed, section: defaults}
  yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
  name: Adhoc default arguments
  default: ~
  description:
    - This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
  env: [{name: ANSIBLE_MODULE_ARGS}]
  ini:
    - {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
  name: Python module compression
  default: ZIP_DEFLATED
  description: Compression scheme to use when transferring Python modules to the target.
  env: []
  ini:
    - {key: module_compression, section: defaults}
# vars:
#   - name: ansible_module_compression
DEFAULT_MODULE_NAME:
  name: Default adhoc module
  default: command
  description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
  env: []
  ini:
    - {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
  name: Modules Path
  description: Colon separated paths in which Ansible will search for Modules.
  default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
  env: [{name: ANSIBLE_LIBRARY}]
  ini:
    - {key: library, section: defaults}
  type: pathspec
DEFAULT_MODULE_UTILS_PATH:
  name: Module Utils Path
  description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
  default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
  env: [{name: ANSIBLE_MODULE_UTILS}]
  ini:
    - {key: module_utils, section: defaults}
  type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
  name: Netconf Plugins Path
  default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
  description: Colon separated paths in which Ansible will search for Netconf Plugins.
  env: [{name: ANSIBLE_NETCONF_PLUGINS}]
  ini:
    - {key: netconf_plugins, section: defaults}
  type: pathspec
DEFAULT_NO_LOG:
  name: No log
  default: False
  description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
  env: [{name: ANSIBLE_NO_LOG}]
  ini:
    - {key: no_log, section: defaults}
  type: boolean
DEFAULT_NO_TARGET_SYSLOG:
  name: No syslog on target
  default: False
  description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will
      prevent newer style PowerShell modules from writing to the event log.
  env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
  ini:
    - {key: no_target_syslog, section: defaults}
  vars:
    - name: ansible_no_target_syslog
      version_added: '2.10'
  type: boolean
  yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
  name: Represent a null
  default: ~
  description: What templating should return as a 'null' value.
    When not set it will let Jinja2 decide.
  env: [{name: ANSIBLE_NULL_REPRESENTATION}]
  ini:
    - {key: null_representation, section: defaults}
  type: none
DEFAULT_POLL_INTERVAL:
  name: Async poll interval
  default: 15
  description:
    - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
      this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
      The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
      providing a quick turnaround when something may have completed.
  env: [{name: ANSIBLE_POLL_INTERVAL}]
  ini:
    - {key: poll_interval, section: defaults}
  type: integer
DEFAULT_PRIVATE_KEY_FILE:
  name: Private key file
  default: ~
  description:
    - For connections that use a certificate or key file to authenticate, rather than an agent or passwords,
      you can set the default value here to avoid re-specifying --private-key with every invocation.
  env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
  ini:
    - {key: private_key_file, section: defaults}
  type: path
DEFAULT_PRIVATE_ROLE_VARS:
  name: Private role variables
  default: False
  description:
    - Makes role variables inaccessible from other roles.
    - This was introduced as a way to reset role variables to default values if a role is used more than once in a playbook.
  env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
  ini:
    - {key: private_role_vars, section: defaults}
  type: boolean
  yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
  name: Remote port
  default: ~
  description: Port to use in remote connections, when blank it will use the connection plugin default.
  env: [{name: ANSIBLE_REMOTE_PORT}]
  ini:
    - {key: remote_port, section: defaults}
  type: integer
  yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
  name: Login/Remote User
  description:
    - Sets the login user for the target machines
    - "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
  env: [{name: ANSIBLE_REMOTE_USER}]
  ini:
    - {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
  name: Roles path
  default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
  description: Colon separated paths in which Ansible will search for Roles.
  env: [{name: ANSIBLE_ROLES_PATH}]
  expand_relative_paths: True
  ini:
    - {key: roles_path, section: defaults}
  type: pathspec
  yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
  name: Problematic file systems
  default: fuse, nfs, vboxsf, ramfs, 9p, vfat
  description:
    - "Some filesystems do not support safe operations and/or return inconsistent errors,
      this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
    - Data corruption may occur and writes are not always verified when a filesystem is in the list.
  env:
    - name: ANSIBLE_SELINUX_SPECIAL_FS
      version_added: "2.9"
  ini:
    - {key: special_context_filesystems, section: selinux}
  type: list
DEFAULT_STDOUT_CALLBACK:
  name: Main display callback plugin
  default: default
  description:
    - "Set the main callback used to display Ansible output, you can only have one at a time."
    - You can have many other callbacks, but just one can be in charge of stdout.
  env: [{name: ANSIBLE_STDOUT_CALLBACK}]
  ini:
    - {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
  name: Whether to enable the task debugger
  default: False
  description:
    - Whether or not to enable the task debugger, this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task is failed or unreachable.
      Use the debugger keyword for more flexibility.
  type: boolean
  env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
  ini:
    - {key: enable_task_debugger, section: defaults}
  version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
  name: Whether a failed task with ignore_errors=True will still invoke the debugger
  default: True
  description:
    - This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True is specified.
    - True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
  type: boolean
  env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
  ini:
    - {key: task_debugger_ignore_errors, section: defaults}
  version_added: "2.7"
DEFAULT_STRATEGY:
  name: Implied strategy
  default: 'linear'
  description: Set the default strategy used for plays.
  env: [{name: ANSIBLE_STRATEGY}]
  ini:
    - {key: strategy, section: defaults}
  version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
  name: Strategy Plugins Path
  description: Colon separated paths in which Ansible will search for Strategy Plugins.
  default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
  env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
  ini:
    - {key: strategy_plugins, section: defaults}
  type: pathspec
DEFAULT_SU:
  default: False
  description: 'Toggle the use of "su" for tasks.'
  env: [{name: ANSIBLE_SU}]
  ini:
    - {key: su, section: defaults}
  type: boolean
  yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
  name: syslog facility
  default: LOG_USER
  description: Syslog facility to use when Ansible logs to the remote target
  env: [{name: ANSIBLE_SYSLOG_FACILITY}]
  ini:
    - {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
  name: Task include static
  default: False
  description:
    - The `include` tasks can be static or dynamic, this toggles the default expected behaviour if autodetection fails
      and it is not explicitly set in the task.
  env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
  ini:
    - {key: task_includes_static, section: defaults}
  type: boolean
  version_added: "2.1"
  deprecated:
    why: include itself is deprecated and this setting will not matter in the future
    version: "2.12"
    alternatives: None, as it's already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
  name: Terminal Plugins Path
  default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
  description: Colon separated paths in which Ansible will search for Terminal Plugins.
  env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
  ini:
    - {key: terminal_plugins, section: defaults}
  type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
  name: Jinja2 Test Plugins Path
  description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
  default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
  env: [{name: ANSIBLE_TEST_PLUGINS}]
  ini:
    - {key: test_plugins, section: defaults}
  type: pathspec
DEFAULT_TIMEOUT:
  name: Connection timeout
  default: 10
  description: This is the default timeout for connection plugins to use.
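  # Illustrative sketch only: the task debugger described above can also be
  # requested per play or per task with the `debugger` keyword rather than
  # globally, e.g.
  #   - hosts: all
  #     debugger: on_failed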
env: [{name: ANSIBLE_TIMEOUT}] ini: - {key: timeout, section: defaults} type: integer DEFAULT_TRANSPORT: # note that ssh_utils refs this and needs to be updated if removed name: Connection plugin default: smart description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions" env: [{name: ANSIBLE_TRANSPORT}] ini: - {key: transport, section: defaults} DEFAULT_UNDEFINED_VAR_BEHAVIOR: name: Jinja2 fail on undefined default: True version_added: "1.3" description: - When True, this causes ansible templating to fail steps that reference variable names that are likely typoed. - "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written." env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}] ini: - {key: error_on_undefined_vars, section: defaults} type: boolean DEFAULT_VARS_PLUGIN_PATH: name: Vars Plugins Path default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars description: Colon separated paths in which Ansible will search for Vars Plugins. env: [{name: ANSIBLE_VARS_PLUGINS}] ini: - {key: vars_plugins, section: defaults} type: pathspec # TODO: unused? #DEFAULT_VAR_COMPRESSION_LEVEL: # default: 0 # description: 'TODO: write it' # env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}] # ini: # - {key: var_compression_level, section: defaults} # type: integer # yaml: {key: defaults.var_compression_level} DEFAULT_VAULT_ID_MATCH: name: Force vault id match default: False description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id' env: [{name: ANSIBLE_VAULT_ID_MATCH}] ini: - {key: vault_id_match, section: defaults} yaml: {key: defaults.vault_id_match} DEFAULT_VAULT_IDENTITY: name: Vault id label default: default description: 'The label to use for the default vault id label in cases where a vault id label is not provided' env: [{name: ANSIBLE_VAULT_IDENTITY}] ini: - {key: vault_identity, section: defaults} yaml: {key: defaults.vault_identity} DEFAULT_VAULT_ENCRYPT_IDENTITY: name: Vault id to use for encryption description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.' env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}] ini: - {key: vault_encrypt_identity, section: defaults} yaml: {key: defaults.vault_encrypt_identity} DEFAULT_VAULT_IDENTITY_LIST: name: Default vault ids default: [] description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.' env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}] ini: - {key: vault_identity_list, section: defaults} type: list yaml: {key: defaults.vault_identity_list} DEFAULT_VAULT_PASSWORD_FILE: name: Vault password file default: ~ description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id' env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}] ini: - {key: vault_password_file, section: defaults} type: path yaml: {key: defaults.vault_password_file} DEFAULT_VERBOSITY: name: Verbosity default: 0 description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line. 
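  # Illustrative sketch only: multiple vault ids (see DEFAULT_VAULT_IDENTITY_LIST
  # above) can be configured once instead of repeating --vault-id; the labels and
  # paths here are assumptions:
  #   [defaults]
  #   vault_identity_list = dev@~/.vault_pass_dev, prod@~/.vault_pass_prod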
  env: [{name: ANSIBLE_VERBOSITY}]
  ini:
    - {key: verbosity, section: defaults}
  type: integer
DEPRECATION_WARNINGS:
  name: Deprecation messages
  default: True
  description: "Toggle to control the showing of deprecation warnings"
  env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
  ini:
    - {key: deprecation_warnings, section: defaults}
  type: boolean
DEVEL_WARNING:
  name: Running devel warning
  default: True
  description: Toggle to control showing warnings related to running devel
  env: [{name: ANSIBLE_DEVEL_WARNING}]
  ini:
    - {key: devel_warning, section: defaults}
  type: boolean
DIFF_ALWAYS:
  name: Show differences
  default: False
  description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
  env: [{name: ANSIBLE_DIFF_ALWAYS}]
  ini:
    - {key: always, section: diff}
  type: bool
DIFF_CONTEXT:
  name: Difference context
  default: 3
  description: How many lines of context to show when displaying the differences between files.
  env: [{name: ANSIBLE_DIFF_CONTEXT}]
  ini:
    - {key: context, section: diff}
  type: integer
DISPLAY_ARGS_TO_STDOUT:
  name: Show task arguments
  default: False
  description:
    - "Normally ``ansible-playbook`` will print a header for each task that is run.
      These headers will contain the name: field from the task if you specified one.
      If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
      Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
      If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
    - "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
      you do not want those to be printed."
    - "If you set this to True you should be sure that you have secured your environment's stdout
      (no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
      made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
      See How do I keep secret data in my playbook? for more information."
  env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
  ini:
    - {key: display_args_to_stdout, section: defaults}
  type: boolean
  version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
  name: Show skipped results
  default: True
  description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
  env:
    - name: DISPLAY_SKIPPED_HOSTS
      deprecated:
        why: environment variables without ``ANSIBLE_`` prefix are deprecated
        version: "2.12"
        alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
    - name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
  ini:
    - {key: display_skipped_hosts, section: defaults}
  type: boolean
DOCSITE_ROOT_URL:
  name: Root docsite URL
  default: https://docs.ansible.com/ansible-core/
  description: Root docsite URL used to generate docs URLs in warning/error text;
    must be an absolute URL with valid scheme and trailing slash.
  ini:
    - {key: docsite_root_url, section: defaults}
  version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
  name: Controls ansible behaviour when finding duplicate keys in YAML.
  default: warn
  description:
    - By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by adjusting this setting to C(ignore).
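  # Illustrative sketch only: duplicate YAML keys can also be promoted to hard
  # errors instead of warnings:
  #   [defaults]
  #   duplicate_dict_key = error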
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}] ini: - {key: duplicate_dict_key, section: defaults} type: string choices: ['warn', 'error', 'ignore'] version_added: "2.9" ERROR_ON_MISSING_HANDLER: name: Missing handler error default: True description: "Toggle to allow missing handlers to become a warning instead of an error when notifying." env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}] ini: - {key: error_on_missing_handler, section: defaults} type: boolean CONNECTION_FACTS_MODULES: name: Map of connections to fact modules default: # use ansible.legacy names on unqualified facts modules to allow library/ overrides asa: ansible.legacy.asa_facts cisco.asa.asa: cisco.asa.asa_facts eos: ansible.legacy.eos_facts arista.eos.eos: arista.eos.eos_facts frr: ansible.legacy.frr_facts frr.frr.frr: frr.frr.frr_facts ios: ansible.legacy.ios_facts cisco.ios.ios: cisco.ios.ios_facts iosxr: ansible.legacy.iosxr_facts cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts junos: ansible.legacy.junos_facts junipernetworks.junos.junos: junipernetworks.junos.junos_facts nxos: ansible.legacy.nxos_facts cisco.nxos.nxos: cisco.nxos.nxos_facts vyos: ansible.legacy.vyos_facts vyos.vyos.vyos: vyos.vyos.vyos_facts exos: ansible.legacy.exos_facts extreme.exos.exos: extreme.exos.exos_facts slxos: ansible.legacy.slxos_facts extreme.slxos.slxos: extreme.slxos.slxos_facts voss: ansible.legacy.voss_facts extreme.voss.voss: extreme.voss.voss_facts ironware: ansible.legacy.ironware_facts community.network.ironware: community.network.ironware_facts description: "Which modules to run during a play's fact gathering stage based on connection" type: dict FACTS_MODULES: name: Gather Facts Modules default: - smart description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type." env: [{name: ANSIBLE_FACTS_MODULES}] ini: - {key: facts_modules, section: defaults} type: list vars: - name: ansible_facts_modules GALAXY_IGNORE_CERTS: name: Galaxy validate certs default: False description: - If set to yes, ansible-galaxy will not validate TLS certificates. This can be useful for testing against a server with a self-signed certificate. env: [{name: ANSIBLE_GALAXY_IGNORE}] ini: - {key: ignore_certs, section: galaxy} type: boolean GALAXY_ROLE_SKELETON: name: Galaxy role or collection skeleton directory description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``. env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}] ini: - {key: role_skeleton, section: galaxy} type: path GALAXY_ROLE_SKELETON_IGNORE: name: Galaxy skeleton ignore default: ["^.git$", "^.*/.git_keep$"] description: patterns of files to ignore inside a Galaxy role or collection skeleton directory env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}] ini: - {key: role_skeleton_ignore, section: galaxy} type: list # TODO: unused? #GALAXY_SCMS: # name: Galaxy SCMS # default: git, hg # description: Available galaxy source control management systems. # env: [{name: ANSIBLE_GALAXY_SCMS}] # ini: # - {key: scms, section: galaxy} # type: list GALAXY_SERVER: default: https://galaxy.ansible.com description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source." env: [{name: ANSIBLE_GALAXY_SERVER}] ini: - {key: server, section: galaxy} yaml: {key: galaxy.server} GALAXY_SERVER_LIST: description: - A list of Galaxy servers to use when installing a collection. 
    - The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
    - 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
    - Setting this config option will ignore the :ref:`galaxy_server` config option.
  env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
  ini:
    - {key: server_list, section: galaxy}
  type: list
  version_added: "2.9"
GALAXY_TOKEN_PATH:
  default: ~/.ansible/galaxy_token
  description: "Local path to galaxy access token file"
  env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
  ini:
    - {key: token_path, section: galaxy}
  type: path
  version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
  default: ~
  description:
    - Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
    - This config option controls whether the display wheel is shown or not.
    - The default is to show the display wheel if stdout has a tty.
  env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
  ini:
    - {key: display_progress, section: galaxy}
  type: bool
  version_added: "2.10"
GALAXY_CACHE_DIR:
  default: ~/.ansible/galaxy_cache
  description:
    - The directory that stores cached responses from a Galaxy server.
    - This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
    - Cache files inside this dir will be ignored if they are world writable.
  env:
    - name: ANSIBLE_GALAXY_CACHE_DIR
  ini:
    - section: galaxy
      key: cache_dir
  type: path
  version_added: '2.11'
HOST_KEY_CHECKING:
  # note: constant not in use by ssh plugin anymore
  # TODO: check non ssh connection plugins for use/migration
  name: Check host keys
  default: True
  description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
  env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
  ini:
    - {key: host_key_checking, section: defaults}
  type: boolean
HOST_PATTERN_MISMATCH:
  name: Control host pattern mismatch behaviour
  default: 'warning'
  description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it
  env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
  ini:
    - {key: host_pattern_mismatch, section: inventory}
  choices: ['warning', 'error', 'ignore']
  version_added: "2.8"
INTERPRETER_PYTHON:
  name: Python interpreter path (or automatic discovery behavior) used for module execution
  default: auto_legacy
  env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
  ini:
    - {key: interpreter_python, section: defaults}
  vars:
    - {name: ansible_python_interpreter}
  version_added: "2.8"
  description:
    - Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
      Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
      employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
      fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available.
      The fallback behavior will issue a warning that the interpreter should be set explicitly
      (since interpreters installed later may change which one is used). This warning behavior can be disabled by
      setting ``auto_silent``.
      The default value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older
      Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present
      (and issue a warning that the default behavior will change to that of ``auto`` in a future Ansible release).
INTERPRETER_PYTHON_DISTRO_MAP:
  name: Mapping of known included platform pythons for various Linux distros
  default:
    centos: &rhelish
      '6': /usr/bin/python
      '8': /usr/libexec/platform-python
      '9': /usr/bin/python3
    debian:
      '8': /usr/bin/python
      '10': /usr/bin/python3
    fedora:
      '23': /usr/bin/python3
    oracle: *rhelish
    redhat: *rhelish
    rhel: *rhelish
    ubuntu:
      '14': /usr/bin/python
      '16': /usr/bin/python3
  version_added: "2.8"
  # FUTURE: add inventory override once we're sure it can't be abused by a rogue target
  # FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
  name: Ordered list of Python interpreters to check for in discovery
  default:
    - /usr/bin/python
    - python3.9
    - python3.8
    - python3.7
    - python3.6
    - python3.5
    - python2.7
    - python2.6
    - /usr/libexec/platform-python
    - /usr/bin/python3
    - python
  # FUTURE: add inventory override once we're sure it can't be abused by a rogue target
  version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
  name: Transform invalid characters in group names
  default: 'never'
  description:
    - Make ansible transform invalid characters in group names supplied by inventory sources.
    - If 'never' it will allow for the group name but warn about the issue.
    - When 'ignore', it does the same as 'never', without issuing a warning.
    - When 'always' it will replace any invalid characters with '_' (underscore) and warn the user.
    - When 'silently', it does the same as 'always', without issuing a warning.
  env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
  ini:
    - {key: force_valid_group_names, section: defaults}
  type: string
  choices: ['always', 'never', 'ignore', 'silently']
  version_added: '2.8'
INVALID_TASK_ATTRIBUTE_FAILED:
  name: Controls whether invalid attributes for a task result in errors instead of warnings
  default: True
  description: If 'false', invalid attributes for a task will result in warnings instead of errors
  type: boolean
  env:
    - name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
  ini:
    - key: invalid_task_attribute_failed
      section: defaults
  version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
  name: Controls whether any unparseable inventory source is a fatal error
  default: False
  description: >
    If 'true', it is a fatal error when any given inventory source
    cannot be successfully parsed by any available inventory plugin;
    otherwise, this situation only attracts a warning.
  type: boolean
  env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
  ini:
    - {key: any_unparsed_is_failed, section: inventory}
  version_added: "2.7"
INVENTORY_CACHE_ENABLED:
  name: Inventory caching enabled
  default: False
  description:
    - Toggle to turn on inventory caching.
    - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
    - The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
    - This message will be removed in 2.16.
  env: [{name: ANSIBLE_INVENTORY_CACHE}]
  ini:
    - {key: cache, section: inventory}
  type: bool
INVENTORY_CACHE_PLUGIN:
  name: Inventory cache plugin
  description:
    - The plugin for caching inventory.
    - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
    - The existing configuration settings are still accepted with the inventory plugin adding additional options
      from inventory and fact cache configuration.
    - This message will be removed in 2.16.
  env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
  ini:
    - {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
  name: Inventory cache plugin URI to override the defaults section
  description:
    - The inventory cache connection.
    - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
    - The existing configuration settings are still accepted with the inventory plugin adding additional options
      from inventory and fact cache configuration.
    - This message will be removed in 2.16.
  env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
  ini:
    - {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
  name: Inventory cache plugin table prefix
  description:
    - The table prefix for the cache plugin.
    - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
    - The existing configuration settings are still accepted with the inventory plugin adding additional options
      from inventory and fact cache configuration.
    - This message will be removed in 2.16.
  env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
  default: ansible_inventory_
  ini:
    - {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
  name: Inventory cache plugin expiration timeout
  description:
    - Expiration timeout for the inventory cache plugin data.
    - This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
    - The existing configuration settings are still accepted with the inventory plugin adding additional options
      from inventory and fact cache configuration.
    - This message will be removed in 2.16.
  default: 3600
  env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
  ini:
    - {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
  name: Active Inventory plugins
  default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
  description: List of enabled inventory plugins, it also determines the order in which they are used.
  env: [{name: ANSIBLE_INVENTORY_ENABLED}]
  ini:
    - {key: enable_plugins, section: inventory}
  type: list
INVENTORY_EXPORT:
  name: Set ansible-inventory into export mode
  default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or one optimized for exporting.
  env: [{name: ANSIBLE_INVENTORY_EXPORT}]
  ini:
    - {key: export, section: inventory}
  type: bool
INVENTORY_IGNORE_EXTS:
  name: Inventory ignore extensions
  default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
  description: List of extensions to ignore when using a directory as an inventory source
  env: [{name: ANSIBLE_INVENTORY_IGNORE}]
  ini:
    - {key: inventory_ignore_extensions, section: defaults}
    - {key: ignore_extensions, section: inventory}
  type: list
INVENTORY_IGNORE_PATTERNS:
  name: Inventory ignore patterns
  default: []
  description: List of patterns to ignore when using a directory as an inventory source
  env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
  ini:
    - {key: inventory_ignore_patterns, section: defaults}
    - {key: ignore_patterns, section: inventory}
  type: list
INVENTORY_UNPARSED_IS_FAILED:
  name: Unparsed Inventory failure
  default: False
  description: >
    If 'true' it is a fatal error if every single potential inventory
    source fails to parse, otherwise this situation will only attract a
    warning.
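  # Illustrative sketch only: a stricter inventory section in ansible.cfg, limiting
  # the enabled plugins and failing when nothing parses (values are examples):
  #   [inventory]
  #   enable_plugins = yaml, ini
  #   unparsed_is_failed = True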
  env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
  ini:
    - {key: unparsed_is_failed, section: inventory}
  type: bool
MAX_FILE_SIZE_FOR_DIFF:
  name: Diff maximum file size
  default: 104448
  description: Maximum size of files to be considered for diff display
  env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
  ini:
    - {key: max_diff_size, section: defaults}
  type: int
NETWORK_GROUP_MODULES:
  name: Network module families
  default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
  description: 'TODO: write it'
  env:
    - name: NETWORK_GROUP_MODULES
      deprecated:
        why: environment variables without ``ANSIBLE_`` prefix are deprecated
        version: "2.12"
        alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
    - name: ANSIBLE_NETWORK_GROUP_MODULES
  ini:
    - {key: network_group_modules, section: defaults}
  type: list
  yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
  default: True
  description:
    - Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
    - Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
  env: [{name: ANSIBLE_INJECT_FACT_VARS}]
  ini:
    - {key: inject_facts_as_vars, section: defaults}
  type: boolean
  version_added: "2.5"
MODULE_IGNORE_EXTS:
  name: Module ignore extensions
  default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
  description:
    - List of extensions to ignore when looking for modules to load
    - This is for rejecting script and binary module fallback extensions
  env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
  ini:
    - {key: module_ignore_exts, section: defaults}
  type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles;
    this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'.
    This setting allows you to return to that behaviour.
  env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
  ini:
    - {key: old_plugin_cache_clear, section: defaults}
  type: boolean
  default: False
  version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
  # TODO: move to plugin
  default: False
  description: 'TODO: write it'
  env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
  ini:
    - {key: host_key_auto_add, section: paramiko_connection}
  type: boolean
PARAMIKO_LOOK_FOR_KEYS:
  name: look for keys
  default: True
  description: 'TODO: write it'
  env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
  ini:
    - {key: look_for_keys, section: paramiko_connection}
  type: boolean
PERSISTENT_CONTROL_PATH_DIR:
  name: Persistence socket path
  default: ~/.ansible/pc
  description: Path to socket to be used by the connection persistence system.
  env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
  ini:
    - {key: control_path_dir, section: persistent_connection}
  type: path
PERSISTENT_CONNECT_TIMEOUT:
  name: Persistence timeout
  default: 30
  description: This controls how long the persistent connection will remain idle before it is destroyed.
  env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
  ini:
    - {key: connect_timeout, section: persistent_connection}
  type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
  name: Persistence connection retry timeout
  default: 15
  description: This controls the retry timeout for persistent connection to connect to the local domain socket.
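  # Illustrative sketch only: the persistent connection timeouts above are usually
  # tuned together, e.g. for slow network devices (values are examples):
  #   [persistent_connection]
  #   connect_timeout = 60
  #   command_timeout = 60
  #   connect_retry_timeout = 30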
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}] ini: - {key: connect_retry_timeout, section: persistent_connection} type: integer PERSISTENT_COMMAND_TIMEOUT: name: Persistence command timeout default: 30 description: This controls the amount of time to wait for response from remote device before timing out persistent connection. env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}] ini: - {key: command_timeout, section: persistent_connection} type: int PLAYBOOK_DIR: name: playbook dir override for non-playbook CLIs (ala --playbook-dir) version_added: "2.9" description: - A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it. env: [{name: ANSIBLE_PLAYBOOK_DIR}] ini: [{key: playbook_dir, section: defaults}] type: path PLAYBOOK_VARS_ROOT: name: playbook vars files root default: top version_added: "2.4.1" description: - This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars - The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory. - The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory. - The ``all`` option examines from the first parent to the current playbook. env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}] ini: - {key: playbook_vars_root, section: defaults} choices: [ top, bottom, all ] PLUGIN_FILTERS_CFG: name: Config file for limiting valid plugins default: null version_added: "2.5.0" description: - "A path to configuration for filtering which plugins installed on the system are allowed to be used." - "See :ref:`plugin_filtering_config` for details of the filter file's format." - " The default is /etc/ansible/plugin_filters.yml" ini: - key: plugin_filters_cfg section: default deprecated: why: specifying "plugin_filters_cfg" under the "default" section is deprecated version: "2.12" alternatives: the "defaults" section instead - key: plugin_filters_cfg section: defaults type: path PYTHON_MODULE_RLIMIT_NOFILE: name: Adjust maximum file descriptor soft limit during Python module execution description: - Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default value of 0 does not attempt to adjust existing system-defined limits. default: 0 env: - {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE} ini: - {key: python_module_rlimit_nofile, section: defaults} vars: - {name: ansible_python_module_rlimit_nofile} version_added: '2.8' RETRY_FILES_ENABLED: name: Retry files default: False description: This controls whether a failed Ansible playbook should create a .retry file. env: [{name: ANSIBLE_RETRY_FILES_ENABLED}] ini: - {key: retry_files_enabled, section: defaults} type: bool RETRY_FILES_SAVE_PATH: name: Retry files path default: ~ description: - This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled. - This file will be overwritten after each run with the list of failed hosts from all plays. env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}] ini: - {key: retry_files_save_path, section: defaults} type: path RUN_VARS_PLUGINS: name: When should vars plugins run relative to inventory default: demand description: - This setting can be used to optimize vars_plugin usage depending on user's inventory size and play selection. 
- Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks. - Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source. env: [{name: ANSIBLE_RUN_VARS_PLUGINS}] ini: - {key: run_vars_plugins, section: defaults} type: str choices: ['demand', 'start'] version_added: "2.10" SHOW_CUSTOM_STATS: name: Display custom stats default: False description: 'This adds the custom stats set via the set_stats plugin to the default output' env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}] ini: - {key: show_custom_stats, section: defaults} type: bool STRING_TYPE_FILTERS: name: Filters to preserve strings default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json] description: - "This list of filters avoids 'type conversion' when templating variables" - Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example. env: [{name: ANSIBLE_STRING_TYPE_FILTERS}] ini: - {key: dont_type_filters, section: jinja2} type: list SYSTEM_WARNINGS: name: System warnings default: True description: - Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts) - These may include warnings about 3rd party packages or other conditions that should be resolved if possible. env: [{name: ANSIBLE_SYSTEM_WARNINGS}] ini: - {key: system_warnings, section: defaults} type: boolean TAGS_RUN: name: Run Tags default: [] type: list description: default list of tags to run in your plays, Skip Tags has precedence. env: [{name: ANSIBLE_RUN_TAGS}] ini: - {key: run, section: tags} version_added: "2.5" TAGS_SKIP: name: Skip Tags default: [] type: list description: default list of tags to skip in your plays, has precedence over Run Tags env: [{name: ANSIBLE_SKIP_TAGS}] ini: - {key: skip, section: tags} version_added: "2.5" TASK_TIMEOUT: name: Task Timeout default: 0 description: - Set the maximum time (in seconds) that a task can run for. - If set to 0 (the default) there is no timeout. env: [{name: ANSIBLE_TASK_TIMEOUT}] ini: - {key: task_timeout, section: defaults} type: integer version_added: '2.10' WORKER_SHUTDOWN_POLL_COUNT: name: Worker Shutdown Poll Count default: 0 description: - The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly. - After this limit is reached any worker processes still running will be terminated. - This is for internal use only. env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}] type: integer version_added: '2.10' WORKER_SHUTDOWN_POLL_DELAY: name: Worker Shutdown Poll Delay default: 0.1 description: - The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly. - This is for internal use only. env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}] type: float version_added: '2.10' USE_PERSISTENT_CONNECTIONS: name: Persistence default: False description: Toggles the use of persistence for connections. env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}] ini: - {key: use_persistent_connections, section: defaults} type: boolean VARIABLE_PLUGINS_ENABLED: name: Vars plugin enabled list default: ['host_group_vars'] description: Whitelist for variable plugins that require it. 
env: [{name: ANSIBLE_VARS_ENABLED}] ini: - {key: vars_plugins_enabled, section: defaults} type: list version_added: "2.10" VARIABLE_PRECEDENCE: name: Group variable precedence default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play'] description: Allows to change the group variable precedence merge order. env: [{name: ANSIBLE_PRECEDENCE}] ini: - {key: precedence, section: defaults} type: list version_added: "2.4" WIN_ASYNC_STARTUP_TIMEOUT: name: Windows Async Startup Timeout default: 5 description: - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load. - This is not the total time an async command can run for, but is a separate timeout to wait for an async command to start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the overall maximum duration the task can take will be extended by the amount specified here. env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}] ini: - {key: win_async_startup_timeout, section: defaults} type: integer vars: - {name: ansible_win_async_startup_timeout} version_added: '2.10' YAML_FILENAME_EXTENSIONS: name: Valid YAML extensions default: [".yml", ".yaml", ".json"] description: - "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these." - 'This affects vars_files, include_vars, inventory and vars plugins among others.' env: - name: ANSIBLE_YAML_FILENAME_EXT ini: - section: defaults key: yaml_valid_extensions type: list NETCONF_SSH_CONFIG: description: This variable is used to enable bastion/jump host with netconf connection. If set to True the bastion/jump host ssh settings should be present in ~/.ssh/config file, alternatively it can be set to custom ssh configuration file path to read the bastion/jump host settings. env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}] ini: - {key: ssh_config, section: netconf_connection} yaml: {key: netconf_connection.ssh_config} default: null STRING_CONVERSION_ACTION: version_added: '2.8' description: - Action to take when a module parameter value is converted to a string (this does not affect variables). For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc. will be converted by the YAML parser unless fully quoted. - Valid options are 'error', 'warn', and 'ignore'. - Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12. default: 'warn' env: - name: ANSIBLE_STRING_CONVERSION_ACTION ini: - section: defaults key: string_conversion_action type: string VERBOSE_TO_STDERR: version_added: '2.8' description: - Force 'verbose' option to use stderr instead of stdout default: False env: - name: ANSIBLE_VERBOSE_TO_STDERR ini: - section: defaults key: verbose_to_stderr type: bool ...
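Each entry above maps one setting to an environment variable, one or more ini keys, and a default, with the environment variable taking precedence over the ini file. The following is a minimal sketch of that lookup order; `resolve_setting` is a hypothetical helper written for illustration, not Ansible's actual ConfigManager:

```python
# Hypothetical sketch of the env-over-ini-over-default resolution the
# definitions above describe. Not Ansible's real config loader.
import os
import configparser


def resolve_setting(env_name, ini_section, ini_key, default, cfg_path="ansible.cfg"):
    """Return the first value found: env var, then ini file, then default."""
    if env_name in os.environ:
        return os.environ[env_name]
    parser = configparser.ConfigParser()
    parser.read(cfg_path)  # read() silently skips a missing file
    if parser.has_option(ini_section, ini_key):
        return parser.get(ini_section, ini_key)
    return default


if __name__ == "__main__":
    # Mirrors the INVENTORY_ENABLED entry: ANSIBLE_INVENTORY_ENABLED
    # overrides [inventory] enable_plugins, which overrides the default.
    print(resolve_setting("ANSIBLE_INVENTORY_ENABLED", "inventory", "enable_plugins",
                          default="host_list,script,auto,yaml,ini,toml"))
```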
closed
ansible/ansible
https://github.com/ansible/ansible
74135
helpers contains deprecated call to be removed in 2.12
##### SUMMARY
helpers contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
https://github.com/ansible/ansible/issues/74135
https://github.com/ansible/ansible/pull/74809
27f61db86b69743181529dd6ee34951b244e075e
d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
2021-04-05T20:33:57Z
python
2021-05-25T15:35:17Z
lib/ansible/playbook/helpers.py
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os

from ansible import constants as C
from ansible.errors import AnsibleParserError, AnsibleUndefinedVariable, AnsibleFileNotFound, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.module_utils.six import string_types
from ansible.parsing.mod_args import ModuleArgsParser
from ansible.utils.display import Display

display = Display()


def load_list_of_blocks(ds, play, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
    '''
    Given a list of mixed task/block data (parsed from YAML),
    return a list of Block() objects, where implicit blocks
    are created for each bare Task.
    '''

    # we import here to prevent a circular dependency with imports
    from ansible.playbook.block import Block

    if not isinstance(ds, (list, type(None))):
        raise AnsibleAssertionError('%s should be a list or None but is %s' % (ds, type(ds)))

    block_list = []
    if ds:
        count = iter(range(len(ds)))
        for i in count:
            block_ds = ds[i]
            # Implicit blocks are created by bare tasks listed in a play without
            # an explicit block statement. If we have two implicit blocks in a row,
            # squash them down to a single block to save processing time later.
            implicit_blocks = []
            while block_ds is not None and not Block.is_block(block_ds):
                implicit_blocks.append(block_ds)
                i += 1
                # Advance the iterator, so we don't repeat
                next(count, None)
                try:
                    block_ds = ds[i]
                except IndexError:
                    block_ds = None

            # Loop both implicit blocks and block_ds as block_ds is the next in the list
            for b in (implicit_blocks, block_ds):
                if b:
                    block_list.append(
                        Block.load(
                            b,
                            play=play,
                            parent_block=parent_block,
                            role=role,
                            task_include=task_include,
                            use_handlers=use_handlers,
                            variable_manager=variable_manager,
                            loader=loader,
                        )
                    )

    return block_list


def load_list_of_tasks(ds, play, block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
    '''
    Given a list of task datastructures (parsed from YAML),
    return a list of Task() or TaskInclude() objects.
    '''

    # we import here to prevent a circular dependency with imports
    from ansible.playbook.block import Block
    from ansible.playbook.handler import Handler
    from ansible.playbook.task import Task
    from ansible.playbook.task_include import TaskInclude
    from ansible.playbook.role_include import IncludeRole
    from ansible.playbook.handler_task_include import HandlerTaskInclude
    from ansible.template import Templar

    if not isinstance(ds, list):
        raise AnsibleAssertionError('The ds (%s) should be a list but was a %s' % (ds, type(ds)))

    task_list = []
    for task_ds in ds:
        if not isinstance(task_ds, dict):
            raise AnsibleAssertionError('The ds (%s) should be a dict but was a %s' % (ds, type(ds)))

        if 'block' in task_ds:
            t = Block.load(
                task_ds,
                play=play,
                parent_block=block,
                role=role,
                task_include=task_include,
                use_handlers=use_handlers,
                variable_manager=variable_manager,
                loader=loader,
            )
            task_list.append(t)
        else:
            args_parser = ModuleArgsParser(task_ds)
            try:
                (action, args, delegate_to) = args_parser.parse(skip_action_validation=True)
            except AnsibleParserError as e:
                # if the raised exception was created with obj=ds args, then it includes the detail,
                # so we don't need to add it and can just re-raise.
                if e.obj:
                    raise
                # But if it wasn't, we can add the yaml object now to get more detail
                raise AnsibleParserError(to_native(e), obj=task_ds, orig_exc=e)

            if action in C._ACTION_ALL_INCLUDE_IMPORT_TASKS:

                if use_handlers:
                    include_class = HandlerTaskInclude
                else:
                    include_class = TaskInclude

                t = include_class.load(
                    task_ds,
                    block=block,
                    role=role,
                    task_include=None,
                    variable_manager=variable_manager,
                    loader=loader
                )

                all_vars = variable_manager.get_vars(play=play, task=t)
                templar = Templar(loader=loader, variables=all_vars)

                # check to see if this include is dynamic or static:
                # 1. the user has set the 'static' option to false or true
                # 2. one of the appropriate config options was set
                if action in C._ACTION_INCLUDE_TASKS:
                    is_static = False
                elif action in C._ACTION_IMPORT_TASKS:
                    is_static = True
                elif t.static is not None:
                    display.deprecated("The use of 'static' has been deprecated. "
                                       "Use 'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion",
                                       version='2.12', collection_name='ansible.builtin')
                    is_static = t.static
                else:
                    is_static = C.DEFAULT_TASK_INCLUDES_STATIC or \
                        (use_handlers and C.DEFAULT_HANDLER_INCLUDES_STATIC) or \
                        (not templar.is_template(t.args['_raw_params']) and t.all_parents_static() and not t.loop)

                if is_static:
                    if t.loop is not None:
                        if action in C._ACTION_IMPORT_TASKS:
                            raise AnsibleParserError("You cannot use loops on 'import_tasks' statements. You should use 'include_tasks' instead.", obj=task_ds)
                        else:
                            raise AnsibleParserError("You cannot use 'static' on an include with a loop", obj=task_ds)

                    # we set a flag to indicate this include was static
                    t.statically_loaded = True

                    # handle relative includes by walking up the list of parent include
                    # tasks and checking the relative result to see if it exists
                    parent_include = block
                    cumulative_path = None

                    found = False
                    subdir = 'tasks'
                    if use_handlers:
                        subdir = 'handlers'
                    while parent_include is not None:
                        if not isinstance(parent_include, TaskInclude):
                            parent_include = parent_include._parent
                            continue
                        try:
                            parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
                        except AnsibleUndefinedVariable as e:
                            if not parent_include.statically_loaded:
                                raise AnsibleParserError(
                                    "Error when evaluating variable in dynamic parent include path: %s. "
                                    "When using static imports, the parent dynamic include cannot utilize host facts "
                                    "or variables from inventory" % parent_include.args.get('_raw_params'),
                                    obj=task_ds,
                                    suppress_extended_error=True,
                                    orig_exc=e
                                )
                            raise
                        if cumulative_path is None:
                            cumulative_path = parent_include_dir
                        elif not os.path.isabs(cumulative_path):
                            cumulative_path = os.path.join(parent_include_dir, cumulative_path)
                        include_target = templar.template(t.args['_raw_params'])
                        if t._role:
                            new_basedir = os.path.join(t._role._role_path, subdir, cumulative_path)
                            include_file = loader.path_dwim_relative(new_basedir, subdir, include_target)
                        else:
                            include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)

                        if os.path.exists(include_file):
                            found = True
                            break
                        else:
                            parent_include = parent_include._parent

                    if not found:
                        try:
                            include_target = templar.template(t.args['_raw_params'])
                        except AnsibleUndefinedVariable as e:
                            raise AnsibleParserError(
                                "Error when evaluating variable in import path: %s.\n\n"
                                "When using static imports, ensure that any variables used in their names are defined in vars/vars_files\n"
                                "or extra-vars passed in from the command line. Static imports cannot use variables from facts or inventory\n"
                                "sources like group or host vars." % t.args['_raw_params'],
                                obj=task_ds,
                                suppress_extended_error=True,
                                orig_exc=e)
                        if t._role:
                            include_file = loader.path_dwim_relative(t._role._role_path, subdir, include_target)
                        else:
                            include_file = loader.path_dwim(include_target)

                    try:
                        data = loader.load_from_file(include_file)
                        if not data:
                            display.warning('file %s is empty and had no tasks to include' % include_file)
                            continue
                        elif not isinstance(data, list):
                            raise AnsibleParserError("included task files must contain a list of tasks", obj=data)

                        # since we can't send callbacks here, we display a message directly in
                        # the same fashion used by the on_include callback. We also do it here,
                        # because the recursive nature of helper methods means we may be loading
                        # nested includes, and we want the include order printed correctly
                        display.vv("statically imported: %s" % include_file)
                    except AnsibleFileNotFound:
                        if action not in C._ACTION_INCLUDE or t.static or \
                           C.DEFAULT_TASK_INCLUDES_STATIC or \
                           C.DEFAULT_HANDLER_INCLUDES_STATIC and use_handlers:
                            raise
                        display.deprecated(
                            "Included file '%s' not found, however since this include is not "
                            "explicitly marked as 'static: yes', we will try and include it dynamically "
                            "later. In the future, this will be an error unless 'static: no' is used "
                            "on the include task. If you do not want missing includes to be considered "
                            "dynamic, use 'static: yes' on the include or set the global ansible.cfg "
                            "options to make all includes static for tasks and/or handlers" % include_file,
                            version="2.12", collection_name='ansible.builtin'
                        )
                        task_list.append(t)
                        continue

                    ti_copy = t.copy(exclude_parent=True)
                    ti_copy._parent = block
                    included_blocks = load_list_of_blocks(
                        data,
                        play=play,
                        parent_block=None,
                        task_include=ti_copy,
                        role=role,
                        use_handlers=use_handlers,
                        loader=loader,
                        variable_manager=variable_manager,
                    )

                    # FIXME: remove once 'include' is removed
                    # pop tags out of the include args, if they were specified there, and assign
                    # them to the include. If the include already had tags specified, we raise an
                    # error so that users know not to specify them both ways
                    tags = ti_copy.vars.pop('tags', [])
                    if isinstance(tags, string_types):
                        tags = tags.split(',')

                    if len(tags) > 0:
                        if action in C._ACTION_ALL_PROPER_INCLUDE_IMPORT_TASKS:
                            raise AnsibleParserError('You cannot specify "tags" inline to the task, it is a task keyword')
                        if len(ti_copy.tags) > 0:
                            raise AnsibleParserError(
                                "Include tasks should not specify tags in more than one way (both via args and directly on the task). "
                                "Mixing styles in which tags are specified is prohibited for whole import hierarchy, not only for single import statement",
                                obj=task_ds,
                                suppress_extended_error=True,
                            )
                        display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
                                           version="2.12", collection_name='ansible.builtin')
                    else:
                        tags = ti_copy.tags[:]

                    # now we extend the tags on each of the included blocks
                    for b in included_blocks:
                        b.tags = list(set(b.tags).union(tags))
                    # END FIXME

                    # FIXME: handlers shouldn't need this special handling, but do
                    #        right now because they don't iterate blocks correctly
                    if use_handlers:
                        for b in included_blocks:
                            task_list.extend(b.block)
                    else:
                        task_list.extend(included_blocks)
                else:
                    t.is_static = False
                    task_list.append(t)

            elif action in C._ACTION_ALL_PROPER_INCLUDE_IMPORT_ROLES:
                ir = IncludeRole.load(
                    task_ds,
                    block=block,
                    role=role,
                    task_include=None,
                    variable_manager=variable_manager,
                    loader=loader,
                )

                # 1. the user has set the 'static' option to false or true
                # 2. one of the appropriate config options was set
                is_static = False
                if action in C._ACTION_IMPORT_ROLE:
                    is_static = True

                elif ir.static is not None:
                    display.deprecated("The use of 'static' for 'include_role' has been deprecated. "
                                       "Use 'import_role' for static inclusion, or 'include_role' for dynamic inclusion",
                                       version='2.12', collection_name='ansible.builtin')
                    is_static = ir.static

                if is_static:
                    if ir.loop is not None:
                        if action in C._ACTION_IMPORT_ROLE:
                            raise AnsibleParserError("You cannot use loops on 'import_role' statements. You should use 'include_role' instead.", obj=task_ds)
                        else:
                            raise AnsibleParserError("You cannot use 'static' on an include_role with a loop", obj=task_ds)

                    # we set a flag to indicate this include was static
                    ir.statically_loaded = True

                    # template the role name now, if needed
                    all_vars = variable_manager.get_vars(play=play, task=ir)
                    templar = Templar(loader=loader, variables=all_vars)
                    ir._role_name = templar.template(ir._role_name)

                    # uses compiled list from object
                    blocks, _ = ir.get_block_list(variable_manager=variable_manager, loader=loader)
                    task_list.extend(blocks)
                else:
                    # pass the task object itself for later generation of the task list
                    task_list.append(ir)

            else:
                if use_handlers:
                    t = Handler.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
                else:
                    t = Task.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)

                task_list.append(t)

    return task_list


def load_list_of_roles(ds, play, current_role_path=None, variable_manager=None, loader=None, collection_search_list=None):
    """
    Loads and returns a list of RoleInclude objects from the ds list of role definitions
    :param ds: list of roles to load
    :param play: calling Play object
    :param current_role_path: path of the owning role, if any
    :param variable_manager: varmgr to use for templating
    :param loader: loader to use for DS parsing/services
    :param collection_search_list: list of collections to search for unqualified role names
    :return:
    """
    # we import here to prevent a circular dependency with imports
    from ansible.playbook.role.include import RoleInclude

    if not isinstance(ds, list):
        raise AnsibleAssertionError('ds (%s) should be a list but was a %s' % (ds, type(ds)))

    roles = []
    for role_def in ds:
        i = RoleInclude.load(role_def,
                             play=play,
                             current_role_path=current_role_path,
                             variable_manager=variable_manager,
                             loader=loader,
                             collection_list=collection_search_list)
        roles.append(i)

    return roles
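The implicit-block squashing in `load_list_of_blocks` above relies on a subtle pattern: iterating over `iter(range(len(ds)))` while also calling `next(count, None)` inside the loop body lets one outer iteration consume a run of consecutive items. A standalone sketch of just that trick, with hypothetical names (`squash_runs`, `is_boundary`) chosen for illustration:

```python
# Standalone demo of the iterator-advance pattern used in load_list_of_blocks:
# the inner while loop eats consecutive non-boundary items (bare tasks) and
# next(count, None) keeps the outer for loop from revisiting them.
def squash_runs(items, is_boundary):
    """Group consecutive non-boundary items together; boundaries stand alone."""
    groups = []
    count = iter(range(len(items)))
    for i in count:
        item = items[i]
        run = []
        while item is not None and not is_boundary(item):
            run.append(item)
            i += 1
            next(count, None)  # advance the outer iterator so we don't repeat
            item = items[i] if i < len(items) else None
        for part in (run, item):
            if part:
                groups.append(part)
    return groups


print(squash_runs(['t1', 't2', 'B1', 't3', 'B2'], lambda x: x.startswith('B')))
# [['t1', 't2'], 'B1', ['t3'], 'B2']
```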
closed
ansible/ansible
https://github.com/ansible/ansible
74135
helpers contains deprecated call to be removed in 2.12
##### SUMMARY
helpers contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
https://github.com/ansible/ansible/issues/74135
https://github.com/ansible/ansible/pull/74809
27f61db86b69743181529dd6ee34951b244e075e
d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
2021-04-05T20:33:57Z
python
2021-05-25T15:35:17Z
lib/ansible/playbook/included_file.py
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role_include import IncludeRole
from ansible.template import Templar
from ansible.utils.display import Display

display = Display()


class IncludedFile:

    def __init__(self, filename, args, vars, task, is_role=False):
        self._filename = filename
        self._args = args
        self._vars = vars
        self._task = task
        self._hosts = []
        self._is_role = is_role

    def add_host(self, host):
        if host not in self._hosts:
            self._hosts.append(host)
            return

        raise ValueError()

    def __eq__(self, other):
        return (other._filename == self._filename and
                other._args == self._args and
                other._vars == self._vars and
                other._task._uuid == self._task._uuid and
                other._task._parent._uuid == self._task._parent._uuid)

    def __repr__(self):
        return "%s (args=%s vars=%s): %s" % (self._filename, self._args, self._vars, self._hosts)

    @staticmethod
    def process_include_results(results, iterator, loader, variable_manager):
        included_files = []
        task_vars_cache = {}

        for res in results:

            original_host = res._host
            original_task = res._task

            if original_task.action in C._ACTION_ALL_INCLUDES:
                if original_task.action in C._ACTION_INCLUDE:
                    display.deprecated('"include" is deprecated, use include_tasks/import_tasks/import_playbook instead', "2.16")

                if original_task.loop:
                    if 'results' not in res._result:
                        continue
                    include_results = res._result['results']
                else:
                    include_results = [res._result]

                for include_result in include_results:
                    # if the task result was skipped or failed, continue
                    if 'skipped' in include_result and include_result['skipped'] or 'failed' in include_result and include_result['failed']:
                        continue

                    cache_key = (iterator._play, original_host, original_task)
                    try:
                        task_vars = task_vars_cache[cache_key]
                    except KeyError:
                        task_vars = task_vars_cache[cache_key] = variable_manager.get_vars(play=iterator._play, host=original_host, task=original_task)

                    include_args = include_result.get('include_args', dict())
                    special_vars = {}
                    loop_var = include_result.get('ansible_loop_var', 'item')
                    index_var = include_result.get('ansible_index_var')
                    if loop_var in include_result:
                        task_vars[loop_var] = special_vars[loop_var] = include_result[loop_var]
                    if index_var and index_var in include_result:
                        task_vars[index_var] = special_vars[index_var] = include_result[index_var]
                    if '_ansible_item_label' in include_result:
                        task_vars['_ansible_item_label'] = special_vars['_ansible_item_label'] = include_result['_ansible_item_label']
                    if 'ansible_loop' in include_result:
                        task_vars['ansible_loop'] = special_vars['ansible_loop'] = include_result['ansible_loop']
                    if original_task.no_log and '_ansible_no_log' not in include_args:
                        task_vars['_ansible_no_log'] = special_vars['_ansible_no_log'] = original_task.no_log

                    # get search path for this task to pass to lookup plugins that may be used in pathing to
                    # the included file
                    task_vars['ansible_search_path'] = original_task.get_search_path()

                    # ensure basedir is always in (dwim already searches here but we need to display it)
                    if loader.get_basedir() not in task_vars['ansible_search_path']:
                        task_vars['ansible_search_path'].append(loader.get_basedir())

                    templar = Templar(loader=loader, variables=task_vars)

                    if original_task.action in C._ACTION_ALL_INCLUDE_TASKS:
                        include_file = None
                        if original_task.static:
                            continue

                        if original_task._parent:
                            # handle relative includes by walking up the list of parent include
                            # tasks and checking the relative result to see if it exists
                            parent_include = original_task._parent
                            cumulative_path = None
                            while parent_include is not None:
                                if not isinstance(parent_include, TaskInclude):
                                    parent_include = parent_include._parent
                                    continue
                                if isinstance(parent_include, IncludeRole):
                                    parent_include_dir = parent_include._role_path
                                else:
                                    try:
                                        parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
                                    except AnsibleError as e:
                                        parent_include_dir = ''
                                        display.warning(
                                            'Templating the path of the parent %s failed. The path to the '
                                            'included file may not be found. '
                                            'The error was: %s.' % (original_task.action, to_text(e))
                                        )
                                if cumulative_path is not None and not os.path.isabs(cumulative_path):
                                    cumulative_path = os.path.join(parent_include_dir, cumulative_path)
                                else:
                                    cumulative_path = parent_include_dir
                                include_target = templar.template(include_result['include'])
                                if original_task._role:
                                    new_basedir = os.path.join(original_task._role._role_path, 'tasks', cumulative_path)
                                    candidates = [loader.path_dwim_relative(original_task._role._role_path, 'tasks', include_target),
                                                  loader.path_dwim_relative(new_basedir, 'tasks', include_target)]
                                    for include_file in candidates:
                                        try:
                                            # may throw OSError
                                            os.stat(include_file)
                                            # or select the task file if it exists
                                            break
                                        except OSError:
                                            pass
                                else:
                                    include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)

                                if os.path.exists(include_file):
                                    break
                                else:
                                    parent_include = parent_include._parent

                        if include_file is None:
                            if original_task._role:
                                include_target = templar.template(include_result['include'])
                                include_file = loader.path_dwim_relative(
                                    original_task._role._role_path,
                                    'handlers' if isinstance(original_task, Handler) else 'tasks',
                                    include_target,
                                    is_role=True)
                            else:
                                include_file = loader.path_dwim(include_result['include'])

                        include_file = templar.template(include_file)
                        inc_file = IncludedFile(include_file, include_args, special_vars, original_task)
                    else:
                        # template the included role's name here
                        role_name = include_args.pop('name', include_args.pop('role', None))
                        if role_name is not None:
                            role_name = templar.template(role_name)

                        new_task = original_task.copy()
                        new_task._role_name = role_name
                        for from_arg in new_task.FROM_ARGS:
                            if from_arg in include_args:
                                from_key = from_arg.replace('_from', '')
                                new_task._from_files[from_key] = templar.template(include_args.pop(from_arg))

                        inc_file = IncludedFile(role_name, include_args, special_vars, new_task, is_role=True)

                    idx = 0
                    orig_inc_file = inc_file
                    while 1:
                        try:
                            pos = included_files[idx:].index(orig_inc_file)
                            # pos is relative to idx since we are slicing
                            # use idx + pos due to relative indexing
                            inc_file = included_files[idx + pos]
                        except ValueError:
                            included_files.append(orig_inc_file)
                            inc_file = orig_inc_file

                        try:
                            inc_file.add_host(original_host)
                        except ValueError:
                            # The host already exists for this include, advance forward, this is a new include
                            idx += pos + 1
                        else:
                            break

        return included_files
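The de-duplication loop at the end of `process_include_results` is easy to misread: equal `IncludedFile` entries are merged, and `add_host()` raising `ValueError` for an already-recorded host is the signal to keep scanning (or append a fresh entry) so a host that hits the same include twice gets two entries. A standalone sketch of that pattern, with hypothetical names (`Entry`, `record`) chosen for illustration:

```python
# Standalone demo of the merge-or-append loop in process_include_results:
# equal entries are merged per host; a repeat hit by the same host forces a
# new entry further down the list.
class Entry:
    def __init__(self, key):
        self.key = key
        self.hosts = []

    def __eq__(self, other):
        return other.key == self.key

    def add_host(self, host):
        if host not in self.hosts:
            self.hosts.append(host)
            return
        raise ValueError()


def record(entries, key, host):
    new = Entry(key)
    idx = 0
    while True:
        try:
            # scan only past the entries we've already rejected for this host
            pos = entries[idx:].index(new)
            entry = entries[idx + pos]
        except ValueError:
            entries.append(new)
            entry = new
        try:
            entry.add_host(host)
        except ValueError:
            # host already attached here; a duplicate result means a new entry
            idx += pos + 1
        else:
            break


entries = []
for key, host in [('a.yml', 'h1'), ('a.yml', 'h2'), ('a.yml', 'h1')]:
    record(entries, key, host)
print([(e.key, e.hosts) for e in entries])
# [('a.yml', ['h1', 'h2']), ('a.yml', ['h1'])]
```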
closed
ansible/ansible
https://github.com/ansible/ansible
74135
helpers contains deprecated call to be removed in 2.12
##### SUMMARY
helpers contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
https://github.com/ansible/ansible/issues/74135
https://github.com/ansible/ansible/pull/74809
27f61db86b69743181529dd6ee34951b244e075e
d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
2021-04-05T20:33:57Z
python
2021-05-25T15:35:17Z
lib/ansible/playbook/task_include.py
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import ansible.constants as C
from ansible.errors import AnsibleParserError
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.block import Block
from ansible.playbook.task import Task
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel

__all__ = ['TaskInclude']

display = Display()


class TaskInclude(Task):

    """
    A task include is derived from a regular task to handle the special
    circumstances related to the `- include: ...` task.
    """

    BASE = frozenset(('file', '_raw_params'))  # directly assigned
    OTHER_ARGS = frozenset(('apply',))  # assigned to matching property
    VALID_ARGS = BASE.union(OTHER_ARGS)  # all valid args
    VALID_INCLUDE_KEYWORDS = frozenset(('action', 'args', 'collections', 'debugger', 'ignore_errors', 'loop', 'loop_control',
                                        'loop_with', 'name', 'no_log', 'register', 'run_once', 'tags', 'timeout', 'vars',
                                        'when'))

    # =================================================================================
    # ATTRIBUTES

    _static = FieldAttribute(isa='bool', default=None)

    def __init__(self, block=None, role=None, task_include=None):
        super(TaskInclude, self).__init__(block=block, role=role, task_include=task_include)
        self.statically_loaded = False

    @staticmethod
    def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
        ti = TaskInclude(block=block, role=role, task_include=task_include)
        task = ti.check_options(
            ti.load_data(data, variable_manager=variable_manager, loader=loader),
            data
        )

        return task

    def check_options(self, task, data):
        '''
        Method for options validation to use in 'load_data' for TaskInclude
        and HandlerTaskInclude since they share the same validations. It is not
        named 'validate_options' on purpose to prevent confusion with '_validate_*"
        methods. Note that the task passed might be changed as a side-effect of this method.
        '''
        my_arg_names = frozenset(task.args.keys())

        # validate bad args, otherwise we silently ignore
        bad_opts = my_arg_names.difference(self.VALID_ARGS)
        if bad_opts and task.action in C._ACTION_ALL_PROPER_INCLUDE_IMPORT_TASKS:
            raise AnsibleParserError('Invalid options for %s: %s' % (task.action, ','.join(list(bad_opts))), obj=data)

        if not task.args.get('_raw_params'):
            task.args['_raw_params'] = task.args.pop('file', None)
            if not task.args['_raw_params']:
                raise AnsibleParserError('No file specified for %s' % task.action)

        apply_attrs = task.args.get('apply', {})
        if apply_attrs and task.action not in C._ACTION_INCLUDE_TASKS:
            raise AnsibleParserError('Invalid options for %s: apply' % task.action, obj=data)
        elif not isinstance(apply_attrs, dict):
            raise AnsibleParserError('Expected a dict for apply but got %s instead' % type(apply_attrs), obj=data)

        return task

    def preprocess_data(self, ds):
        ds = super(TaskInclude, self).preprocess_data(ds)

        diff = set(ds.keys()).difference(self.VALID_INCLUDE_KEYWORDS)
        for k in diff:
            # This check doesn't handle ``include`` as we have no idea at this point if it is static or not
            if ds[k] is not Sentinel and ds['action'] in C._ACTION_ALL_INCLUDE_ROLE_TASKS:
                if C.INVALID_TASK_ATTRIBUTE_FAILED:
                    raise AnsibleParserError("'%s' is not a valid attribute for a %s" % (k, self.__class__.__name__), obj=ds)
                else:
                    display.warning("Ignoring invalid attribute: %s" % k)

        return ds

    def copy(self, exclude_parent=False, exclude_tasks=False):
        new_me = super(TaskInclude, self).copy(exclude_parent=exclude_parent, exclude_tasks=exclude_tasks)
        new_me.statically_loaded = self.statically_loaded
        return new_me

    def get_vars(self):
        '''
        We override the parent Task() classes get_vars here because
        we need to include the args of the include into the vars as
        they are params to the included tasks. But ONLY for 'include'
        '''
        if self.action not in C._ACTION_INCLUDE:
            all_vars = super(TaskInclude, self).get_vars()
        else:
            all_vars = dict()
            if self._parent:
                all_vars.update(self._parent.get_vars())

            all_vars.update(self.vars)
            all_vars.update(self.args)

            if 'tags' in all_vars:
                del all_vars['tags']
            if 'when' in all_vars:
                del all_vars['when']

        return all_vars

    def build_parent_block(self):
        '''
        This method is used to create the parent block for the included tasks
        when ``apply`` is specified
        '''
        apply_attrs = self.args.pop('apply', {})
        if apply_attrs:
            apply_attrs['block'] = []
            p_block = Block.load(
                apply_attrs,
                play=self._parent._play,
                task_include=self,
                role=self._role,
                variable_manager=self._variable_manager,
                loader=self._loader,
            )
        else:
            p_block = self

        return p_block
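`check_options` above validates include arguments by declaring the legal names once as frozensets and rejecting anything outside that set with a set difference. A minimal standalone sketch of that validation pattern (the `VALID_ARGS` contents mirror the class above; the bare `check_options` function is a simplification for illustration):

```python
# Sketch of the frozenset-based option validation used in
# TaskInclude.check_options: declare valid argument names once, then reject
# anything outside the set with a set difference.
VALID_ARGS = frozenset(('file', '_raw_params', 'apply'))


def check_options(args):
    bad_opts = frozenset(args) - VALID_ARGS
    if bad_opts:
        raise ValueError('Invalid options: %s' % ', '.join(sorted(bad_opts)))
    return args


check_options({'file': 'tasks.yml', 'apply': {'tags': ['x']}})   # fine
try:
    check_options({'file': 'tasks.yml', 'loop_with': 'items'})
except ValueError as e:
    print(e)  # Invalid options: loop_with
```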
closed
ansible/ansible
https://github.com/ansible/ansible
74135
helpers contains deprecated call to be removed in 2.12
##### SUMMARY
helpers contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
https://github.com/ansible/ansible/issues/74135
https://github.com/ansible/ansible/pull/74809
27f61db86b69743181529dd6ee34951b244e075e
d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
2021-04-05T20:33:57Z
python
2021-05-25T15:35:17Z
test/integration/targets/include_import/undefined_var/playbook.yml
---
- hosts: testhost
  gather_facts: false
  tasks:
    - include_tasks: "include_tasks.yml"
      ignore_errors: True
      register: "_include_tasks_result"
      when:
        - "_undefined == 'yes'"

    - assert:
        that:
          - "_include_tasks_result is failed"
          - "_include_tasks_task_result is not defined"
        msg: "'include_tasks' did not evaluate its attached condition and failed"

    - include_role:
        name: "no_log"
      ignore_errors: True
      register: "_include_role_result"
      when:
        - "_undefined == 'yes'"

    - assert:
        that:
          - "_include_role_result is failed"
        msg: "'include_role' did not evaluate its attached condition and failed"

    - include: include_that_defines_var.yml
      static: yes
      when:
        - "_undefined == 'yes'"

    - assert:
        that:
          - _include_defined_result == 'good'
closed
ansible/ansible
https://github.com/ansible/ansible
74135
helpers contains deprecated call to be removed in 2.12
##### SUMMARY
helpers contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
https://github.com/ansible/ansible/issues/74135
https://github.com/ansible/ansible/pull/74809
27f61db86b69743181529dd6ee34951b244e075e
d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
2021-04-05T20:33:57Z
python
2021-05-25T15:35:17Z
test/integration/targets/includes/roles/test_includes/tasks/branch_toplevel.yml
# 'canary2' used instead of 'canary', otherwise a "recursive loop detected in
# template string" occurs when both includes use static=yes
- include: 'leaf_sublevel.yml canary2={{ canary }}'
  static: yes
  when: 'nested_include_static|bool'
# value for 'static' can not be a variable, hence use 'when'

- include: 'leaf_sublevel.yml canary2={{ canary }}'
  static: no
  when: 'not nested_include_static|bool'
closed
ansible/ansible
https://github.com/ansible/ansible
74135
helpers contains deprecated call to be removed in 2.12
##### SUMMARY
helpers contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/playbook/helpers.py:158:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:255:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:298:24: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
lib/ansible/playbook/helpers.py:336:20: ansible-deprecated-version: Deprecated version ('2.12') found in call to Display.deprecated or AnsibleModule.deprecate (0%)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/playbook/helpers.py
```
##### ANSIBLE VERSION
```
2.12
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
https://github.com/ansible/ansible/issues/74135
https://github.com/ansible/ansible/pull/74809
27f61db86b69743181529dd6ee34951b244e075e
d27ce4cef30b87defaccdaaa0039ee18a3f4cce2
2021-04-05T20:33:57Z
python
2021-05-25T15:35:17Z
test/integration/targets/includes/roles/test_includes/tasks/main.yml
# test code for the ping module
# (c) 2014, James Cammarata <[email protected]>

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

- include: included_task1.yml a=1 b=2 c=3

- name: verify non-variable include params
  assert:
    that:
      - "ca == '1'"
      - "cb == '2'"
      - "cc == '3'"

- set_fact:
    a: 101
    b: 102
    c: 103

- include: included_task1.yml a={{a}} b={{b}} c=103

- name: verify variable include params
  assert:
    that:
      - "ca == 101"
      - "cb == 102"
      - "cc == 103"

# Test that strings are not turned into numbers
- set_fact:
    a: "101"
    b: "102"
    c: "103"

- include: included_task1.yml a={{a}} b={{b}} c=103

- name: verify variable include params
  assert:
    that:
      - "ca == '101'"
      - "cb == '102'"
      - "cc == '103'"

# now try long form includes

- include: included_task1.yml
  vars:
    a: 201
    b: 202
    c: 203

- debug: var=a
- debug: var=b
- debug: var=c

- name: verify long-form include params
  assert:
    that:
      - "ca == 201"
      - "cb == 202"
      - "cc == 203"

- name: test handlers with includes
  shell: echo 1
  notify:
    # both these via a handler include
    - included_handler
    - verify_handler

- include: branch_toplevel.yml canary=value1 nested_include_static=no
  static: no
- assert:
    that:
      - 'canary_fact == "value1"'

- include: branch_toplevel.yml canary=value2 nested_include_static=yes
  static: no
- assert:
    that:
      - 'canary_fact == "value2"'

- include: branch_toplevel.yml canary=value3 nested_include_static=no
  static: yes
- assert:
    that:
      - 'canary_fact == "value3"'

- include: branch_toplevel.yml canary=value4 nested_include_static=yes
  static: yes
- assert:
    that:
      - 'canary_fact == "value4"'
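The assertions in this test hinge on how `k=v` include parameters are parsed: every value arrives as a string, which is why `ca == '1'` holds for `a=1` while the long-form `vars:` dictionary keeps integer types. A rough sketch of that split-on-equals behaviour, as a simplified stand-in for Ansible's real parser (which also handles quoting and escapes):

```python
# Simplified stand-in for Ansible's k=v parameter parsing: values are kept as
# strings and never coerced to int, which is why the assertions above compare
# against '1' rather than 1.
def parse_kv(raw):
    params = {}
    for token in raw.split():
        key, sep, value = token.partition('=')
        if sep:
            params[key] = value  # always a string
    return params


print(parse_kv('a=1 b=2 c=3'))  # {'a': '1', 'b': '2', 'c': '3'}
```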